
Student Guide for

Operating and Managing Hitachi


Content Platform v8.x

TCI2743

Courseware Version 2.0


Hitachi Vantara

Corporate Headquarters
2845 Lafayette Street
Santa Clara, CA 95050-2639 USA
www.HitachiVantara.com | community.HitachiVantara.com

Regional Contact Information
Americas: +1 866 374 5822 or info@hitachivantara.com
Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@hitachivantara.com
Asia Pacific: +852 3189 7900 or info.marketing.apac@hitachivantara.com

© Hitachi Vantara Corporation 2018. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd., Content Platform Anywhere and Hi-Track
are trademarks or registered trademarks of Hitachi Vantara Corporation. All other trademarks, service marks and company names are properties of their respective
owners.

ii
Table of Contents
Introduction ............................................................................................................. xiii
Welcome and Introductions ..................................................................................................................... xiii
Course Description ................................................................................................................................. xiv
Prerequisites .......................................................................................................................................... xiv
Course Objectives ................................................................................................................................... xv
Course Topics ......................................................................................................................................... xv
Learning Paths ....................................................................................................................................... xvi
Resources: Product Documents .............................................................................................................. xvii
Collaborate and Share ...........................................................................................................................xviii
Social Networking – Twitter Site .............................................................................................. xix
Hitachi Self-Paced Learning Library ........................................................................................................... xx

1. Overview........................................................................................................... 1-1
Module Objectives ................................................................................................................................. 1-1
HCP: Object-Based Storage .................................................................................................................... 1-2
Hitachi Content Platform Basics ......................................................................................... 1-2
What Is an HCP Object? .................................................................................................................... 1-4
Multiple Custom Metadata Injection ................................................................................................... 1-5
Internal Object Representation .......................................................................................................... 1-6
How Users and Applications View Objects ........................................................................................... 1-7
Hitachi Content Platform Evolution ..................................................................................................... 1-8
Introduction to Tenants and Namespaces ........................................................................................... 1-9
Swift: Another Way to Use Your Storage Pool ................................................................................... 1-10
HCP Configurations.............................................................................................................................. 1-10
Unified HCP G10 Platform................................................................................................................ 1-11
HCP G10 With Local Storage............................................................................................................ 1-12
HCP G10 With Attached Storage ...................................................................................................... 1-13
HCP G10 SSD Performance Option ................................................................................................... 1-14
HCP S Node ................................................................................................................................... 1-15
HCP S10 ........................................................................................................................................ 1-16
HCP S30 ........................................................................................................................................ 1-17
HCP S Node ................................................................................................................................... 1-18
HCP S Series Storage Principles ....................................................................................................... 1-19
RAID Rebuild Principles ................................................................................................................... 1-19

HCP S Series Rebuild Principles ........................................................................................ 1-20
HCP S Series Healing Properties ....................................................................................................... 1-20
Direct Write to HCP S10/S30 ........................................................................................................... 1-21
VMware Edition of HCP ................................................................................................................... 1-22
OpenStack KVM HCP-VM ................................................................................................................. 1-23
Feature Overview ................................................................................................................................ 1-23
Nondisruptive Service ..................................................................................................................... 1-24
HCP Objects – Protected ................................................................................................................. 1-25
HCP Objects – Secured ................................................................................................................... 1-25
Protection Concepts ........................................................................................................................ 1-26
Zero Copy Failover.......................................................................................................................... 1-28
Data Encryption ............................................................................................................................. 1-28
Time Settings Compliance Mode ...................................................................................................... 1-29
Compliance Features ........................................................................................................................... 1-30
Retention Times ............................................................................................................................. 1-30
Regulatory Compliance ................................................................................................................... 1-31
Retention Mode Selection for Tenants .............................................................................................. 1-32
Change Retention Mode for Namespace ........................................................................................... 1-33
Reviewing Retention ....................................................................................................................... 1-34
Default Retention Setting ................................................................................................................ 1-35
Privileged Delete or Purge ............................................................................................................... 1-36
Policies and Services ............................................................................................................................ 1-36
Services ......................................................................................................................................... 1-39
Default Service Schedule ................................................................................................................. 1-39
Service Descriptions........................................................................................................................ 1-40
Geographically Distributed Data Protection ....................................................................................... 1-41
Policy Descriptions .......................................................................................................................... 1-42
Module Summary ................................................................................................................................ 1-43
Module Review .................................................................................................................................... 1-44

2. Hardware Components ..................................................................................... 2-1


Module Objectives ................................................................................................................................. 2-1
HCP Components .................................................................................................................................. 2-2
HCP G10 Common Hardware ............................................................................................................. 2-2
HCP G10 Optional/Future Hardware ................................................................................................... 2-3
HCP G10 Ethernet Networking Options ............................................................................................... 2-4

HCP G10 1/10Gb BASE-T FE/1G BASE-T BE ........................................................................ 2-5
HCP G10 10Gb SFP+ FE/1G BASE-T BE .............................................................................................. 2-5
HCP G10 10Gb BASE-T FE/10G SFP+ BE ............................................................................................ 2-6
HCP G10 10Gb SFP+ FE/10G SFP+ BE ............................................................................................... 2-6
Back-End Ethernet Switches .............................................................................................................. 2-8
Fibre Channel Networking ................................................................................................................. 2-9
Metadata Indexes on SSDs (Optional) ................................................................................................ 2-9
Racked and Rackless ...................................................................................................................... 2-10
HCP S10 Node................................................................................................................................ 2-11
HCP S30 Node – Server Module ....................................................................................................... 2-12
HCP S30 Node – Enclosure Unit ....................................................................................................... 2-14
Module Summary ................................................................................................................................ 2-16
Module Review .................................................................................................................................... 2-16

3. Network Configuration ..................................................................................... 3-1


Module Objectives ................................................................................................................................. 3-1
Network Interfaces ................................................................................................................................ 3-2
Networking ...................................................................................................................................... 3-2
LAN Connections Review ................................................................................................................... 3-3
HCP Connectivity: LAN and Fibre Channel ........................................................................................... 3-3
DNS Configuration ............................................................................................................................ 3-4
DNS Service ..................................................................................................................................... 3-4
Name Resolution .............................................................................................................................. 3-5
Name Resolution – Best Practice ........................................................................................................ 3-6
Shadow Master Functionality ............................................................................................................. 3-7
DNS Notify ....................................................................................................................................... 3-8
VLAN Configuration ............................................................................................................................... 3-9
Virtual LANs (VLANs) ........................................................................................................................ 3-9
HCP Integration With VLANs .............................................................................................................. 3-9
Network Segregation ...................................................................................................................... 3-10
SMC Advanced Settings ................................................................................................................... 3-10
SMC Network Configuration ............................................................................................................. 3-11
Create Network – Step 1: Settings ..................................................................................... 3-11
Create Network – Step 2: IP Configuration ......................................................................... 3-12
Create Network – Step 3: Review ...................................................................................... 3-13
Add Node IP Addresses ................................................................................................................... 3-13

SMC Network View ......................................................................................................... 3-14
SMC Node View .............................................................................................................................. 3-15
Link Aggregation and IPv6 Support .................................................................................................. 3-16
Link Aggregation ................................................................................................................................. 3-16
IPv4 Running Out of Room ............................................................................................. 3-18
IPv6 Support for HCP ...................................................................................................................... 3-18
Authentication With AD ................................................................................................................... 3-19
Support for Active Directory: Introduction.............................................................................................. 3-21
Support for Active Directory: Feature Details ..................................................................................... 3-21
Active Directory: Configuration ........................................................................................................ 3-22
Active Directory: Groups ................................................................................................................. 3-23
Module Summary ................................................................................................................................ 3-24
Module Review .................................................................................................................................... 3-24

4. Administration .................................................................................................. 4-1


Module Objectives ................................................................................................................................. 4-1
HCP Consoles ........................................................................................................................................ 4-2
How to Access HCP GUIs................................................................................................................... 4-2
System Management Console ............................................................................................................ 4-3
Tenant Management Console ............................................................................................................ 4-4
Namespace Browser ......................................................................................................................... 4-5
System Users ........................................................................................................................................ 4-5
User Roles: System Management Console ........................................................................................... 4-6
User Authentication .......................................................................................................................... 4-8
Starter Account ................................................................................................................................ 4-9
Tenant Users ........................................................................................................................................ 4-9
Tenant-Level Administration ............................................................................................................ 4-10
Tenant User Account ...................................................................................................................... 4-11
Tenant User Account Creation ......................................................................................................... 4-11
Data Access Permissions Example .................................................................................................... 4-12
Permission Masks ................................................................................................................................ 4-13
Permissions Classifications............................................................................................................... 4-14
System-Wide Permission Mask ......................................................................................................... 4-15
Tenant Permission Mask.................................................................................................................. 4-16
Namespace Permission Mask ........................................................................................................... 4-17
Permission Masks: Example ............................................................................................................. 4-17

Storage Component Administration ....................................................................................... 4-18
Storage Overview ........................................................................................................................... 4-18
Storage Components ...................................................................................................................... 4-18
Storage Component Advanced Options ............................................................................................. 4-19
Storage Pools ................................................................................................................................. 4-20
Service Plans – Tiering Policy ........................................................................................................... 4-20
Service Plan Assignment and Utilization ............................................................................................ 4-21
Service Plan Wizards – Tier Editor .................................................................................................... 4-22
Service Plan Wizards – Import Creation ............................................................................................ 4-23
Storage Reports ............................................................................................................................. 4-23
Storage Retirement ........................................................................................................................ 4-24
Certificate Trust Store ..................................................................................................................... 4-24
HCP S10 and HCP S30 Nodes ............................................................................................................... 4-25
Manage HCP S10 and HCP S30 Nodes .............................................................................................. 4-25
HCP S10 Node – Manage S Nodes .................................................................................................... 4-26
HCP S Series Storage – Ingest Tier .................................................................................................. 4-27
Write Through to S Series Storage ................................................................................................... 4-28
Module Summary ................................................................................................................................ 4-31
Module Review .................................................................................................................................... 4-32

5. Ingestion Processes.......................................................................................... 5-1


Module Objectives ................................................................................................................................. 5-1
Namespace Browser .............................................................................................................................. 5-2
Namespace Browser: Objects ............................................................................................................ 5-2
CIFS and NFS........................................................................................................................................ 5-2
CIFS and NFS Support ...................................................................................................................... 5-3
Enable CIFS Protocol ........................................................................................................................ 5-4
Network Drive Mapping in Microsoft® Windows .................................................................................. 5-4
Microsoft Windows Mounted Disks ..................................................................................................... 5-5
CIFS Access: An Open Standards Approach......................................................................................... 5-5
Set Retention Period ......................................................................................................................... 5-6
Default Tenant ...................................................................................................................................... 5-6
Enable Creation of Default Tenant...................................................................................................... 5-7
Create Default Tenant or Namespace ................................................................................................. 5-8
HCP Data Migrator................................................................................................................................. 5-8
Overview ......................................................................................................................................... 5-9

Installation .................................................................................................................... 5-10
Migration Panes ............................................................................................................................. 5-10
Namespace Profile Manager: Create Profile ....................................................................................... 5-11
Namespace Profile Manager: Edit or Delete Profile............................................................................. 5-11
Set Preferences: Policies ................................................................................................................. 5-12
Set Preferences: POSIX Metadata .................................................................................................... 5-12
Set Preferences: Owner .................................................................................................................. 5-13
HCP-DM CLI ................................................................................................................................... 5-13
REST API ............................................................................................................................................ 5-14
What Is a RESTful Interface? ............................................................................................ 5-14
Simplified REST Example ................................................................................................................. 5-15
HCP RESTful Interfaces ................................................................................................................... 5-15
Anatomy of a Request ..................................................................................................... 5-16
Using Programming Languages ........................................................................................................ 5-19
Hitachi S3 (HS3) API............................................................................................................................ 5-19
What Is HS3? ................................................................................................................................. 5-19
HS3 and Multipart Upload (MPU)...................................................................................................... 5-20
S3 Basic Concepts .......................................................................................................................... 5-21
How to Make S3 Requests ............................................................................................................... 5-22
OpenStack Concepts and Terminology .............................................................................................. 5-23
Module Summary ............................................................................................................................... 5-24
Module Review .................................................................................................................................... 5-24

6. Search Activities ............................................................................................... 6-1


Module Objectives ................................................................................................................................. 6-1
Overview .............................................................................................................................................. 6-2
Metadata Query Engine: Benefits ............................................................................................................ 6-2
Metadata Query Engine: Details .............................................................................................................. 6-3
Metadata Query Engine: Qualifications .................................................................................................... 6-4
MQE and HDDS Search Differences ......................................................................................................... 6-5
MQE Content Classes ............................................................................................................................. 6-6
Enable HCP MQE Search Facility ............................................................................................................. 6-8
Launch MQE GUI ................................................................................................................................... 6-9
Structured Query: Size Metadata ............................................................................................................ 6-9
Narrow Structured Search .................................................................................................................... 6-10
Narrowed Search Results ..................................................................................................................... 6-10

MQE Tool ........................................................................................................................... 6-11
Module Summary ................................................................................................................................ 6-12
Module Review .................................................................................................................................... 6-12

7. Replication Activities ........................................................................................ 7-1


Module Objectives ................................................................................................................................. 7-1
Active-Passive Replication .................................................................................................... 7-2
Active-Passive Replication Overview ................................................................................. 7-2
Before You Begin.............................................................................................................................. 7-3
Required Steps for Replication ........................................................................................................... 7-3
Active – Active Replication ..................................................................................................................... 7-4
Two Replication Link Types ............................................................................................................... 7-4
Link Creation Wizard ......................................................................................................................... 7-4
Domain and Certificate Replication ..................................................................................................... 7-5
Fully Automated Collision Handling ..................................................................................................... 7-6
Querying Collisions With MQE ............................................................................................................ 7-9
Replication MAPI Support .................................................................................................................. 7-9
Implementation Notes Overview ...................................................................................................... 7-10
Active-Active Links Persist Metadata First .......................................................................................... 7-11
Limits, Performance and Networks ................................................................................................... 7-12
Failover .............................................................................................................................................. 7-12
Automatic Failover/Failback Options ................................................................................................. 7-13
Active-Active Failover Scenario 1 ...................................................................................................... 7-15
Active-Active Failover Scenario 2 ...................................................................................................... 7-15
Active-Passive Failover Scenario....................................................................................................... 7-16
Active-Active Failover Scenario ........................................................................................................ 7-17
Distributed Authoritative DNS Systems ............................................................................................. 7-18
Geographically Distributed Data Protection ....................................................................................... 7-18
What Is Geo Distributed Data Protection? ......................................................................................... 7-19
Geo-Protection Offers Several Benefits ............................................................................................. 7-19
Protection Types ............................................................................................................................ 7-20
Protection Types for Namespaces .................................................................................................... 7-20
Geo-Distributed Erasure Coding Service Processing ........................................................................... 7-21
Replication Topologies .................................................................................................................... 7-21
Considerations for Cross-Release Replication..................................................................................... 7-22
Working With Erasure Coding Topologies.......................................................................................... 7-23


Geo-EC Setup ................................................................................................................................ 7-23


Steps to Create a Geo-EC Configuration............................................................................................ 7-24
Replication Verification Service ............................................................................................................. 7-24
Replication Verification Service (RVS) ............................................................................................... 7-25
RVS: How Does It Work? ................................................................................................................ 7-26
RVS Setup ..................................................................................................................................... 7-27
RVS Running Status ........................................................................................................................ 7-28
RVS Results ................................................................................................................................... 7-29
Load Balancers.................................................................................................................................... 7-29
Load Balancer Overview .................................................................................................................. 7-30
Load Balancer With Single HCP ........................................................................................................ 7-31
Load Balancer With Pair of Replicated HCP ....................................................................................... 7-32
What About Distributed Sites? ......................................................................................................... 7-33
Global Traffic Manager (GTM) .......................................................................................................... 7-34
GTM With Replicated HCPs .............................................................................................................. 7-35
Global Traffic Manager .................................................................................................................... 7-35
Admin Commands ............................................................................................................................... 7-36
Admin Commands Overview ............................................................................................................ 7-36
Admin Commands Reference ........................................................................................................... 7-37
System Events .................................................................................................................................... 7-39
New System Events and Alerts......................................................................................................... 7-39
System Log Events – Reference ....................................................................................................... 7-40
Performance ....................................................................................................................................... 7-40
Performance Overview .................................................................................................................... 7-41
Module Summary ................................................................................................................................ 7-42
Module Review .................................................................................................................................... 7-42

8. Support Activities ............................................................................................. 8-1


Module Objectives ................................................................................................................................. 8-1
Chargeback .......................................................................................................................................... 8-2
Chargeback Features ........................................................................................................................ 8-2
Chargeback...................................................................................................................................... 8-3
Chargeback Metrics .......................................................................................................................... 8-4
Chargeback Reporting Fundamentals ................................................................................................. 8-6
System Logs ......................................................................................................................................... 8-7
Types of Logs ....................................................................................................................................... 8-8


Log Management Controls ................................................................................................................. 8-9


Download Internal Log .................................................................................................................... 8-10
Log Download Enhancements .......................................................................................................... 8-11
Log Download Enhancements – MAPI............................................................................................... 8-12
Module Summary ................................................................................................................................ 8-14
Module Review .................................................................................................................................... 8-14

9. Solutions ........................................................................................................... 9-1


Module Objectives ................................................................................................................................. 9-1
HCP Solutions and Supported ISVs .......................................................................................................... 9-2
HCP Solution With HDI .......................................................................................................................... 9-3
Elastic and Back Up Free ........................................................................................................................ 9-4
Available HDI Configurations .................................................................................................................. 9-5
HDI Maps to HCP Tenants and Namespaces ............................................................................................ 9-6
Single HCP Tenant Solution for Cloud ...................................................................................................... 9-7
File System Migration Task ..................................................................................................................... 9-8
Stubs – File Restoration ......................................................................................................................... 9-9
Hitachi NAS (HNAS) Data Migration to HCP ............................................................................................ 9-10
HNAS Data Migrator to Cloud (DM2C) ................................................................................................... 9-11
HCP Solution With HCP Anywhere ......................................................................................................... 9-12
HCP Anywhere Architecture ............................................................................................................. 9-13
Hitachi Content Intelligence ............................................................................................................. 9-13
A Solution to the Data Dilemma ....................................................... 9-14
Content Intelligence Does Three Specific Things................................................................................ 9-15
Data Connections: Connecting the Dots ............................................................................................ 9-16
Understanding Transforming Data.................................................................................................... 9-16
Recommend Enabling Data-Driven Decisions ..................................................................................... 9-17
Access Two Easy-to-Use Interfaces .................................................................................................. 9-17
Workflow Designer Transforms and Enriches Data ............................................................................. 9-18
Hitachi Content Intelligence: Workflows............................................................................................ 9-18
Admin Interface Manages the System .............................................................................................. 9-19
Content Search Enables the End-User .............................................................................................. 9-19
Highly Scalable With Deployment Flexibility....................................................................................... 9-20
A Toolset to Enable Data-Specific User Experiences ........................................................................... 9-20
Other Solutions ................................................................................................................................... 9-20
HCP Integration With ISV Middleware .............................................................................................. 9-21


List of ISV Partners ......................................................................................................................... 9-21


Software Partners Complete the Solution (100+ Partners) .................................................................. 9-22
Module Summary ................................................................................................................................ 9-22
Module Review .................................................................................................................................... 9-23
Your Next Steps .................................................................................................................................. 9-23

Training Course Glossary ........................................................................................ G-1

Evaluating This Course ............................................................................................ E-1

Introduction
Welcome and Introductions

 Student Introductions
• Name
• Position
• Experience
• Expectations

© Hitachi Vantara Corporation 2018. All rights reserved.


Course Description

This 3-day instructor-led course provides an overview of the Hitachi


Content Platform (HCP) functionality, concepts, architecture and
processes, such as data ingestion, search and replication.

You will complete numerous hands-on lab activities designed to build the
skills necessary to integrate, administer and configure the key software
products for HCP solutions.


Prerequisites

 Prior completion of the following courses is recommended:

• None

 Knowledge and skills


• Basic knowledge of storage systems
• Working knowledge of networking and external Domain Name Service (DNS)



Course Objectives

 Upon completion of this course, you should be able to:


• Describe the Hitachi Content Platform (HCP) functionality and concepts,
including the ingestion process
• Identify HCP physical and logical components and their locations
• Implement different HCP solutions
• Perform basic network configurations, administration functions, and search,
replication, and support activities


Course Topics

Content Modules Learning Activities – Labs


1. Overview 1. HCP Configuration and Documentation
2. Hardware Components 2. DNS Integration
3. Network Configuration 3. First Login, User Accounts, VLAN
4. Administration Management and Active Directory Setup
5. Ingestion Processes 4. SMC Storage Configuration
6. Search Activities 5. Creating Tenants, Tenant User Accounts and
7. Replication Activities Namespaces
8. Support Activities 6. Ingest, Archive and Access Objects via All
9. Solutions Ways
7. Search With Metadata Query Engine
8. Replication
9. Monitor and Logs



Learning Paths

 Are a path to professional


certification

 Enable career advancement

 Available on:
Hitachivantara.com (for customers)
Partner Xchange (for partners)


Customers

Customer Learning Path (North America, Latin America, and APAC):


http://www.hitachivantara.com/assets/pdf/hitachi-data-systems-academy-customer-learning-
paths.pdf

Customer Learning Path (EMEA): http://www.hitachivantara.com/assets/pdf/hitachi-data-


systems-academy-customer-training.pdf

Partners: https://partner.hitachivantara.com/

Please contact your local training administrator if you have any questions regarding Learning
Paths or visit your applicable website.


Resources: Product Documents

 Documentation that
provides detailed product
information and future
updates is available on
the Hitachi Vantara
Support Portal
https://support.hitachivantara.com/en_us/documents.html


Resource Library: The site for Hitachi Vantara product documentation is accessed through:

https://support.hitachivantara.com/en_us/documents.html


Collaborate and Share

Hitachi Vantara Community


 Learn best practices to optimize your IT environment
 Share your expertise with colleagues facing real challenges
 Connect and collaborate with experts from peer companies
and Hitachi Vantara


For Customers, Partners, Employees – Hitachi Vantara Community:

https://community.hitachivantara.com/welcome


Social Networking: Twitter Site

 Twitter site
Site URL: http://www.twitter.com/HitachiVantara


Hitachi Vantara Global Learning link to Twitter:

http://www.twitter.com/HitachiVantara


Hitachi Self-Paced Learning Library

Delivery formats: short videos, hands-on practice (HALO), guided demonstrations, knowledge
checks, traditional classroom (ILT and VILT), 1:1 expert instructors, and community collaboration

Current libraries available: Hitachi Content Solutions, Hitachi Infrastructure Solutions
Coming soon: Hitachi Converged


The Hitachi Self-Paced Learning Library is a subscription-based learning platform that gives you
the flexibility of accessing Hitachi Vantara training libraries for the Hitachi Vantara Solutions you
need.

Subscriptions include access to:

• Online libraries of training videos and quizzes

• Hands-on practice labs in a safe environment

• Guided demonstrations

Training is set up in a modular approach, allowing you to take an entire course or just a portion,
depending on the time available to you. You can easily access the library anywhere, anytime,
including your mobile device.

The Hitachi Self-Paced Learning Library is currently only available in the Americas.

For more information, please contact:

• https://www.hitachivantara.com/en-us/services/training-
certification/training.html#trainingDetail

• training@hitachivantara.com

1. Overview
Module Objectives

 Upon completion of this module, you should be able to:


• Describe Hitachi Content Platform (HCP) functionality and concepts
• Describe HCP virtualization: tenants and namespaces
• Identify key capabilities of HCP
• Identify available HCP configurations
• Identify compliance features
• Describe purpose of all HCP consoles, policies, and services



HCP: Object-Based Storage


This section covers Object-Based Storage concepts.

Hitachi Content Platform Basics

 Deployed on commodity Linux servers (nodes)


• Networked or clustered together to form a single system

 Policies and services ensure data integrity

 Optimized for fixed-content


• Write-Once, Read-Many (WORM) storage

 Open protocols for data access


• HTTP based (Representational State Transfer (REST), S3, Swift, WebDAV),
NFS, CIFS


• Hitachi Content Platform (HCP) is a distributed storage system designed to support large,
growing repositories of fixed-content data. HCP stores objects that include both data
and metadata that describes the data. It distributes these objects across the storage
space, but still presents them as files in a standard directory structure. HCP provides a
cost-effective, scalable, and easy-to-use solution to the enterprise-wide need to
maintain a repository of all types of data, from simple text files and medical image files
to multi-gigabyte database images. An HCP system consists of both hardware and
software.

• HCP is optimized to work best with HTTP-based APIs: REST and S3

• REST API – Representational state transfer, stateless, using simple HTTP commands
(GET/PUT/DELETE)

o It translates HTTP requests into simple commands

o It is used by HCP Anywhere, HDI, HCP data migrator, HNAS and most third party
middleware products to communicate with HCP

o Hitachi Vantara provides REST API developer’s guide – all available APIs are
open and well documented


• S3 API – Standard Cloud API, developed by Amazon

o S3 API works similarly to REST API

o S3 API is a standard cloud storage interaction protocol developed by Amazon

o With the S3 API, any S3 client software can be used – it works with HCP out
of the box

o With S3 support, it is possible to extend HCP capacity by connecting
S3-compatible storage. This can be public or private cloud storage

o HCP S10 and S30 nodes are S3 compatible storage devices

• Swift API – Open Stack Object Storage API based on REST
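As a sketch of the REST access described above, the following Python builds the native-authentication header and an object URL. The host name, tenant, namespace, user and password are placeholders (not values from this course), and the exact authentication scheme should be confirmed against the HCP REST developer's guide for your release.

```python
import base64
import hashlib

def hcp_auth_header(username: str, password: str) -> str:
    # HCP native REST authentication (illustrative): base64 of the user
    # name, a colon, then the hex MD5 digest of the password.
    user_b64 = base64.b64encode(username.encode()).decode()
    pwd_md5 = hashlib.md5(password.encode()).hexdigest()
    return f"HCP {user_b64}:{pwd_md5}"

# Placeholder namespace/tenant/domain values for illustration only.
url = "https://ns1.tenant1.hcp.example.com/rest/medical/xray1.jpg"
headers = {"Authorization": hcp_auth_header("demo", "secret")}

# A client would then issue plain HTTP verbs against the URL:
#   GET    url  -> read the object
#   PUT    url  -> store the object
#   DELETE url  -> remove the object
print(headers["Authorization"])
```

This is exactly the "simple commands" idea from the bullets above: once the URL and header are built, any HTTP client (curl, requests, middleware products) can talk to the namespace.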

Comparing protocols:

• Network File System (NFS) and Common Internet File System (CIFS) are value added
protocols

o NFS cannot be authenticated with HCP

o CIFS can be authenticated only with AD

o NFS and CIFS are good for migrations and/or application access

o NFS and CIFS don’t perform as well as Hypertext Transfer Protocol (HTTP), the
World Wide Web protocol

• Use HTTP-based APIs whenever possible

• Other protocols

o WebDAV: Web-based Distributed Authoring and Versioning (HTTP extensions)

o SMTP: Simple Mail Transfer Protocol (Internet email)
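Since the HTTP-based APIs are the recommended access path, here is a minimal sketch of pointing a stock S3 client at HCP. The endpoint mapping (tenant forms the endpoint host, namespace appears as a bucket) and all names are illustrative assumptions; the commented boto3 call shows typical client usage and is not executed here.

```python
def hs3_endpoint(tenant: str, hcp_domain: str) -> str:
    # Illustrative HS3 mapping: the tenant forms the S3 endpoint host,
    # and each namespace is addressed as a bucket.
    return f"https://{tenant}.{hcp_domain}"

endpoint = hs3_endpoint("tenant1", "hcp.example.com")
bucket = "ns1"             # HCP namespace, seen by S3 clients as a bucket
key = "medical/xray1.jpg"  # object path within the namespace

# With any stock S3 client, for example boto3 (sketch only):
# s3 = boto3.client("s3", endpoint_url=endpoint,
#                   aws_access_key_id="...", aws_secret_access_key="...")
# s3.put_object(Bucket=bucket, Key=key, Body=data)
print(endpoint)
```

The point of the sketch is that no HCP-specific client is needed: only the endpoint URL and credentials change.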


What Is an HCP Object?

Fixed-content data
(Data)
• Once it’s in HCP, this
data cannot be modified
System metadata
• System-managed properties
describing the data
• Includes policy settings

Custom metadata
(Annotations)
• The metadata, a user
or application provides
to further describe an
object

• An HCP object is a means of abstracting and insulating the data and metadata from the
underlying hardware (HW) and software (SW). This allows for great robustness and easy
migration to new hardware or software

• This object contains the actual data, system-generated metadata and custom
metadata/annotations

• This object lives independently within an HCP ecosystem

• This architecture allows for easy HW/SW upgrades and great scalability

• Object storage is a black box: users and admins work with data containers, not with
file systems. They do not know on which file system or volume a particular file or
object is stored


Multiple Custom Metadata Injection

[Diagram: an object's DATA with several sets of METADATA attached]

SYSTEM/POSIX:
  filename: p12d67.jpg, size: 673124, ingested: 12/1/16, hash: 12ABD78F0E

Medical (DICOM):
  <record><doctor><name>John Smith</name></doctor>
  <patient><name>John Smith</name><age>48</age></patient></record>

Image (EXIF):
  <taken>11/17/12</taken><aperture>5</aperture><ISO>400</ISO>

Billing:
  <cost>$1,500</cost><insurance>yes</insurance>


Images such as X-rays and other medical scanning pictures have no content that can be
searched other than a file name, but can have embedded metadata such as billing details,
doctor and patient information and other relevant details regarding the actual object.

These details are invaluable for searching this type of content as functional in our Hitachi
Clinical Repository solution.

An HCP object can be associated with multiple sets of custom metadata. That is why we talk
about multiple custom metadata injection.

• Custom metadata are also called annotations

• Each annotation is a separate file, typically xml or json

• Each annotation has its own URL path
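Because each annotation is addressable over REST, building its URL can be sketched as below. The names and the exact query-parameter form are illustrative assumptions; confirm the annotation addressing syntax against the HCP REST developer's guide for your release.

```python
def annotation_url(namespace: str, tenant: str, hcp_domain: str,
                   obj_path: str, annotation: str) -> str:
    # Illustrative: the annotation rides on the object's REST URL
    # as query parameters naming the custom-metadata annotation.
    base = f"https://{namespace}.{tenant}.{hcp_domain}/rest/{obj_path.lstrip('/')}"
    return f"{base}?type=custom-metadata&annotation={annotation}"

url = annotation_url("ns1", "tenant1", "hcp.example.com",
                     "p12d67.jpg", "medical")
print(url)
```

A GET on such a URL would return that one annotation (for example, the medical XML) rather than the object data itself.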


Internal Object Representation

[Diagram: an external file's fixed-content data (Data) and custom metadata
(Annotations) are stored inside HCP as internal files, while its system
metadata is recorded in a database region, for example:
Region 10: /xray1.jpg, vol 5, size 9999, shred=true, ...]

• The customer's object is broken into two pieces internally:

o Metadata, which goes into the database

o Customer data (and custom metadata), which goes into a file on disk

• HCP uses “regions” to distribute system metadata. By default there are eight regions
per node, meaning eight chunks of the system metadata database per node

• A region stores a subset of the metadata; it is a collection of related tables stored in
the DB

• Regions are distributed across nodes; each node shares part of the load

• There are always two copies of a region:

o Authoritative and Backup
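The region idea can be pictured with a toy hash-placement sketch. This is purely illustrative and does not reproduce HCP's internal region-mapping algorithm: it only shows how a deterministic hash of an object path could pick one region out of the pool, with the default of eight regions per node stated above.

```python
import hashlib

REGIONS_PER_NODE = 8  # default stated in the text

def region_for(object_path: str, total_regions: int) -> int:
    # Toy placement only: hash the path and pick a region index.
    # HCP's real region-mapping scheme is internal to the product.
    digest = hashlib.md5(object_path.encode()).digest()
    return int.from_bytes(digest[:4], "big") % total_regions

total = 4 * REGIONS_PER_NODE  # e.g., a hypothetical 4-node cluster
r = region_for("/xray1.jpg", total)
print(f"/xray1.jpg -> region {r} (authoritative copy; backup on another node)")
```

The same path always hashes to the same region, which is the property that lets metadata lookups go straight to the right node.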


How Users and Applications View Objects

 Each object and annotation within HCP has its URI path

 Each object’s system metadata has its own URI path

 HCP tenant and namespaces — REST API is used


• REST API is an HTTP(s) interface to HCP namespaces
• Software architecture for client/server communications over the web


• HCP supports S3 – standard cloud interface API

• HCP supports OpenStack Swift API

The object URL path syntax is:

https://namespace.tenant.hcp.domain.suffix/rest/path_in_the_namespace
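Filling the syntax in with placeholder values (none of these names come from a real system) gives a concrete URL:

```python
# Each piece of the generic syntax maps to one DNS label or path segment.
syntax = "https://{namespace}.{tenant}.{hcp_domain}/rest/{path}"
url = syntax.format(namespace="ns1", tenant="tenant1",
                    hcp_domain="hcp.example.com",
                    path="medical/xray1.jpg")
print(url)  # https://ns1.tenant1.hcp.example.com/rest/medical/xray1.jpg
```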


Hitachi Content Platform Evolution

 Hitachi Content Archive Platform (HCAP): The Archive Platform – release v2.6
and before
• Active archiving

 HCP: The Content Platform – release v3.0 and above


• Active archiving
• Storage Service Provider (SSP)
• Comprehensive ROBO solution
• Private and hybrid cloud
• Healthcare and hospitals
• 100+ middleware applications

HCP can adapt in a way no other content product can. It has a chance to grow in the archive
market and align with emerging markets such as the cloud. Think about active archiving: what
actually is archiving, and what makes it active? Archiving means moving data off expensive,
high-performance storage to somewhere it can be stored securely over long periods of time.
This is different from backup, where we create redundant copies. HCP has many services that
constantly work with the data to ensure it is always healthy and securely stored; these
services are what make archiving active. The old HCAP was a simple box with no concept of
multitenancy and no authentication options. The new HCP is a versatile and flexible storage
system that offers multiple deployment options. HCP is under rapid development – new features
added every year bring significant improvements to what the system can offer.

HCP always ensures backward compatibility, meaning that even the oldest system can be
upgraded to the newest version.

Because of this, there are some legacy features in the system, namely: default tenant, search
node references, blade chassis references, and so on.

ROBO – Remote Offices, Branch Offices – solution with HDI.

Note: HCAP is an obsolete product.


Introduction to Tenants and Namespaces

 Each tenant and its set of namespaces is a virtual HCP system

 Tenants:
• Segregation of management

 Namespaces:
• Segregation of data

[Diagram: a physical HCP hosting Tenant 1 through Tenant N, each with its own
tenant user accounts (Tenant User Account 1 through N) and its own namespaces
(NS 1 through NS N)]


An HCP cluster is managed through the System Management Console, which is operated by the
system owner/admin. The System Management Console has its own group of users – system
users. System users' credentials cannot be used to access tenants, so system users can
never get to the actual data.

To store data on HCP, you must create at least one tenant. Each tenant manages its own
users; tenant users cannot use their credentials to get to the System Management Console.
A tenant can create as many namespaces as the system owner allows in the System Management
Console.

HCP limits: 1,000 tenants and 10,000 namespaces per system.

HCP supports access control lists that allow users to manage permissions on the object level.


Swift: Another Way to Use Your Storage Pool

Access protocols: REST/HTTP(S), NFS, CIFS, WebDAV, SMTP, Amazon S3, Hitachi Swift API

• Tiering service plans
• Geo replication
• Global namespace
• Search at scale

Swift API applications can write to and read from HCP – no changes needed

[Diagram: storage tiers]
Private Cloud (on premises):
• Primary – running internal disks or disks on arrays
• Economical – spindown disks on arrays
• Extended – S3-compatible storage, NFS devices
Public Cloud: Amazon S3, Google Cloud, Hitachi Cloud, Microsoft Azure and compatible


• Swift API applications can read from and write to HCP – no changes needed

• Increased utility

HCP Configurations
In this section you will learn about HCP configuration.


Unified HCP G10 Platform

 Single server platform for all HCP offerings


• Vendor: Quanta
• Model: D51B-2U
 End of Sale for previous HCP offerings:
• HCP 500, HCP 500XL 1G
• HCP 500XL 10G, HCP 300
 2U rack mount server
 Local or attached storage options
 Available as upgrade for existing HCP systems

• 2U server enclosure

• Redundant fans and power supplies (Left rear SATA HDD/SSD cage included - not
shown)

• LSI RAID controller and Supercap (not shown)

• Six 4TB hard disk drives

• CPU and memory:

o Two Intel E5-2620v3 CPUs

o 64GB memory (4 x 16GB DIMMs)

• G10 servers can be mixed with existing Hitachi Compute Rack (CR) 210H and CR 220S
based HCP systems


HCP G10 With Local Storage

 HCP G10 replacement for HCP 300 model (RAIN)

 Internal disks for OS and storage of metadata, data and indexes

 Six or twelve 4TB hard disk drives – RAID-6


• 14TB usable per node with 6 HDDs
• 28TB usable per node with 12 HDDs


• Customers who purchase a local storage HCP G10 system with 6 internal hard drives can
expand the internal capacity later by purchasing a “six-pack” upgrade. These six drives
are installed in each applicable node and a service procedure is run to add them into the
system. All RAID group creation, virtual drive creation, initialization, or formatting is
handled automatically – no manual configuration is required.

• HCP G10 is compatible with existing HCP 300 nodes and HCP S10 and S30 nodes.

• HCP G10 does not require SAN connectivity.

HCP G10: Hitachi Content Platform G10

HCP S10: Hitachi Content Platform S10

HCP S30: Hitachi Content Platform S30

Page 1-12
Overview
HCP G10 With Attached Storage


 HCP G10 replacement for HCP 500 and HCP 500 XL models

 Internal disks for metadata and node OS

 Data and indexes stored on externally attached storage array

 Six 4TB hard disk drives – RAID-6


• Metadata only

 Compatible with S10 and S30 nodes


• OS is now always stored locally on the server’s internal drives, not on the array (as it
used to be in HCP 500). No requirement to set up boot LUNs on the HBA cards for
attached storage systems. Online array migration is possible on HCP G10 nodes because
the OS is stored on the internal drives

• Compatible with existing HCP 500 nodes

o HCP 500, HCP 500XL 1G, HCP 500XL 10G

Page 1-13
Overview
HCP G10 SSD Performance Option


 SSD performance option


• Superior performance at high density
• Minimizes or eliminates the impacts of:
 Very high object count
 Too many directories
 Too many objects in a directory


• SSDs have been proven to eliminate performance degradation related to certain high-
density usage patterns like those addressed by the cloud-optimized namespace.
Unlike cloud optimization, which only reduces the performance impact, SSDs can
eliminate it and return a degraded system to like-new performance

• SSDs may be included in new systems or added later

• Postgres Indexes are moved from HDD to SSD on SSD equipped systems

• May improve a healthy system's performance when characterized with services; results
TBD

Page 1-14
Overview
HCP S Node


 Value proposition
• Address the need for commodity object storage
• Uses commodity hardware
• Value is in the S-series software
• Faster Data Rebuild times after HDD failure
• Optimized for any object size (small and large)
• Compatible with all HCP models
• Low cost self service ready
• Ethernet Attach Storage to facilitate capacity scaling


• The market is embracing object storage

• Vendors are commoditizing this emerging type of storage

• For HCP S10 we have chosen to use commodity hardware

• Large scale manufacturing of this hardware lowers the cost and as such brings higher value for
the dollar

• The HCP S10 value is in the software

• The way the multi-patent pending software enables the hardware capabilities sets us apart from
the rest

• The software protects data faster after disk failures than traditional protection like RAID

• Our implementation of the new Erasure Code Data protection is optimized for large and small
objects

• Failed drives do not have to be replaced immediately, which reduces maintenance costs

• Maintenance procedures are dead simple and do not require training. The HCP S10 is ready for
self-service

• Ready for the next generation's ultra-high-density HDDs

• Next generation Hitachi Vantara software with new patented technologies

• No immediate HDD replacement required

Page 1-15
Overview
HCP S10


 Economy storage option for all HCP systems


• HCP v7.2 supports direct write to S-nodes
• Single 4U tray with two controllers
• Connects through HCP front-end using Ethernet – two 10GbE ports per controller

• Controller 1 and Controller 2 joined through a mid-plane

• Half populated: 168TB (raw); fully populated: 336TB (raw)


• HCP S10 and S30 offer better data protection than offered by Hitachi Unified Storage
(HUS) and Hitachi Virtual Storage Platform (VSP) G family (20+6 EC versus
RAID-5/RAID-6)

• HCP S10/S30 licensing costs are lower than comparable array configurations per TB

• Erasure Coding is more secure than RAID-5 or RAID-6

• RAID-5 offers protection against one disk failure

• RAID-6 offers protection against two disk failures

• EC offers protection against six disk failures

• S nodes perform better than midrange systems with RAID technology
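The trade-off described above can be made concrete with a little arithmetic (the 8+2 RAID-6 group width is an illustrative assumption; the 20+6 figures match the S-node slides):

```python
# Sketch comparing protection schemes: data extents, parity extents,
# tolerated concurrent failures, and storage efficiency.
def scheme(data, parity):
    total = data + parity
    return {"tolerated_failures": parity,
            "efficiency": round(data / total, 2)}

raid6 = scheme(8, 2)    # example 8+2 RAID-6 group (width is illustrative)
ec = scheme(20, 6)      # HCP S-node 20+6 erasure coding

assert ec["tolerated_failures"] == 6
assert ec["efficiency"] == 0.77   # matches the ~77% cited for 20+6
```

The same data/parity arithmetic explains why 20+6 erasure coding survives six concurrent disk failures where RAID-6 survives only two.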

Page 1-16
Overview
HCP S30


 Economy storage option for HCP

 HCP v7.2 supports direct write to S-nodes


 More cost effective than HCP S10 at 4 trays
• 2 server heads with SAS HBA
• 3 to 16 SAS-connected 4U expansion trays
• Maximum 16 trays in 2 racks per HCP S30 node
• Maximum 5.7PB with 6TB HDD
• Up to 80 HCP S30 nodes for a single HCP system
• Up to 457PB for a single HCP


• HCP S30 has higher storage capacity (4.3PB usable) versus HUS/VSP G family

• Raw capacity with 80 S30 nodes: 457 PB

• Usable capacity with 80 S30 nodes: 334 PB

Page 1-17
Overview
HCP S Node


Software features:
• Built for commodity hardware (cost efficient)
• 20+6 erasure code (EC) data protection
• Fast data rebuilds in case of HDD failure
• Enhanced data durability/reliability
• Ease of use with plug & play and automation
• Storage protocol is S3
• Object single instancing
• Ready to be supported by other Hitachi Vantara products

Capabilities:
• Self-checking and healing
• Versioning (by HCP)
• Compression (by HCP)
• Encryption (by HCP)
• Retention/WORM (by HCP)

• The software delivers highly reliable and durable storage from commodity hardware
components

• Implements state-of-the-art second generation erasure code data protection technology

• Offers fast data re-protection to the largest HDD available now and in the future

• Has self-optimize features. The user does not have to be concerned with configuring,
tuning, balancing resources (HDD)

• Besides a fully capable web user interface, the HCP S10 can be entirely managed and
monitored using MAPI

• No training required to operate or perform maintenance procedures

• Communication between generic nodes and the HCP S10 nodes is S3 protocol based,
and as such ready to be supported by other Hitachi Vantara products like HNAS (August
2015)

• HCP objects stored on HCP S10 will fully support retention, WORM, versioning,
compression and encryption
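Since the S10 can be managed entirely through MAPI, a management script only needs to build HTTPS requests. As a sketch (the admin hostname and port 9090 follow common HCP conventions, but treat both as assumptions to verify against your installation):

```python
# Illustrative sketch of building an HCP management API (MAPI) request URL.
# The "admin." hostname prefix and port 9090 are assumptions based on
# common HCP conventions - verify against your system's documentation.

def mapi_url(hcp_domain, resource):
    """Return the MAPI URL for a management resource."""
    return f"https://admin.{hcp_domain}:9090/mapi/{resource.lstrip('/')}"

url = mapi_url("hcp.example.com", "/tenants")
```

A monitoring tool would issue authenticated GET requests against such URLs instead of using the web UI.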

Page 1-18
Overview
HCP S Series Storage Principles


 HDD is divided into extents

 Extents can be data or parity
• Example: a 2+1 extent group

 HCP S10 uses 20+6 extent groups
 Sustains 6 concurrent failures
 Storage efficiency: 77%
 Data reliability 15 times


RAID Rebuild Principles

 After drive failure, protection of new data is degraded or unprotected

 Rebuild can start only after a new drive is placed/assigned (hot spare)

 All rebuild write activity goes to the newly replaced disk and performance is affected
across the entire RAID group

 Complete disks are rebuilt whether data is present or not

Page 1-19
Overview
HCP S Series Rebuild Principles


 All available drives are active, no idle hot spares

 After drive failure, new data is written with full protection

 Rebuild write activity is distributed across all available disks

 Only damaged extents are rebuilt, not the complete disk


• Fixed-size extents; with small files, rebuild times do not increase and storage
efficiency is not reduced

• Faster rebuild; less vulnerability

HCP S Series Healing Properties

 Rebuilds data, not disks

 Features priority-based data repair (repair priority per extent group)

 More-damaged extent groups are repaired first

 No waiting for full disk rebuild to complete

 Less vulnerability; higher reliability

Page 1-20
Overview
Direct Write to HCP S10/S30


 Previously, HCP S10 was only a tiering target for HCP nodes

 Any HCP model with v7.2 software now supports direct write to HCP
S10 or HCP S30

 HCP 300 and HCP G10 with local storage


• Local storage of metadata and indexes
• HCP S10/S30 storage of data
• HCP S10/S30 requires only 1 copy of data (data protection level
[DPL] 1) – can be configured for higher DPLs if multiple HCP S10/S30 units
are available

• HCP G10b supports 10G front-end Ethernet networking and 1G back-end Ethernet
networking

• No SAN to configure or maintain (Ethernet based) – simple configuration wizard, no
storage configuration

• No distance limitations between HCP and HCP S10/S30 (standard Ethernet)

o Bandwidth available over customer network will determine performance

• Excellent performance locally or with HCP S10/S30 versus attached storage (see
following slides)

• HCP S30 has higher storage capacity (4.3PB usable) versus HUS/VSP G family

Page 1-21
Overview
VMware Edition of HCP


 HCP v6.x and up support deployments in VMware ESXi 5.5 and 6.0

 Fully supported for production environments

 Demo/evaluation deployment also supported

 Benefits:
• Easy and fast deployment
• Aligns with VMware features
• No HCP hardware is needed


• Open Virtualization Format (OVF) templates are part of every new HCP SW version
release

• Using OVF templates makes it faster to deploy HCP in VMware, as you do not have to
create VMs manually, nor do you need to install the OS

• If you wish to deploy four virtual nodes, you must deploy an OVF template 4 times

• When you have the required number of virtual nodes, you can start with HCP Application
SW install

• Hyper-V support is planned but not yet implemented

• Currently supported versions of VMware ESXi are 5.5 and 6.0

• ESXi 5.0 and 5.1 are now EOL

Page 1-22
Overview
OpenStack KVM HCP-VM


Feature Overview
This section provides the overview of HCP features.

Page 1-23
Overview
Nondisruptive Service


Self-protection
• Policies enforce object retention, authentication and object replication

Self-healing
• Architecture is resilient to drive/node failures with no impact to data integrity, and
little to no impact to data accessibility/throughput

Self-configuration
• Simplified installation and integration by setting platform configurations through
high-level policies

Self-balancing
• Adjusts load by monitoring the activity and capacity of all nodes

HCP has been designed never to lose data. In addition, high availability features are built in to
make sure the user has continuous access.

Policies enforce data preservation and retention. The clustering software's ability to withstand
failures without impact is called self-healing, while its ability to recover without manual effort
is called self-configuration.

For continuous scaling, the cluster also provides for automatic load balancing.

The software looks for low water mark thresholds, and then starts distributing data and work to
other processors and storage.

As the customer adds more processing and storage, the clustering software automatically
continues to take advantage of the additional resources.

Since the cluster is self-healing, service can be provided at a relaxed pace.

If a disk or processor fails, the platform adjusts.

When the failed resources are replaced, the platform reconfigures and rebalances.

Page 1-24
Overview
HCP Objects – Protected


From bit flips
• The Content Verification service guarantees data is authentic, available and secure
• If corruption is discovered, alternate copies or replicas may be used for recovery

From modification or deletion
• Retention prevents modification or deletion for compliance
• OR versioning provides change tracking and prevents accidental deletion
• WORM regardless

From hardware failures
• Self-healing via RAID-6, redundant LUN mapping, data protection levels and
distributed services

From disaster
• Advanced replication topologies at the namespace level, covering objects, metadata
and their policies

HCP Objects – Secured

Encryption at rest
• Protects content from being recovered from stolen media using patented Secret
Sharing technology

Secure sockets layer (SSL)
• Secure communication for admin, replication, search and HTTP/WebDAV data traffic
• Self-signed or CSRs, per domain

Data access accounts
• Data access is restricted to users with permissions to read, write, delete, search or
perform privileged operations on data in the namespace
• Assigned at the namespace level

Access control lists (ACLs)
• Group or user permissions may be granted at the object level
• Metadata (XML, JSON) that is stored with an object

Active Directory (AD)
• System, tenant and data access accounts may be authenticated via AD
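HCP's native data access authentication combines a Base64-encoded username with an MD5-hashed password. A minimal sketch of building that token (verify the exact header form against your HCP version's documentation before relying on it):

```python
import base64
import hashlib

# Sketch of HCP's native data access credential token: Base64(username)
# joined with the hex MD5 of the password. The exact header syntax should
# be confirmed in the HCP HTTP API documentation.
def hcp_auth_token(username, password):
    user_b64 = base64.b64encode(username.encode()).decode()
    pwd_md5 = hashlib.md5(password.encode()).hexdigest()
    return f"{user_b64}:{pwd_md5}"

token = hcp_auth_token("lgreen", "p4ssw0rd")
# Sent as, for example: Authorization: HCP <token>
```

Because only a hash of the password crosses the wire in this scheme, pairing it with SSL (as described above) is still important to protect the rest of the request.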

Page 1-25
Overview
Protection Concepts


 HCP quorum rule

 Data protection level

 Protection sets

 Zero Copy Failover (ZCF)

 Multipaths


HCP quorum:

Hitachi Content Platform is a cluster in that it has the same properties as a cluster (for example,
heartbeat and voting for quorum), but it differs from the traditional notion of a cluster: HCP
handles read/write requests dramatically differently. The quorum is the minimum number of
nodes required to initially start the platform or keep it running (the rule: 50% + 1).

If 1 node fails in a 4-node system, the platform continues to run.

If 2 nodes fail in a 4-node system, the platform stops.

HCP is a clustered system, and will continue to run to the best of its ability in light of hardware
failures. Losing quorum – fewer than (N/2+1) nodes still running – is the farthest the cluster can
be pushed; after that the system can no longer maintain quorum and is forced into read-only
mode for ALL data. It is still partially functional.

As hardware fails, HCP will try its best to keep all its data hot and available. A lot depends on
the customer-defined data protection level (DPL). For instance, if you only have 1 copy per
namespace and both servers that manage the storage that copy lives on go down
simultaneously, you will have data unavailability (DU). The cluster as a whole may still be
running, but access to some data will be gone until one of the two are brought back online.
Customers can reduce the probability of DU in this particular case by increasing DPL at the cost
of usable storage.
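The quorum rule above can be sketched in a few lines:

```python
# Sketch of the quorum rule: the cluster needs at least (N // 2) + 1 nodes
# running to keep operating normally.
def has_quorum(total_nodes, failed_nodes):
    running = total_nodes - failed_nodes
    return running >= (total_nodes // 2) + 1

# Matches the 4-node examples above:
assert has_quorum(4, 1)        # 3 of 4 running: platform continues
assert not has_quorum(4, 2)    # 2 of 4 running: quorum lost
```

This also shows why larger clusters tolerate proportionally more failures: an 8-node system keeps quorum with up to 3 nodes down.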

Page 1-26
Overview
Protection Concepts

As nodes go down the system will strive as hard as it can to repair itself. For instance, when
one node goes down the system will try and automatically create a backup copy of metadata
somewhere else on the cluster where there is healthy, running hardware. If you have
simultaneous failures, this will limit the system’s ability to heal itself. If both nodes which host
the metadata for a group of objects go down simultaneously, you will have data unavailability
(DU) since the cluster will not have any “live” copy to create new metadata backups out of or to
promote.

HCP can and will self-heal itself to the best of its ability as hardware faults occur, but
concurrent faults will limit its ability to keep ALL data available online at ALL times. The more
nodes you have the higher the probability that the cluster can take hits and keep the entire
system and its corresponding data available.

Protection sets:

• To improve reliability in the case of multiple component failures, HCP tries to store all
the copies of an object in a single protection set.

• Each copy is stored on a logical volume associated with a different node.

• HCP creates protection sets for each possible DPL.

For example, if an HCP system has 6 nodes, it creates 3 groups of protection sets:

o 1 group of 6 protection sets with 1 node in each set (for DPL=1)

o 1 group of 3 protection sets with 2 nodes in each set (for DPL=2)

o 1 group of 2 protection sets with 3 nodes in each set (for DPL=3)

• Each namespace uses the group of protection sets that corresponds to its DPL.
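The 6-node example above can be sketched as a simple partitioning (illustrative only; HCP's actual placement logic is internal to the system):

```python
# Sketch of protection-set grouping: for a given DPL, nodes are partitioned
# into sets of DPL nodes each, and each copy of an object lands on a
# different node within one set.
def protection_sets(nodes, dpl):
    """Partition a node list into protection sets of `dpl` nodes each."""
    limit = len(nodes) - len(nodes) % dpl
    return [nodes[i:i + dpl] for i in range(0, limit, dpl)]

nodes = [1, 2, 3, 4, 5, 6]
assert len(protection_sets(nodes, 1)) == 6   # DPL=1: six sets of one node
assert len(protection_sets(nodes, 2)) == 3   # DPL=2: three sets of two nodes
assert len(protection_sets(nodes, 3)) == 2   # DPL=3: two sets of three nodes
```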

The Zero Copy Failover and multipath protection concepts apply to HCP G10 with attached storage only.

Page 1-27
Overview
Zero Copy Failover


 Data LUNs of a 2-node pair are cross-mapped between 2 host groups, creating 2 logical
paths from 2 nodes to the LUNs

 HCP recognizes the purpose of a volume by its H-LUN

 If one node fails, the other node in a cross-mapped pair can access the volumes

(Diagram: Node 1 and Node 2 each connect through Port 0A to host groups HG 000 and
HG 001, mapped to 1,130.3GB data LUNs; HG = Host Group)

Zero Copy Failover (ZCF) is also known as Data Access Path (DAP).

Data Encryption

 Protects content from being recovered from stolen media using patented secret sharing
technology
• Transparently encrypts all content, metadata
and search indexes
• User experiences a performance impact
• Implements a distributed key management
solution
 Does not impact SEC 17a-4 compliance
requirements
 Data-at-Rest means the data is written to disk

• Performance impact for encrypted content is expected to be 10% to 20%

• Enabled at install time only for new clusters

Page 1-28
Overview
Time Settings Compliance Mode


 Unauthorized or accidental change of time settings can lead to potentially dangerous
situations

 Time compliance mode will not allow anybody to make any changes to
time settings in the GUI


• Time compliance mode was first introduced in HCP v5.0.1

o Time compliance mode can be enabled during the installation or afterwards

o Time compliance mode does not allow time changes on the system

• Two time options on HCP:

o Internal clocks

o Network Time Protocol (NTP) synchronization

If somebody with the service role accidentally or intentionally changes time, for example 10
years ahead in the future, the system will accept the settings and files with retention offset
settings shorter than ten years will no longer be protected – retention expires and disposition
service starts deleting files. This falls outside the scope of legal compliance. NTP is
recommended together with time compliance mode. Furthermore, it is recommended that
multiple NTP servers are specified during or after installation.

Page 1-29
Overview
Compliance Features

This section covers compliance features.

Retention Times

Retention timeframes by industry:

Life Science/Pharmaceutical
• Processing food: 2 years after commercial release
• Manufacturing drugs: 3 years after distribution
• Manufacturing biologics: 5 years after manufacturing of product

Healthcare (HIPAA)
• All hospital records in original form: 5-year minimum for all records
• Medical records for minors: from birth to 21 years
• Full life patient care: length of patient's life + 2 years

Financial services (17a-4)
• Financial statements: 3 years
• Member registration for broker/dealers: end-of-life of enterprise
• Trading account records: end of account + 6 years

OSHA
• 30 years from end of audit

Sarbanes-Oxley
• Original correspondence: 4 years after financial audit

Source: ESG

While government regulations have a significant impact on content archiving and preservation
for prescribed periods, compliance does not necessarily require immutable or Write Once, Read
Many (WORM)-like media. In many cases, the need for corporate governance of business
operations and the information generated are related to the need to retain authentic records.
This requirement ensures adherence to corporate records management policies, as well as the
transparency of business activities to regulatory bodies. As this chart illustrates, the retention
periods for records are significant, from two years to near indefinite.

Page 1-30
Overview
Regulatory Compliance


 Retention modes for namespaces


 Enterprise - Default retention mode
 Compliance - Strict retention mode, optional

Enterprise mode – the compliance role has the ability to:
• Create retention classes
• Modify retention classes (increase and decrease duration)
• Delete retention classes
• Privileged delete (and privileged purge of all versions; these operations are always logged)

Compliance mode – the compliance role has the ability to:
• Create retention classes
• Modify retention classes (increase duration only)

• Note that Enterprise mode is always the default when you create a namespace. If you
wish to use Compliance mode, the "Retention Mode Selection" feature must first be
enabled on the tenant; the setting then becomes visible when creating/modifying a
namespace

Page 1-31
Overview
Retention Mode Selection for Tenants


System Management Console


Tenant Management Console


• To use certain features on the namespace level, these features must be first be enabled
for the tenant. Once you allow the tenant to use a feature, you cannot remove this
permission. The tenant can then use these features freely

• System administrator can enable retention mode selection for a tenant

• If feature is not enabled for a tenant, all its namespaces will be created in Enterprise
mode

Page 1-32
Overview
Change Retention Mode for Namespace


 It is possible to promote from Enterprise to Compliance retention mode


• Can promote from Enterprise to Compliance retention mode

• Tenant administrator cannot demote from Compliance to Enterprise mode

• Once you switch to Compliance mode, there is no going back to Enterprise mode.
Always consider switching to compliance mode carefully as there is no service procedure
that can remove WORM data stored in a namespace with compliance mode enabled

Page 1-33
Overview
Reviewing Retention


 Each object has a retention policy which:


• Determines how long an object must be kept
• Influences transactions and services on the object
• Has retention methods as:
 Special value
o Deletion allowed (0)
o Deletion prohibited (-1), or
o Initially unspecified (-2)
 Offset: years, months, days
 Fixed date
• Retention class: Named retention setting
 Retention hold

Retention hold: A condition that prevents an object from being deleted by any means or
having its metadata modified, regardless of its retention setting, until it is explicitly released.
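The retention methods above can be supplied when an object is ingested over the RESTful HTTP gateway. As a sketch (treat the exact query-parameter syntax as an assumption to verify against the HCP HTTP API documentation):

```python
from urllib.parse import urlencode

# Sketch: build a PUT URL that supplies a retention setting at ingest.
# The /rest path and "retention" query parameter mirror HCP's HTTP
# gateway conventions, but verify them against your version's docs.
def put_with_retention(base, path, retention):
    return f"{base}/rest/{path}?{urlencode({'retention': retention})}"

# "A+5y": an offset setting - keep five years after ingest
url = put_with_retention("https://ns1.tenant1.hcp.example.com",
                         "docs/contract.pdf", "A+5y")
```

The same parameter could carry the special values listed above (0, -1, -2), a fixed date, or a retention class name.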

Page 1-34
Overview
Default Retention Setting



Note: If you change the default retention setting, the new setting will not automatically
propagate to objects that were stored before the change. The retention setting is part of an
object's metadata. If you wish to change the retention setting of existing objects (for example
from "initially unspecified" to "offset"), you need to use an HCP Tools script to modify the
metadata of all existing objects, which can be performance intensive. Before you start using
the HCP system, you should have a clear idea of what kind of data you want or need to store
and with what retention settings.

Page 1-35
Overview
Privileged Delete or Purge


 Privileged Delete allows the ability to perform an audited delete, even if the object
is under retention
 Privileged Purge allows a compliance user to delete all versions of an object
 Privileged Delete or Purge is not allowed for objects under retention hold
 Privileged Deletes are logged


• If you have the compliance role on your account, you can also perform a privileged delete
using other gateways to data – for example, Common Internet File System (CIFS) or
HTTP (data migrator/curl)

• Privileged deletes will always be logged

Policies and Services


In this section, policies and services are discussed.

Page 1-36
Overview
Policies and Services

 Policies
• Settings that influence transactions and services on objects
• Set at the object or namespace levels
• DPL, indexing, retention, shredding and versioning

 Services
• Background processes that iterate over objects
• Services run according to service schedule
• Enable or disable, start or stop at the system level


 DPL: Since HCP v7, DPL is configured and managed as a service plan
• Indexing:

• If you wish to use Metadata Query Engine (MQE), the built-in search console, you need
to enable indexing on the namespace you want to search through

o If you plan to use HDDS, you don't have to use indexing; HDDS does that
on its own

• With indexing, you need to decide where to put the index database; you have 3 options:

o Shared volume – default option. One of the data LUs on each node will become a
shared volume; it will hold both user data and the index database

o IDX-only LU – you can dedicate volumes that will be used to hold the index
database; you need to use specific Host Logical Unit Number (H-LUN) numbers
for mapping and cross-mapping

o HCP500XL – the index database is stored on internal disks; this is the best option
if you plan to use MQE intensively or if you share the back-end storage system
with other applications/Hitachi Vantara products

Note: These options are available only in case of HCP 500; in HCP 300, the only place where
you can store the index database is on a shared volume.
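Once indexing is enabled, MQE can be queried over HTTP. A sketch of building a query request body (the JSON shape here is illustrative; consult the HCP namespace-search documentation for the exact schema):

```python
import json

# Sketch of a metadata query engine (MQE) request body. The JSON
# structure is an illustrative assumption, not the documented schema.
def mqe_query(expression, count=100):
    """Serialize a query expression into an MQE-style JSON body."""
    return json.dumps({"object": {"query": expression, "count": count}})

body = mqe_query("retention:A+5y AND hold:false", count=10)
```

A search client would POST such a body to the tenant's query endpoint and page through the results.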

Page 1-37
Overview
Policies and Services

HCP services are responsible for optimizing the use of system resources and maintaining the
integrity and availability of the stored data

• Services are responsible for enforcing policies

o Services run according to a specified schedule (for example, daily or weekly) or
in response to specific events

o Monitored at the system level with monitor or administrator role

o Enable/Disable at system level with Service role

o Start/Stop at the system level with Service role

• A service is a background process that performs a specific function that contributes to
the continuous tuning of the HCP system

• Services work on the repository as a whole, that is, they work across all namespaces

• The number of regions per node can actually be different

• For example on a 4 node HCP the default region count is 32 which is 8 regions per node.
It takes 8 runs for a service to process the entire repository

Page 1-38
Overview
Services


 You can disable or start specific services using the Services panel of
the System Management Console, Overview page if you have the
Service role


Default Service Schedule


Notice that you cannot modify the default schedule directly; to make changes, you need to
create a new schedule.

Page 1-39
Overview
Service Descriptions


Capacity balancing
• Ensures distribution of available object storage remains roughly equivalent across all
storage nodes

Compression
• Compresses object data, freeing space for additional storage

Garbage collection
• Deletes data and metadata left by incomplete operations
• Reclaims storage for deleted objects

Shredding
• Shreds deleted objects marked for shredding

Content verification
• Ensures integrity of objects by checking the cryptographic hash value; repairs an
object if the hash does not match
• User-selectable hash algorithms include SHA-1, SHA-256, SHA-384 or SHA-512; MD5
and RIPEMD-160

Disposition
• Deletes expired objects; service disabled by default – use caution if considering
enabling on an existing HCP with data

Duplicate elimination
• Finds and inspects duplicates
• Removes duplicates but maintains integrity

A customer wrote all his data with a 1 minute retention period and then, out of curiosity,
enabled the disposition service. He was a little upset when all his data was removed overnight!
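The anecdote above comes down to simple date arithmetic: a one-minute retention offset expires almost immediately, leaving every object eligible for disposition. A sketch (minutes are used here only for illustration; HCP retention offsets are expressed in years, months and days):

```python
from datetime import datetime, timedelta

# Sketch: when an offset-based retention setting expires. Minutes are used
# for brevity; they are not an HCP offset unit.
def expires_at(ingest_time, offset_minutes):
    """Retention expiry: ingest time plus the offset."""
    return ingest_time + timedelta(minutes=offset_minutes)

ingest = datetime(2018, 5, 21, 9, 0, 0)
expiry = expires_at(ingest, 1)   # a one-minute "retention period"
# One minute after ingest, the object is no longer under retention -
# and a running disposition service may delete it.
```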

Protection
• Maintains the set level of data redundancy, as specified by the DPL for each namespace
• Can be set to maintain 1 to 4 internal copies depending on the value of the data

Indexing
• Prepares objects to be indexed and found by specified criteria through the Search
Console
• Continually processes new objects and metadata changes

Scavenging
• Ensures objects have valid metadata by detecting and repairing violations

Replication
• Creates copies of objects to another system for recovery

Replication Verification Service
• The Replication Verification Service (RVS) checks that objects are being properly
replicated as specified in the service plan


Page 1-40
Overview
Geographically Distributed Data Protection


 Geo-protection is implemented by the HCP replication service

 The replication service is responsible for keeping tenants and namespaces on two or
more HCP systems in sync with each other

 Geo-protection offers several benefits:


• If a system in a replication topology becomes unavailable, a remote system
can provide continued data availability
• If a system in a replication topology suffers irreparable damage, a remote
system can serve as a source for disaster recovery
• If multiple HCP systems are widely separated geographically, each system
may be able to provide faster data access for some applications than the
other systems can

Page 1-41
Overview
Policy Descriptions


Retention
• Prevents file deletion before the retention period expires
• Can be set explicitly or inherited
• Deferred retention option
• Can set a retention hold on any file

Indexing
• Determines whether an object will be indexed for search

Custom metadata XML checking
• Determines whether HCP allows custom metadata to be added to a namespace if it is
not well-formed XML

Shredding
• Ensures no trace of a file is recoverable from disk after deletion

Versioning
• New object version is created when data changes
• Write Seldom Read Many (WSRM)

Service plans
• Tiering policies
• Tier to spindown (HUS only)
• Tier to cloud services
• Tier to HCP S10 and HCP S30
• Tier to NFS
• Tier to replica (metadata only)

• To set a retention policy, we need to set the retention mode and retention method.
Retention settings apply to new objects. To change retention settings for existing
objects, it is necessary to overwrite their system metadata

• Indexing is an on/off setting. If you want to make a namespace searchable by MQE,
enable indexing

• Custom metadata XML checking is turned off by default. With large custom metadata,
enabling it may slow down the system

• Shredding is an on/off setting. When enabled, deleted data is securely shredded

• Versioning is an on/off feature. You can configure automated version pruning
(automated deletion of old versions). Versioning does not work when CIFS/Network File
System (NFS) access to a namespace is enabled
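As a concrete illustration of the retention policy described above, the sketch below models a fixed retention offset (HCP expresses such offsets as, for example, "A+10y", meaning ten years after the object was added); the function names are hypothetical, not HCP APIs:

```python
from datetime import datetime

def retention_expiry(ingest: datetime, offset_years: int) -> datetime:
    """Hypothetical helper: expiry time for an offset such as 'A+<N>y'
    (N years after the time the object was added)."""
    try:
        return ingest.replace(year=ingest.year + offset_years)
    except ValueError:  # ingest on Feb 29, target year is not a leap year
        return ingest.replace(year=ingest.year + offset_years, day=28)

def deletion_allowed(now: datetime, ingest: datetime, offset_years: int) -> bool:
    # Under retention, HCP rejects deletion until the period expires.
    return now >= retention_expiry(ingest, offset_years)
```

A Retention Hold would simply force `deletion_allowed` to return False regardless of the expiry date.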

Page 1-42
Overview
Module Summary

Module Summary

 In this module, you should have learned to:


• Describe the Hitachi Content Platform (HCP) functionality and concepts
• Describe HCP virtualization: tenants and namespaces
• Identify key capabilities of HCP
• Identify available HCP configurations
• Identify the compliance features
• Describe the purpose of all HCP consoles, policies, and services

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 1-43
Overview
Module Review

Module Review

1. What kind of filesystem is used on HCP?

2. How many tenants and namespaces are supported on 8-node HCP?

3. How many HCP product configurations are there?

4. Is it possible to use HCP G10 nodes to upgrade existing systems?

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 1-44
2. Hardware Components
Module Objectives

 Upon completion of this module, you should be able to:


• Identify key hardware components of Hitachi Content Platform (HCP) G10
• Identify available hardware options
• Identify hardware components of Hitachi Content Platform (HCP) S10
• Identify hardware components of Hitachi Content Platform (HCP) S30

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 2-1
Hardware Components
HCP Components

HCP Components
This section describes the hardware components of HCP G10, HCP S10 and HCP S30.

HCP G10 Common Hardware

 2U server enclosure

 Redundant fans and power supplies (FRU)


(Left rear SATA HDD/SSD cage included – not shown)

 LSI RAID controller and SuperCap (not shown; FRU)

 Six 4TB hard disk drives (FRU)

 CPU and memory


• Two Intel E5-2620v3 CPUs
• 64GB memory (4 x 16GB DIMMs, FRU)

© Hitachi Vantara Corporation 2018. All rights reserved.

FRUs: Fans, PSU, RAID, Supercap, NIC, HDDs, SSDs, DIMMs

Page 2-2
Hardware Components
HCP G10 Optional/Future Hardware

HCP G10 Optional/Future Hardware

 Local storage six 4TB drive upgrade

 Additional memory (pairs)


• Up to 256GB (16 x 16GB DIMMs)

 Two 400GB or 800GB SSDs

 Ethernet networking options

 Future 1G management and service ports (software enabled)

© Hitachi Vantara Corporation 2018. All rights reserved.

• 4TB HDDs are used

• SSDs used for internal databases/metadata

• HW management and service ports

Note: Ethernet options discussed on the next slide

Page 2-3
Hardware Components
HCP G10 Ethernet Networking Options

HCP G10 Ethernet Networking Options

 All HCP G10 nodes (local or attached storage) can support 1G and 10G
networking with the following options:
Description                            FE Speed     FE Port   BE Speed   BE Port
2x10G motherboard, one 2x10G PCIe      1GbE/10GbE   BASE-T    1GbE       BASE-T
2x10G motherboard, one 2x10G PCIe      10GbE        SFP+      1GbE       BASE-T
2x10G motherboard, one 2x10G PCIe      10GbE        BASE-T    10GbE      SFP+
Two 2x10G PCIe (motherboard unused)    10GbE        SFP+      10GbE      SFP+

© Hitachi Vantara Corporation 2018. All rights reserved.

10GbE front-end with 1GbE back-end is optimized for HCP S node integration. HCP S nodes
support only 10GbE interface.

Page 2-4
Hardware Components
HCP G10 1/10Gb BASE-T FE/1G BASE-T BE

HCP G10 1/10Gb BASE-T FE/1G BASE-T BE

 Bonding will take place across the motherboard and PCIe card
slots/ports as shown

[Diagram: front-end and back-end primary/secondary port positions across the motherboard and PCIe card]

© Hitachi Vantara Corporation 2018. All rights reserved.

HCP G10 10Gb SFP+ FE/1G BASE-T BE

 Bonding will take place across the motherboard and PCIe card
slots/ports as shown

[Diagram: front-end and back-end primary/secondary port positions across the motherboard and PCIe card]

© Hitachi Vantara Corporation 2018. All rights reserved.

• RED = FE = Front-end
• BLUE = BE = Back-end
• PRI = Primary connection
• SEC = Secondary connection

Page 2-5
Hardware Components
HCP G10 10Gb BASE-T FE/10G SFP+ BE

HCP G10 10Gb BASE-T FE/10G SFP+ BE

 Bonding will take place across the motherboard and PCIe card
slots/ports as shown

[Diagram: front-end and back-end primary/secondary port positions across the motherboard and PCIe card]

© Hitachi Vantara Corporation 2018. All rights reserved.

HCP G10 10Gb SFP+ FE/10G SFP+ BE

 Bonding will take place across the motherboard and PCIe card
slots/ports as shown

[Diagram: front-end and back-end primary/secondary port positions across the motherboard and PCIe card]

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 2-6
Hardware Components
HCP G10 10Gb SFP+ FE/10G SFP+ BE

 For attached storage configurations, the Fibre Channel PCIe card is


installed in the same position for any of the Ethernet networking options

[Diagram: Ethernet primary/secondary port positions plus the Fibre Channel PCIe card ports]

© Hitachi Vantara Corporation 2018. All rights reserved.

• RED = FE = Front-end

• BLUE = BE = Back-end

• YELLOW = FC = Fibre Channel

• PRI = Primary connection

• SEC = Secondary Connection

Page 2-7
Hardware Components
Back-End Ethernet Switches

Back-End Ethernet Switches

 There is one new back-end Ethernet switch option available with HCP
G10

 Available options are:


Description Rack U Speed HCP Node Count Port Type
Brocade ICX6430 1U 1GbE <=22 BASE-T
Brocade VDX6740 1U 10GbE <=44 SFP+
Hewlett Packard 4208VL 5U 1GbE <=80 BASE-T
Cisco Nexus 5548 1U 10GbE <=44 SFP+
Cisco Nexus 5596 2U 10GbE <=80 SFP+

© Hitachi Vantara Corporation 2018. All rights reserved.

• One pair of back-end switches is required per HCP system

• Multiple HCP systems cannot share the same back-end network

• Virtual back-end network support is planned but not yet offered
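The switch table lends itself to a quick lookup. This sketch (not an HCP tool, just the table above as data) filters the available options by planned node count and back-end speed:

```python
# Back-end switch options from the table above:
# (model, speed, max HCP node count, port type)
SWITCHES = [
    ("Brocade ICX6430",        "1GbE",  22, "BASE-T"),
    ("Brocade VDX6740",        "10GbE", 44, "SFP+"),
    ("Hewlett Packard 4208VL", "1GbE",  80, "BASE-T"),
    ("Cisco Nexus 5548",       "10GbE", 44, "SFP+"),
    ("Cisco Nexus 5596",       "10GbE", 80, "SFP+"),
]

def backend_options(node_count, speed):
    # Return the models that run at the requested speed and can
    # accommodate at least this many HCP nodes.
    return [model for model, s, max_nodes, _ in SWITCHES
            if s == speed and node_count <= max_nodes]
```

For example, a 50-node 10GbE system leaves only the Cisco Nexus 5596.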

Page 2-8
Hardware Components
Fibre Channel Networking

Fibre Channel Networking

 Cisco 9148S (16Gb/sec) replaces Cisco 9148 (8Gb/sec)

 Available options are:


Description Rack U Speed HCP Node Count Port Type
Brocade 6510 1U 16Gb/sec 16 nodes/rack SFP
Cisco 9148S 1U 16Gb/sec 16 nodes/rack SFP

Note: HCP G10 nodes use 8Gb/sec Fibre Channel PCIe cards,
so effective speed per port is 8Gb/sec, not 16Gb/sec

© Hitachi Vantara Corporation 2018. All rights reserved.

Customer-supplied switches can be used if they are approved for use with HCP.

Metadata Indexes on SSDs (Optional)

 The customer can order their HCP G10 nodes with a pair of 400GB or
800GB SSDs

 The metadata database indexes will be kept on the SSDs, yielding


higher performance and scalability

 Two SSDs for redundancy (RAID-1)

 400GB (temporarily unavailable)

 800GB for large object counts

© Hitachi Vantara Corporation 2018. All rights reserved.

• Optional 2.5” SSD drives are located at the rear of the server

• There is a new service procedure for replacement of a failed SSD drive

Page 2-9
Hardware Components
Racked and Rackless

Racked and Rackless

 HCP G10 systems (Local and Attached) can be ordered with or without
racks

 Rackless systems will arrive at the customer site fully configured, but
will need to be racked in the customer supplied equipment

 Specifications for customer supplied racks are included in Center Take


Off (CTO) and onsite setup documentation
© Hitachi Vantara Corporation 2018. All rights reserved.

• In the past, all HCP systems were shipped together with a rack by default

• HCP power and weight calculator http://www.hds.com/go/weight-and-power-calculator/?

Page 2-10
Hardware Components
HCP S10 Node

HCP S10 Node

 Hardware features
• 4U enclosure with 60 HDDs and two Intel-based servers with mirrored SSDs for the software
• High availability design
• Hot swappable HDDs, servers and power supplies

© Hitachi Vantara Corporation 2018. All rights reserved.

• S10 nodes are connected to standard HCP nodes over the front-end (virtual) networks

• Four 10GbE network interfaces (SFP+)

o Optional adapters for 1GbE RJ45 are available

• Minimum of 2 switch ports required for connectivity

• Two separate 1GbE management ports are available

• The commodity hardware base is a 4U enclosure that houses up to 60 3.5” HDD and two Intel
based servers with SSD for the software

• Each server has one CPU with six cores and 32GB memory

• HCP S10 has a high availability design

• The major components can be replaced when the unit is under power and in production use

• Generic HCP nodes and HCP S series nodes communicate with each other over the front-end
network, on a secure separate VLAN if so desired

• Each server in the HCP S10 node has two SFP+ 10GbE network ports and one RJ45 1GbE
management port

• The minimum requirement for the connectivity is:

o Each server must have one SFP+ 10GbE connected to the (front-end) network switch

Page 2-11
Hardware Components
HCP S30 Node – Server Module

HCP S30 Node – Server Module

 The HCP S30 will use the 2U1N Hitachi Vantara server for the server
heads

 This is the same server as HCP G10

 Front view:

© Hitachi Vantara Corporation 2018. All rights reserved.

• Two servers + attached enclosures = one HCP S30 logical node

• We support up to 80 HCP S30 nodes

• That is 160 server modules and 160 racks with approximately 460PB capacity

Page 2-12
Hardware Components
HCP S30 Node – Server Module

 Rear view, 1GbE connections + SAS HBA:


[Diagram: rear view showing SAS ports 1-8, the management and service ports, the BMC ports, and the server interconnect ports]
© Hitachi Vantara Corporation 2018. All rights reserved.

Back-end networking:

• Ports 1-8 are SAS ports used to connect drive enclosures/trays

• Server Interconnect is used for heartbeat

• Management port is connected to HCP management network

• Service port is used for engineer access with laptop and LAN cable

Page 2-13
Hardware Components
HCP S30 Node – Enclosure Unit

 Rear view, 10GbE front-end connections:

[Diagram: rear view showing the optional dual-port PCIe SFP+ card and the default on-board 10G BASE-T ports]

© Hitachi Vantara Corporation 2018. All rights reserved.

Front-end networking

HCP S30 Node – Enclosure Unit

 Rails extend to allow the disks to be serviced from the front of the rack while the HCP S30 node is running

 Disks are labeled with SSD/SATA and capacity, and are color coded

 LEDs on front panel show system status

© Hitachi Vantara Corporation 2018. All rights reserved.

A fully populated tray is very heavy (approximately 110 kg); two people are required to perform assembly.

Page 2-14
Hardware Components
HCP S30 Node – Enclosure Unit

[Diagram: enclosure unit showing the 60-HDD bay, lock screws, baseboard PCB, power and cooling modules, power distribution board, rail-kit alignment, and 12G SAS I/O modules]
© Hitachi Vantara Corporation 2018. All rights reserved.

 The first 4 trays (1-4) are directly connected from both server modules
using 12Gb/sec mini-SAS HD cables

 The second 4 trays (5-8) are chained from trays 1-4 using 12Gb/sec
mini-SAS HD cables

 The third 4 trays (9-12) are directly connected from both server modules
using 12Gb/sec mini-SAS HD cables

 The fourth 4 trays (13-16) are chained from trays 9-12 using 12Gb/sec
mini-SAS HD cables

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 2-15
Hardware Components
Module Summary

Module Summary

 In this module, you should have learned to:


• Locate and obtain all Hitachi Content Platform (HCP) documentation
• Identify the hardware components of a Hitachi Content Platform
• Identify field replaceable units (FRUs)

© Hitachi Vantara Corporation 2018. All rights reserved.

Module Review

1. What is the purpose of optional SSDs and where are they located?

2. What network connectors are used in HCP systems?

3. What is the speed of HBA in SAN-attached configurations?

4. What network connection is used to connect S nodes?

5. What is Supercap?

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 2-16
3. Network Configuration
Module Objectives

 Upon completion of this module, you should be able to:


• Identify network interfaces

• Integrate Hitachi Content Platform (HCP) with Domain Name System (DNS)

• Configure virtual networks

• Understand concepts of IPv6 and link aggregation

• Integrate Content Platform with Active Directory (AD)

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 3-1
Network Configuration
Network Interfaces

Network Interfaces
In this section, you will identify network interfaces.

Networking

 A platform has two networks surrounding the nodes: one customer-facing network named system (front-end) and one for inter-node communication traffic named back-end
• System (front-end or Bond 0) public network for customer/application interaction with the platform
 2 connections: 1 primary, 1 secondary

• Back-end private network (Bond 1) for inter-node communication and coordination


(heartbeat, file copy, and others)
 2 connections: 1 primary, 1 secondary

 Options for Eth ports include: 1Gb/sec or 10Gb/sec, Base-T or SFP+

 HCP supports virtual networking with up to 200 VLANs

© Hitachi Vantara Corporation 2018. All rights reserved.

Use color-coded LAN cables for HCP network integration. An HCP system is shipped with red
and blue cables that support the back-end network. For the front-end network, obtain
yellow and green cables yourself.

• HCP supports link aggregation; connections are then active-active

• HCP supports IPv4, IPv6 and dual mode

• HCP G10 servers support a BMC port

• HCP G10 servers support a separate management port

• Presently, front-end network is used for all data traffic as well as for replication and
management access. VLANs can be used to separate management, data and
replication traffic

• HCP supports virtual networking with up to 200 VLANs

• VLAN setup and advanced networking are discussed in detail in the implementation
course

Page 3-2
Network Configuration
LAN Connections Review

LAN Connections Review

 Front-end, or customer-facing LAN (external)


• Customer supplied addresses and called Bond 0
address
 Primary = Green (eth0) recommended
 Secondary = Yellow (eth2) recommended

 Back-end, or Private LAN (internal)


• Created at cluster-build time and called Bond 1
address
 Primary = Blue (eth1)
 Secondary = Red (eth3)

© Hitachi Vantara Corporation 2018. All rights reserved.

For port layout, please refer to server documentation.

Interfaces are always bonded; that means you always have two IP addresses per node (back-end, front-end) even though there are four physical ports.

HCP Connectivity: LAN and Fibre Channel

[Diagram: four HCP nodes, each with a primary and a secondary connection to the public/system LAN, and primary and alternate paths through redundant 16Gb Fibre Channel switches to modular storage ports 0A, 0B, 1A and 1B, with LUN 1 and LUN 2 provisioned from RAID Groups RG 00-03 (each LU: RG size ÷ 2)]

Note: To handle the 4 nodes, each storage port has 2 host groups. The example shows a modular storage system with LUs provisioned from individual RAID Groups; the LUs could be provisioned from HDP pool(s).
© Hitachi Vantara Corporation 2018. All rights reserved.

Page 3-3
Network Configuration
DNS Configuration

DNS Configuration
This section covers information on DNS configuration.

DNS Service

 Domain Name System (DNS) is a network service that translates, or


resolves domain names (for example, example.com) into IP addresses
for client access
 An HCP system exists as a subdomain of a corporate domain
 The HCP DNS manager, which runs on all storage nodes, is responsible
for distributing client requests among the nodes in the system
 Configure the HCP subdomain for DNS to enable access to HCP by its
system name

© Hitachi Vantara Corporation 2018. All rights reserved.

• DNS is a network service that translates, or resolves domain names (for example,
example.com) into IP addresses for client access. The service is provided by one or
more servers, called name servers, that share responsibility for resolving client requests.
The domain names resolved by DNS are divided into zones, where each zone is defined
by set of related hostnames

• An HCP system exists as a subdomain of a corporate domain. Typically, you configure


the HCP subdomain as a secondary zone. All the HCP nodes belong to this single
subdomain and can, therefore, appear as a single entity to client applications. The HCP
DNS manager, which runs on all storage nodes, is responsible for distributing client
requests among the nodes in the system

• Configure the HCP subdomain for DNS to enable access to HCP by its system name

• HCP automatically generates name server records for all storage nodes in the system.
Each storage node stores a copy of these records

• Before HCP can accept client requests, we need to register all of these HCP storage
nodes as master name servers for the HCP secondary zone

Page 3-4
Network Configuration
Name Resolution

Name Resolution

 DNS is able to handle 1:N relations

 A DNS query for an FQDN will be answered with all available IP addresses

 It is up to the application to pick one

 A properly designed application will make use of all available nodes

 HCP and DNS can communicate to keep DNS's information current

[Diagram: a query for admin.hcp.dom.com against the hcp.dom.com domain returns all node addresses: 192.168.0.10, .11, .12, .13]
© Hitachi Vantara Corporation 2018. All rights reserved.

• Network entities are addressed by their Fully Qualified Domain Names (FQDNs), while
network communication is based on IP addresses

• A mapping service is therefore required, namely the Domain Name System (DNS)

Using DNS is essential, as it helps:

• to optimally use HCP resources

• with HCP failover/failback, if configured correctly

But:

• It depends on the application to make proper use of what DNS returns

• A brief service interruption is to be expected even with seamless application failover
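The 1:N behavior is easy to observe from any client. This sketch, using only the Python standard library, resolves every address behind an FQDN and picks one per request (the round-robin choice is illustrative; real applications may use other strategies):

```python
import socket

def resolve_all(fqdn, port=443):
    # Ask DNS for every address behind the FQDN; for an HCP domain
    # this returns the front-end address of each storage node.
    infos = socket.getaddrinfo(fqdn, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

def pick_node(addresses, request_no):
    # A well-designed application spreads requests across all nodes,
    # for example round-robin.
    return addresses[request_no % len(addresses)]
```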

Page 3-5
Network Configuration
Name Resolution – Best Practice

Name Resolution – Best Practice

 In corporate DNS, configure a secondary zone per domain created in HCP

 In HCP, enable notify and configure the corporate DNS servers as downstream DNS servers

 If HCP is replicated:
• Configure HCP to replicate its domains and certificates
• Add the remote HCP's IP addresses to the end of the list of masters for each zone configured in corporate DNS

 This allows DNS to pick up notification of HCP being failed over

© Hitachi Vantara Corporation 2018. All rights reserved.

• Configuring corporate DNS servers as downstream DNS servers allows HCP to actively inform DNS about changes it undergoes

Page 3-6
Network Configuration
Shadow Master Functionality

Shadow Master Functionality

Page 3-7
Network Configuration
DNS Notify

DNS Notify

 DNS updates are no longer passive

 DNS updates and Start of Authority (SOA) expirations are now customizable to take place, regardless of cluster state, every N minutes

 Benefit to the user: secondary zone


deployments can now utilize a DNS
refresh for failover updates and
other tasks

© Hitachi Vantara Corporation 2018. All rights reserved.

Downstream DNS configuration for networks

• Introduced in HCP v6.1

• Introduced primarily to support GE use case of DNS Shadowmaster

• Downstream DNS settings configured are per network

• Prerequisites:

o DNS needs to be enabled for HCP system via the SMC > Configuration > DNS
Settings page

Upgrades to HCP v7.0

• Previous configuration settings (if any) are preserved

New HCP v7.0 installs

• As with HCP v6.1, option available for customer to configure and use

Page 3-8
Network Configuration
VLAN Configuration

VLAN Configuration
This section covers information on VLAN configuration.

Virtual LANs (VLANs)

Virtual Local Area Network:

 A concept of partitioning a
physical network so that distinct
broadcast domains are created

 VLAN membership is software


configurable, reducing the need
for physical re-locations vlan10 vlan20 vlan30

 VLANs allow for traffic


segregation

© Hitachi Vantara Corporation 2018. All rights reserved.

HCP Integration With VLANs

HCP can integrate with up to 200 VLANs, where each one maps to a network in HCP terms.

Page 3-9
Network Configuration
Network Segregation

Network Segregation

• VLANs are always created on the front-end physical network

• No VLANs can be created on the back-end network

SMC Advanced Settings

 SMC > Configuration > Networks >


Advanced Settings

 Enabled IP modes displayed on this


page will match IP modes selected
during installation

 Disable IPv4 when system is ready


and to be converted to IPv6 only

 Enable IPv6 here for a dual stack


system if originally IPv4 only

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 3-10
Network Configuration
SMC Network Configuration

SMC Network Configuration

System Management Console > Configuration > Networks

 Updated Networks pages under the primary configuration menu

 Side menus
• Network view
• Node view
• All zone definitions
• Advanced settings

 With virtual network management feature enabled, users will be able to:
• Create network
• Create network alias
© Hitachi Vantara Corporation 2018. All rights reserved.

SMC: System Management Console

Create Network – Step 1 : Settings

SMC > Configuration > Network View > Create Network


Create Network wizard
Three steps:
1. Settings
2. IP configuration
3. Review
Step 1 – Settings:
 Network name
 Description (optional)
 VLAN ID
 MTU
 Domain
© Hitachi Vantara Corporation 2018. All rights reserved.
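The Settings step amounts to simple input validation. The sketch below checks the numeric fields against general networking rules (802.1Q allows VLAN IDs 1-4094; MTUs commonly range from the IPv6 minimum of 1280 up to 9000-byte jumbo frames); the exact limits HCP enforces may differ:

```python
def validate_network_settings(name, vlan_id, mtu):
    # Illustrative validation only; HCP's actual limits may differ.
    if not name:
        raise ValueError("network name is required")
    if not 1 <= vlan_id <= 4094:      # valid 802.1Q VLAN ID range
        raise ValueError("VLAN ID must be between 1 and 4094")
    if not 1280 <= mtu <= 9000:       # IPv6 minimum up to jumbo frames
        raise ValueError("MTU must be between 1280 and 9000")
    return {"name": name, "vlan_id": vlan_id, "mtu": mtu}
```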

Page 3-11
Network Configuration
Create Network – Step 2 : IP Configuration

Create Network – Step 2 : IP Configuration

 IP Mode SMC > Configuration > Networks > Create Network

 IPv4 Configuration

 IPv6 Configuration

© Hitachi Vantara Corporation 2018. All rights reserved.

IP Mode

• If HCP system is enabled for Dual Stack mode, each network may be configured for Dual
Stack, IPv4 only, or IPv6 only

• [hcp_system] network must be configured with IPv4 and IPv6 settings as required by
virtual networks

IPv4 Configuration

• Section visible if IPv4 mode is selected

• Gateway

• Netmask

IPv6 Configuration

• Section visible if IPv6 mode is selected

• Gateway & Prefix Length for IPv6 address (primary & required)

• Gateway & Prefix Length for IPv6 secondary address (optional)

Page 3-12
Network Configuration
Create Network – Step 3 : Review

Create Network – Step 3 : Review

1. Review Settings. SMC > Configuration > Networks > Create Network

2. Review IP Configurations.

3. Use Previous button to


navigate back to make any
change.

4. Click on Finish to create


network.

5. Next step – add node IP


addresses.
© Hitachi Vantara Corporation 2018. All rights reserved.

Add Node IP Addresses

 HCP will navigate the user to the IP Configuration tab of the newly created network

 Enter an IP address for each of the nodes on the system

 HCP calculates IPv6 addresses when the user selects the Calculate Primary or Calculate Secondary button

© Hitachi Vantara Corporation 2018. All rights reserved.

Note: A "Network has no node IP address" error will be displayed until node IPs are properly configured.
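HCP's auto-calculation logic is internal, but the idea can be sketched with the standard library's ipaddress module: derive one address per node inside the network's prefix, offset from the gateway by the node ID (the offset scheme here is an assumption for illustration):

```python
import ipaddress

def node_addresses(gateway, prefix_len, node_ids):
    # Derive per-node IPv6 addresses within the network's prefix.
    net = ipaddress.IPv6Network(f"{gateway}/{prefix_len}", strict=False)
    gw = ipaddress.IPv6Address(gateway)
    addrs = []
    for node_id in node_ids:
        addr = gw + node_id          # offset from the gateway (assumed scheme)
        if addr not in net:
            raise ValueError(f"node {node_id} falls outside {net}")
        addrs.append(str(addr))
    return addrs
```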

Page 3-13
Network Configuration
SMC Network View

SMC Network View

SMC > Configuration > Networks > Network View

 Ability to find a network by Name or IP Mode

 Ability to page through lists of networks

 Table displays overview information for each network


• Name
• IP Mode
• Subnets
• Domain

© Hitachi Vantara Corporation 2018. All rights reserved.

SMC > Configuration > Networks > Network View tabs

With an HCP system in dual stack mode, each network can be configured for IPv4 only, IPv6 only, or dual stack. As a convenience, HCP v7.0 provides the ability to auto-calculate IPv6 addresses, since they can be cumbersome to enter manually.
© Hitachi Vantara Corporation 2018. All rights reserved.

Page 3-14
Network Configuration
SMC Node View

SMC > Configuration > Networks > Network View > Settings >
Downstream DNS Configuration

© Hitachi Vantara Corporation 2018. All rights reserved.

SMC Node View

SMC > Configuration > Networks > Node View

 Ability to find a node by Node ID or status

 Ability to page through lists of nodes

 Table displays overview information for each node

• Node ID
• Status
• Back-end IP Address

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 3-15
Network Configuration
Link Aggregation and IPv6 Support

Link Aggregation and IPv6 Support

Communication Type            [hcp_system]   [hcp_backend]   VNeM

System Management Console     ✓ ✓
Tenant Management             ✓ ✓
Multicast Communication       ✓
Cluster Health                ✓ ✓
Data Access                   ✓ ✓
MAPI                          ✓ ✓
Replication                   ✓ ✓
NTP                           ✓ ✓
SNMP                          ✓
DNS                           ✓ ✓
© Hitachi Vantara Corporation 2018. All rights reserved.

In case a VLAN is created, some of the functions of hcp_system network can be serviced
through the VLAN.

Link Aggregation
You will get information on link aggregation and IPv6 support.

Page 3-16
Network Configuration
Link Aggregation

 HCP currently provides active-passive bonding for the front end


interface

 The active-active link aggregation requires a single front end switch for
both ports

 The customer’s switch must also support 802.3ad to take advantage of


the active-active bonding

 Writes need to come from multiple clients to gain any benefit

© Hitachi Vantara Corporation 2018. All rights reserved.

• HCP currently provides active-passive bonding for the front-end interface. This means
that HCP can take advantage of only a single 1GbE network port's performance. This
feature allows the customer to configure the front-end interface for active-active
bonding using 802.3ad

• This setting affects all the nodes in the system and cannot be done on a node-by-node
basis

• The active-active link aggregation requires a single front end switch for both ports

o This will reduce some of the high-availability capability since a single switch
failure results in loss of connectivity

• The customer’s switch must also support 802.3ad to take advantage of the active-active
bonding. However, the active-active bonding provides failover capability if a single link is
lost

• Writes need to come from multiple clients to gain any benefit
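The multiple-clients requirement follows from how 802.3ad distributes frames: a hash of packet headers pins each flow to one physical link. The toy hash below illustrates this (real bonding drivers use configurable layer-2 or layer-3+4 policies, not this exact function):

```python
def lacp_link(client_mac, links=2):
    # Toy 802.3ad-style distribution: hash the source MAC onto a link.
    # One client's traffic always lands on the same physical port, so a
    # single writer never uses more than one link's bandwidth.
    return int(client_mac.replace(":", ""), 16) % links
```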

Page 3-17
Network Configuration
IPv4 Running Out Of Room

IPv4 Running Out Of Room

Welcome to Network Setup


 Network setup is available
Enter the front-end IP address []: x.x.x.x
Enter the front-end netmask [255.255.255.0]: y.y.y.y • During OS installation
Enter the front-end gateway IP address [x.x.x.1]: z.z.z.z
Enter the front-end bonding mode [active-backup]: [active- • After the system has been
backup|802.3ad]
Is the front-end network a VLAN? [No]: Yes installed
Enter the front-end VLAN ID [2]: ##

Enter the back-end IP address []: b.b.b.b

You have entered the following network configuration:


Front-end IP address: x.x.x.x
Front-end netmask: y.y.y.y
Front-end gateway IP address: z.z.z.z
Front-end bond mode: [active-backup|802.3ad]
Front-end VLAN ID: ##

Back-end IP address: b.b.b.b

Is this correct (yN): y


© Hitachi Vantara Corporation 2018. All rights reserved.

IPv6 Support for HCP

▪ IPv4 protocols allocate 32 bits

▪ Number of devices in the world far exceeded the IP addresses accommodated by IPv4

▪ IPv6 brings in 128 bits

▪ Satisfies 2^128 addresses, allowing 7.9×10^28 times more addresses

▪ Includes built-in features like security and protocol efficiency
Note: Images from: www.worldipv6launch.org © Hitachi Vantara Corporation 2018. All rights reserved.
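The "7.9×10^28 times more addresses" figure on the slide can be checked directly:

```python
ipv4_total = 2 ** 32                 # IPv4: 32-bit addresses
ipv6_total = 2 ** 128                # IPv6: 128-bit addresses
ratio = ipv6_total // ipv4_total     # exactly 2 ** 96, about 7.9e28
```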

Page 3-18
Network Configuration
Authentication With AD

Authentication With AD

 Dual IPv4 and IPv6 support on the front-end network

 Leveraging existing functionality in IPv6

 Support for standard networking and access protocols

 Seamlessly integrate with existing data center infrastructure
© Hitachi Vantara Corporation 2018. All rights reserved.

Dual IPv4 and IPv6 support on the front-end network

• Support for dual stack (IPv4 or IPv6), native IPv4, and native IPv6 operations

• Transition to IPv6 with dual stack support

• HCP supports all applications during the migration, regardless of which IP version they
support

Support for standard networking and access protocols

• Neighbor Discovery Protocol

• Internet Control Message Protocol v6 (ICMPv6), ping6, traceroute6

• Host name and address resolution with DNS over IPv6

• SNMP, access protocols (CIFS, HTTP, NFS, SMTP), and secure HTTPS access over IPv6

• SSH

Leveraging existing functionality in IPv6

• IPv6 increases IP address size from 32 bits to 128 bits, providing 340 undecillion
(approximately 3.4 × 10^38) addresses

Page 3-19
Network Configuration
Authentication With AD

• Better built in security – authentication, encryption, and protection at the network layer

• True end-to-end connectivity – no need for network address translation (NAT) and
triangular routing eliminated

Seamlessly integrate with existing data center infrastructure

• Active Directory

• DNS server

• RADIUS server

• Time server

Page 3-20
Network Configuration
Support for Active Directory: Introduction

Support for Active Directory: Introduction


This section provides information on how to authenticate with Active Directory.

Support for Active Directory: Feature Details

What is it?
• It enables customers to perform their HCP user administration in Active
Directory and use it for HCP user/account authentication
• It merges management users and data access accounts into one user to
facilitate a consistent security experience

Benefits
• Allows customers to comply with corporate security policies and
procedures
• Includes HCP in a pool of devices that support single sign-on for users
• Has a single repository of users to access multiple HCPs
• Manage roles and access based on groups

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 3-21
Network Configuration
Active Directory: Configuration

Active Directory: Configuration

 Users setup in Active Directory (AD)


 Authentication for HCP (CIFS, REST and Management Consoles)
 Single sign-on avoids unnecessary login screens
 Setup roles and access permissions for AD groups on HCP
 AD users role/access based on which AD groups they belong to

© Hitachi Vantara Corporation 2018. All rights reserved.

Feature Details:

• Each tenant selects authentication (local, AD, RADIUS)

• Each tenant can be a separate Organization/Domain

• Supports AD Forests, Domains and Organizations

• HCP can use AD certificate to connect to AD

REST: Representational State Transfer

CIFS: Common Internet File System

Page 3-22
Network Configuration
Active Directory: Groups

Active Directory: Groups

 New checkbox to opt-out of


adding computer account to
groups

 New radio button to select


the level of SSO support
desired

 New text field to enter root


domains of trusted forests, if
any

© Hitachi Vantara Corporation 2018. All rights reserved.

Trusted Forest list is comma-separated.

AD can be joined either with or without domain certificate (SSL).

Domain name and domain user credentials are required to make the connection.

 Once the connection to AD is


established, AD groups
appear in SMC

 Then they are treated the


same way as local HCP
accounts

 Multiple forests are supported

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 3-23
Network Configuration
Module Summary

Module Summary

 In this module, you should have learned to:


• Identify network interfaces

• Integrate Hitachi Content Platform (HCP) with Domain Name System (DNS)

• Configure virtual networks

• Understand concepts of IPv6 and link aggregation

• Integrate HCP with Active Directory (AD)

© Hitachi Vantara Corporation 2018. All rights reserved.

Module Review

1. How many IP addresses are assigned to a node without VLANs?

2. What network connectors are used in HCP systems?

3. What network connection is used to connect S nodes?

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 3-24
4. Administration
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the purpose of Hitachi Content Platform (HCP) management
consoles
• Describe system and tenant users and their roles and permissions
• Apply permission masks and register new storage components
• Create storage pools and storage tiering policies – service plans
• Apply service plans to tenants and namespaces
• Add S Series node

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 4-1
Administration
HCP Consoles

HCP Consoles
This section covers information on HCP consoles.

How to Access HCP GUIs

[Diagram: access paths to the HCP GUIs]
• System user account > System Management Console: https://172.25.1.59:8000, https://10.0.0.59:8000, or https://admin.hcp.hitachi.com:8000
• Tenant user account > Tenant Management Console: https://t-name.hcp.hitachi.com:8000
• Tenant user account > Tenant Search Console: https://t-name.hcp.hitachi.com:8888
• Namespace tenant user account > Namespace Browser: https://ns-name.t-name.hcp.hitachi.com

© Hitachi Vantara Corporation 2018. All rights reserved.

• The System Management Console can be accessed via any front-end or back-end IP address, provided you specify port 8000

• The Tenant Management Console can be accessed with the tenant DNS name and port 8000

• To access data (Namespace Browser), do not specify any port number

• System Management Console: https://admin.hcp.hitachi.com:8000

• Tenant Management Console: https://t-name.hcp.hitachi.com:8000

• MQE Search Console: https://t-name.hcp.hitachi.com:8888

• Namespace Browser: https://ns-name.t-name.hcp.hitachi.com
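The URL patterns above can also be generated programmatically. A minimal sketch using the example names from this page (admin host on port 8000, MQE Search Console on port 8888, Namespace Browser on the default HTTPS port):

```python
def hcp_console_urls(cluster_domain, tenant=None, namespace=None):
    """Build HCP console URLs following the patterns listed above."""
    # System Management Console always answers on port 8000
    urls = {"system_management": f"https://admin.{cluster_domain}:8000"}
    if tenant:
        # Tenant Management Console (8000) and MQE Search Console (8888)
        urls["tenant_management"] = f"https://{tenant}.{cluster_domain}:8000"
        urls["search_console"] = f"https://{tenant}.{cluster_domain}:8888"
    if tenant and namespace:
        # Namespace Browser uses the default HTTPS port (no port suffix)
        urls["namespace_browser"] = f"https://{namespace}.{tenant}.{cluster_domain}"
    return urls

for name, url in hcp_console_urls("hcp.hitachi.com", "t-name", "ns-name").items():
    print(name, url)
```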

Page 4-2
Administration
System Management Console

System Management Console

Tenants:
• Create new tenants
• View/edit tenant details

Security:
• Permissions
• Domains and certificates
• Network Security
• Console Security
• MAPI
• Search Security
• Users
• Authentication
• RADIUS

Services:
• Schedule
• Compression
• Content Verification
• Duplication Elimination
• Garbage Collection
• Replication
• Search
• Shredding

Configuration (requires Service role):
• Branding
• DNS
• Miscellaneous
• Monitored Components
• Networks
• Time
• Upgrade

Monitoring:
• System Events
• Resources
• Syslog
• SNMP
• Email
• Charge Back
• Internal Logs

• The F5 keystroke causes the window to refresh

• Configuration menu is visible only to users with service role

• User management is visible only to users with security role

• Tenant, storage and service management is visible only to users with admin role

• It is possible to grant multiple roles to one user

Page 4-3
Administration
Tenant Management Console

Tenant Management Console


• Create new namespaces

• View/edit namespace

o Overview

o Policies: Indexing, Metadata, Retention, Shredding

o Services: Disposition, Protection, Replication, Search

o Compliance: Privileged Delete, Retention Classes

o Protocols: HTTP, NFS, CIFS, SMTP

o Monitoring: All events, Compliance events, Irreparable objects

o Settings: ACLs, Compatibility, Retention Mode, Tags

Tenant admin manages tenant user accounts, data permissions and namespaces.

Page 4-4
Administration
Namespace Browser

Namespace Browser

 Use a user account with data access permission:
https://namespace.tenant.<cluster-domain-suffix>

 Click the trash can icon to delete an object if deletion is allowed or the retention period has expired; if the object is under retention, the trash can icon is not displayed

 Click to list versions of an object

The Namespace Browser is very good for seeing what is in the namespace, although it is not very useful for uploading data because you can upload only one file at a time.

System Users
This section describes system users and their roles and permissions.

Page 4-5
Administration
User Roles: System Management Console

User Roles: System Management Console

 Monitor role

 Administrator role

 Security role

 Compliance role

 Search role

 Service role


Monitor role

• Grants permission to use the System Management Console to view the system status
and most aspects of the platform configuration

• Cannot view user accounts

Administrator role

• Grants permission to use the Administration Console to view the system status and
perform most platform configuration activities

• Cannot view or configure user accounts

Security role (the only role of the default starter account after a system build)

• Grants permission to use the System Management Console to view the system status
and create and manage user accounts

• Can perform platform configuration activities reserved for security users

• Cannot perform platform configuration activities reserved for users with the
administrator role

Page 4-6
Administration
User Roles: System Management Console

Compliance role

• Grants permission to use the Tenant Management Console to:

o Work with retention classes and retention-related settings

o Perform privileged deletes

o Use the System Management Console to view HCP system status

Search role

• Grants permission to use the Search Console (all activities)

Service role

• Grants permission to use the System Management Console to view the HCP status and
perform advanced system reconfiguration and management activities

• Cannot view or configure user accounts

Page 4-7
Administration
User Authentication

User Authentication

 Local Authentication (by HCP)

 Radius Authentication

 Active Directory (AD)

 OpenStack Keystone


When logging in to one of the Hitachi Content Platform (HCP) consoles or APIs, the user needs to be authenticated by one of the following methods.
User authentication can be local, remote (RADIUS), Active Directory, or OpenStack Keystone.
• Local Authentication (by HCP)
o The user’s password is stored in the platform
o HCP checks the validity of the login internally
• Radius Authentication
o HCP securely sends the specified username and password to a RADIUS server for
authentication
o The RADIUS server checks the validity of the login and sends the result back to
the platform
o HCP allows user access to the target console or API
• Active Directory (AD)
o HCP securely sends the specified username and password to AD for
authentication
o If the credentials are valid, HCP allows user access to the target console or API
• OpenStack Keystone
o A Keystone Authentication Token service was introduced with Hswift API and can
be used when HCP solution is integrated with OpenStack

Page 4-8
Administration
Starter Account

Starter Account

 Only one account exists after a fresh HCP installation:


• Username: security
• Password: Chang3Me!
• Roles: Security
• Authentication: Local
• Enabled
• Password change required


• You can delete this account after creating another locally authenticated account with the
security role

• HCP enforces the existence of at least one locally authenticated security account at all
times

Tenant Users
This section describes tenant users and their roles and permissions.

Page 4-9
Administration
Tenant-Level Administration

Tenant-Level Administration

 Tenants, except the default tenant, have their own administrative user accounts for access to the Tenant Management Console

 Tenant security administrators define tenant-level user accounts in the Tenant Management Console

 HCP system-level users with the monitor, administrator, security, or compliance role automatically have access to the Tenant Management Console for the default tenant

 An HCP tenant can grant system-level users administrative access to itself

• Tenants, except the default tenant, have their own administrative user accounts for
access to the Tenant Management Console

o The roles available are monitor, administrator, security and compliance

• Tenant security administrators define tenant-level user accounts in the Tenant


Management Console

• HCP system-level users with the monitor, administrator, security, or compliance role
automatically have access to the Tenant Management Console for the default tenant

o The default tenant does not have administrative users of its own

• An HCP tenant can grant system-level users administrative access to itself

o This enables system-level users with the monitor or administrator role to log into
the Tenant Management Console for that tenant, or to access the Tenant
Management Console directly from the System Management Console

o For the default tenant, this access is enabled automatically and cannot be
disabled

Page 4-10
Administration
Tenant User Account

Tenant User Account

 To access the data in an HCP namespace, users and applications must present valid credentials

 These credentials are defined by a tenant user account, which specifies the following:
• A username and password
• Namespaces the user or application can access – the same user could have a different user account for each of several namespaces
• Operations (permissions) the user or application can perform in each of those namespaces

Tenant User Account Creation

Tenant user accounts provide access to namespace data through:
• REST API
• Namespace Browser
• Search Consoles

Page 4-11
Administration
Data Access Permissions Example

Data Access Permissions Example


Namespace permissions can be:

• Browse – Will allow Namespace Browser login but will not allow any other operation,
including read or write

• Read – Allows the user to read a file

• Write – Allows the user to write a file

• Delete – Allows the user to delete a file which is not Write-Once, Read-Many (WORM)
protected and does not have multiple versions

• Purge – Allows the user to delete all versions of a file which is not WORM protected and has
multiple versions

• Privileged – Allows the user to perform privileged delete operation for WORM protected files
in a namespace running in Enterprise mode for which the user must also have compliance
management role

• Search – Allows the user to log in to the tenant Search Console and perform search operations

• Read ACL and Write ACL – Active Directory related, further information can be found in the
documentation

• Change Owner – change namespace owner

• Allow namespace management – used when using third party HS3 API clients

Page 4-12
Administration
Permission Masks

Permission Masks
In this section, you will learn how to apply permission masks and register new storage
components.

 Data Access Permission Mask determines which operations are allowed

• Masks are set at the system, tenant, and namespace levels
• The effective permissions for a namespace are the operations that are allowed by the masks at all 3 levels – it is an aggregate of the 3 masks

Page 4-13
Administration
Permissions Classifications

Permissions Classifications

• Read – Read and retrieve objects and metadata, and list directory contents. De-selecting read automatically de-selects search.

• Write – Add objects to a namespace, modify metadata, and add/replace custom metadata.

• Delete – Delete objects and custom metadata from a namespace. De-selecting delete automatically de-selects purge.

• Purge – Delete all versions of an object with a single operation. To purge, delete must also be allowed; selecting purge automatically selects delete.

• Privileged – Audited delete or purge of objects, including objects under retention.

• Search – Use the Search Console to search namespaces. To search, read must also be allowed; selecting search automatically selects read.

• System user with security role can enable/disable permissions across HCP

o Affects all tenants and their namespaces

• Tenant user with security role can enable/disable permissions

o Across all namespaces

o For each namespace

Page 4-14
Administration
System-Wide Permission Mask

System-Wide Permission Mask

 System user with Security role sets the System-Wide Permission Mask

If you disable delete operations using the System-Wide Permission Mask, all delete operations will be disabled for all tenants and their users. The system can be put into read-only mode here by disabling all write and delete operations.

Page 4-15
Administration
Tenant Permission Mask

Tenant Permission Mask

 Tenant user with Security role can edit the permission mask for the tenant

The tenant permission mask can override tenant users' permissions. If you disable write operations using the Tenant Permission Mask, no user of this tenant will be able to write data. Other tenants will not be affected by this change.

Page 4-16
Administration
Namespace Permission Mask

Namespace Permission Mask

 Tenant user with Security role can edit the permissions for a namespace

Namespace Permission Mask allows you to disallow certain operations for all namespace users.
For example, you can disable delete operations using Namespace Permission Mask for all tenant
users. Other namespaces within a tenant will not be affected.

Permission Masks: Example

• System > Security > Permissions: disable delete (also disables purge)

• Tenant > Overview > Permissions: disable write

• Namespace > Overview > Permissions: all permissions enabled on the namespace

Result: for this namespace, delete and purge are disabled at the system level, and write is disabled at the tenant level.
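Since an operation must be allowed at all three levels, the effective permissions behave like a set intersection. A small sketch of this example, using the permission names from this module:

```python
ALL_PERMISSIONS = {"read", "write", "delete", "purge", "privileged", "search"}

def effective_permissions(system_mask, tenant_mask, namespace_mask):
    # An operation is effective only if every level allows it
    return system_mask & tenant_mask & namespace_mask

system_mask = ALL_PERMISSIONS - {"delete", "purge"}  # delete disabled (purge follows)
tenant_mask = ALL_PERMISSIONS - {"write"}            # write disabled
namespace_mask = set(ALL_PERMISSIONS)                # everything enabled

print(sorted(effective_permissions(system_mask, tenant_mask, namespace_mask)))
# prints ['privileged', 'read', 'search']
```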

Page 4-17
Administration
Storage Component Administration

Storage Component Administration


This section will cover storage component administration.

Storage Overview

 Single pane management

• Capacity utilization metrics across storage tiers
 Per component
 Per pool

• Service plan usage across tenants and namespaces

• Extended storage statistics
 Total objects and bytes tiered

• Metadata-only statistics
 Metadata-only object count and bytes saved

Storage Components

 Manage all components from a single view
• Usage
• Status
• Alerts

 Dive into a specific component for detailed info
• Metrics
• Settings
• Advanced Options

Page 4-18
Administration
Storage Component Advanced Options

Storage Component Advanced Options

 Each cloud storage component supports a number of advanced options

 Many options may be used to tweak various settings

• Each cloud storage component supports a number of advanced options, dynamically


queried from the underlying cloud adapter

• These options may be used to tweak various settings which are as follows:

o Configuring a web proxy for REST traffic

o Changing default ports for HTTP/HTTPS

o Enabling or disabling API features

o Adding additional data integrity validation

o Modifying connection and socket timeouts

Page 4-19
Administration
Storage Pools

Storage Pools

 Manage all pools from a single view
• Usage
• Alerts

 Dive into a specific pool for detailed info
• Metrics
• Settings
• Advanced Options

Service Plans – Tiering Policy

 Manage all service plans from a single view
• Status
• Utilization by tenant/namespace

 Dive into a specific service plan for detailed info
• Metrics
• Tier management
• Bulk assign service plans

A Service Plan is a tiering policy. Multiple service plans can be created. Each namespace can be
configured with a service plan.

Page 4-20
Administration
Service Plan Assignment and Utilization

Service Plan Assignment and Utilization

 Each service plan provides a convenient User Interface (UI) to assign a specific service plan to one or more tenants

 A similar UI exists at the tenant level for assigning service plans to one or more namespaces

Page 4-21
Administration
Service Plan Wizards – Tier Editor

Service Plan Wizards – Tier Editor

 Build and edit tiering strategies by selecting data copy counts on specific storage pools

The number of data and metadata copies to be held on different tiers can be set up here.

Data will be tiered based on:
• Number of days the files were not accessed
• Threshold – after a certain utilization of primary storage is reached
o It is important to emphasize that the percentage (%) set as the threshold is with respect to the total utilization of the entire initial tier (for example, of Primary Running) to which the service plan will be applied. It is not with respect to the quota of the namespace to which the service plan will be applied
• A combination of both

Additional considerations:
• When you start an active-passive replication over a namespace with a configured service plan (other than the default service plan), a service plan with the same name as in the originating HCP (although its definition may be different) must exist in the HCP destination of the replica
• If the replication is active-active, a service plan with the same name should exist in both HCPs
• It is not recommended to apply a service plan that migrates the data to an HCP S Series in a namespace with CIFS or NFS enabled
• Any service plan change in a namespace with data can mean the movement of a lot of information
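The two tiering triggers described above can be combined in a simple check. This is only an illustration of the decision logic, not HCP's implementation; note that the threshold applies to the utilization of the entire initial tier, not to the namespace quota:

```python
def should_tier(days_since_access, tier_used_bytes, tier_total_bytes,
                min_days, threshold_pct):
    """Illustrative check: does an object qualify for tiering?

    threshold_pct is measured against the utilization of the whole
    initial tier (for example, Primary Running), not the namespace quota.
    """
    utilization_pct = 100.0 * tier_used_bytes / tier_total_bytes
    return days_since_access >= min_days or utilization_pct >= threshold_pct

# Tier is 85% full, so objects tier even though they were accessed recently
print(should_tier(2, 85, 100, min_days=30, threshold_pct=80))
```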

Page 4-22
Administration
Service Plan Wizards – Import Creation

Service Plan Wizards – Import Creation

 Optionally import tiering strategies from existing service plans


Data can be imported from existing pools and service plans which simplifies the configuration
process.

Storage Reports

 Generate detailed storage reports in CSV format for
• Specific component
• Specific pool
• Specific plan
• All components, pools, and/or plans

 Granular control
• Day, hour, or total reporting intervals
• Limit start/end dates
• Chargeback UI updated to include the same features

Page 4-23
Administration
Storage Retirement

Storage Retirement

 Efficiently migrate the content of specific storage

 Retirement supported for
• Extended Storage
• Primary Storage (ATR)

 Retirement options:
• Entire Pool
• Entire Component
• Specific Volume

 Monitor and control progress

ATR – Autonomic Tech Refresh

HCP migration service for back-end block storage.

Certificate Trust Store

 Manage SSL certificates for extended storage
• Upload and manage certificates for trusted remote cloud services
• Certificates can be added to the trust store automatically during component creation, if HCP could not verify whether the remote system is trusted

Page 4-24
Administration
HCP S10 and HCP S30 Nodes

HCP S10 and HCP S30 Nodes


This section covers information on HCP S10 and HCP S30 nodes.

Manage HCP S10 and HCP S30 Nodes

HCP Hardware Page

 Add wizard creates S storage component and S pool
 S node alerts are also shown on the overview page
 Link for detailed S view

• The HCP S10 nodes are added to the HCP from the HCP hardware page

• When you click Add Node in the HCP S Series nodes section, an add node wizard starts

• In a few steps, you complete the process: the HCP S10 storage component is created and added to an existing HCP S10 storage pool, or a new HCP S10 storage pool is created

• After this is completed, the user adds the HCP S10 storage pool to the service plan for one or more namespaces (if not already done)

• On the HCP overview page, alerts are displayed for HCP S10 nodes

• A link brings you to the relevant detail

• On the S nodes detail page, there is a link to log in to the individual node to perform maintenance procedures

Page 4-25
Administration
HCP S10 Node – Manage S Nodes

HCP S10 Node – Manage S Nodes

HCP hardware page

 Add wizard creates S storage component and S pool
 S node alerts are also shown on the overview page
 Link for detailed S view
 S node overview
 Component status view
 Start a disk replacement

• The HCP S series node console provides more detailed status information

• For example, it has all components visualized including status LEDs

• A complete map of the disks is available with individual status

• When a disk fails, you can start a maintenance procedure to replace the disk

• Multiple disks can be replaced in one procedure

• An easy-to-follow, step-by-step process guides you to completion

Page 4-26
Administration
HCP S Series Storage – Ingest Tier

HCP S Series Storage – Ingest Tier

 Ingest directly to S Series storage
• HCP S10 or HCP S30

 Object data never lands on HCP internal HDDs or SAN storage

 Metadata always stored on internal HDDs

 No more tiering backlog

 No need for large cache

This feature allows data to be ingested directly to either HCP S10 or HCP S30 storage. Before v7.2, this process was done in two steps: first ingesting data to Primary Running storage, then having the tiering service transfer the data to the S Series. Now the data no longer needs to land on Primary Running first and can be passed directly to the S Series by HCP. This eliminates the tiering backlog, which can be a major bottleneck in systems that see heavy traffic. In addition, the storage space needed on Primary Running is greatly reduced because data is never stored there.

Page 4-27
Administration
Write Through to S Series Storage

Write Through to S Series Storage

 Multi-PB capacity without an array
• Max ~336 PB usable @ 80 HCP S30 nodes

 Great performance
• No tiering delay
• Single PUT with S3 Scavenging Metadata
• HCP S30 performance enhancements

 S Series ease of use

Writing directly to S Series storage allows a customer to have greater storage capacity without
attaching an array to HCP. Unlike setting an S Series as the second tier, there is no tiering delay
because we do not need to wait for the tiering service to run. Also, new in v7.2, we will make
use of the AWS headers to use a single transaction to put both data and S3 Scavenging
Metadata, rather than needing two transactions as in previous releases, which offers a
considerable speed boost. Additionally, the soon to be released HCP S30 has many performance
enhancements over the HCP S10. Just as for data that is tiered to an S Series, a tenant user will
not notice any functional difference between data that has been ingested to Primary running or
an S Series.

Page 4-28
Administration
Write Through to S Series Storage


It is very straightforward to modify a service plan to use the Write Through to S Series Storage feature. Simply edit the first tier in the service plan and check an HCP S Series storage pool. All pools that are not S Series pools will automatically become unchecked. As always, all metadata is stored on Primary Running.


There is no rehydration when using this feature because all data will be stored on the S Series,
or other higher tiers.

Page 4-29
Administration
Write Through to S Series Storage


When data is ingested, it will appear as though it has already been tiered to the S Series.

Page 4-30
Administration
Module Summary

Module Summary

 In this module, you have learned how to:


• Describe the purpose of Hitachi Content Platform (HCP) management
consoles
• Describe system and tenant users and their roles and permissions
• Apply permission masks and register new storage components
• Create storage pools and storage tiering policies – service plans
• Apply service plans to tenants and namespaces
• Add S Series node


Page 4-31
Administration
Module Review

Module Review

1. What targets are eligible for HCP based tiering?

2. How do you disconnect a storage component?

3. How do you start using HCP S10 and HCP S30 nodes?

4. How long must the data stay on primary storage before tiering to HCP
S10 or HCP S30 node?

5. How do you configure which storage pool namespace should use?


Page 4-32
5. Ingestion Processes
Module Objectives

 Upon completion of this module, you should be able to:


• Use Namespace browser to archive and access files
• Configure and enable the Common Internet File System (CIFS) protocol
• Map the data and metadata directories
• Create namespace profile for the Hitachi Content Platform (HCP) tenant
• Describe HCP data migrator
• Use Representational State Transfer (REST) API to archive and access files
• Understand usage of HS3 API
 Usage of HS3 API to perform Multipart Upload (MPU)


Page 5-1
Ingestion Processes
Namespace Browser

Namespace Browser
This section covers Namespace browser.

Namespace Browser: Objects


• Namespace browser available for all namespaces

• Can be used for uploading data

• Only one file at a time can be uploaded

CIFS and NFS


In this section you will learn about CIFS and NFS.

Page 5-2
Ingestion Processes
CIFS and NFS Support

CIFS and NFS Support

 CIFS and Network File System (NFS) should be used only for migration or application access

 CIFS can be authenticated only with AD, whereas NFS on HCP cannot be authenticated (only anonymous user access)

 If CIFS and NFS are not used on HCP, it is recommended to enable namespace cloud optimization
• The namespace will be accessible only with HTTP(S)-based APIs
• The performance gain is about 8%

• HCP is NOT a NAS device

• CIFS and NFS performance is worse than HTTP(S)-based access

• If namespace cloud optimization is enabled, CIFS, NFS and SMTP cannot be used

• Namespace cloud optimization can be disabled only if no data was written to the namespace

• Once there is a write to a cloud-optimized namespace, CIFS, NFS and SMTP can never be enabled for this namespace

• Use with care

Page 5-3
Ingestion Processes
Enable CIFS Protocol

Enable CIFS Protocol

Note: At most 50 namespaces can provide CIFS/NFS access

Network Drive Mapping in Microsoft® Windows

 Use HCP DNS names

 Map data and metadata as network drives, for example Z:\ and Y:\

 Syntax of address:
\\nsname.tenantname.<cluster-domain>\tenantname_nsname\data
\\nsname.tenantname.<cluster-domain>\tenantname_nsname\metadata

 Example:
\\corporate.hitachi.hcp.archive.com\hitachi_corporate\data ... map as Z:\
\\corporate.hitachi.hcp.archive.com\hitachi_corporate\metadata ... map as Y:\
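The UNC paths above follow a fixed pattern, so they can be derived from the namespace, tenant, and cluster domain names. A minimal sketch reproducing the example mapping:

```python
def hcp_cifs_paths(namespace, tenant, cluster_domain):
    """Build the CIFS UNC paths for a namespace's data and metadata trees."""
    host = f"{namespace}.{tenant}.{cluster_domain}"   # HCP DNS name
    share = f"{tenant}_{namespace}"                   # share name pattern
    return (rf"\\{host}\{share}\data", rf"\\{host}\{share}\metadata")

data, metadata = hcp_cifs_paths("corporate", "hitachi", "hcp.archive.com")
print(data)      # \\corporate.hitachi.hcp.archive.com\hitachi_corporate\data
print(metadata)  # \\corporate.hitachi.hcp.archive.com\hitachi_corporate\metadata
```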

Page 5-4
Ingestion Processes
Microsoft Windows Mounted Disks

Microsoft Windows Mounted Disks

 The Windows system indicates that the 2 file systems are mounted

 Open each disk and you see the DATA and METADATA directories

CIFS Access: An Open Standards Approach

 Familiar file system interface
• Browse folders, subfolders and content
• Preview stored content
• WORM protection

 Metadata presented as file system objects – for a stored file such as RFP.doc (original file name and format), the metadata tree contains:
• core-metadata.xml
• created.txt
• hash.txt – authenticity established with standard hash algorithms
• retention.txt – retention period managed for each object
• dpl.txt – number of copies maintained
• index.txt
• shred.txt
• tpof.txt

Page 5-5
Ingestion Processes
Set Retention Period

Set Retention Period

 Navigate to an object's metadata and double-click the retention.txt file to edit the retention setting

 The file initially contains the default retention setting; delete the initial contents and enter the A+4m string

 HCP encodes the A+4m string

As discussed earlier in the course, the default allows deletion.

Note: In lab, you will change this to A+4m to set a retention period of 4 minutes.
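For illustration, a tiny parser for offsets of this shape. It handles only the d/h/m units needed for the lab value; HCP's actual retention syntax supports additional units and expressions beyond this sketch:

```python
import re
from datetime import timedelta

UNITS = {"d": "days", "h": "hours", "m": "minutes"}  # subset for illustration

def parse_retention_offset(offset):
    """Parse a simple retention offset such as 'A+4m' into a timedelta."""
    match = re.fullmatch(r"A\+(\d+)([dhm])", offset)
    if not match:
        raise ValueError(f"unsupported offset: {offset!r}")
    amount, unit = int(match.group(1)), match.group(2)
    return timedelta(**{UNITS[unit]: amount})

print(parse_retention_offset("A+4m"))  # 0:04:00 (the 4-minute lab setting)
```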

Default Tenant
This section covers information on default tenant.

Page 5-6
Ingestion Processes
Enable Creation of Default Tenant

 Default tenant is a legacy feature to support migrations from old Hitachi Content Archive Platform (HCAP) prior to version 3.0

 Not supported for new installations

 Should be used only in relevant cases

Note: Hitachi Content Archive Platform (HCAP) is an obsolete product.

Enable Creation of Default Tenant

A System Management Console user with the service role has to enable the option to create the default tenant.

Page 5-7
Ingestion Processes
Create Default Tenant or Namespace

Create Default Tenant or Namespace

 Make your selections
• Make default tenant/namespace
• Enable Search

 Accept the following
• DPL = Dynamic (2)
• Hash = SHA-256
• The hash algorithm cannot be changed later!

 Click Create Tenant

HCP Data Migrator


This section covers information on Hitachi Content Platform (HCP) Data Migrator (DM).

Page 5-8
Ingestion Processes
Overview

Overview

 HCP Data Migrator (HCP-DM) is a utility for copying, managing and deleting data

 Supported operations include copying between two locations in a single namespace (including an archive) or local file system

 Data not under retention can be deleted from any of the locations listed above

 As of HCP v8.x, HCP-DM has been moved into open source and is available for download from GitHub
• https://github.com/Hitachi-Data-Systems/Open-DM

• When copying data from one location to another, the source and destination locations can be any combination of:

o A local file system, including remote directories accessed using the local file
system

o An HCAP archive

o An HCP authenticated namespace or default namespace

• Available as a GUI or CLI

Page 5-9
Ingestion Processes
Installation

Installation

 HCP-DM is built into the Tenant Management Consoles

 Install HCP-DM on Windows
• Copy hcpdm.exe to the top-level directory where you wish to create the folder for the application
• Double-click the file to run the installation

 Install HCP-DM on UNIX/Linux
• Simply uncompress/unzip the .tgz file; for example:
 Copy hcpdm.tgz to the top-level directory where you wish to create the folder for the application

Migration Panes

 Main window contains 2 identical panes separated by transfer buttons << and >>

 Same functionality supported in both panes of the DM GUI
• Select Local File System or a namespace profile
• View the current directory path or select a recently viewed path

Page 5-10
Ingestion Processes
Namespace Profile Manager: Create Profile

Namespace Profile Manager: Create Profile

 To migrate items using HCP-DM, create namespace profiles

 Namespace profiles can be used as the source or the target profile for a migration

 Launch the Namespace Profile Manager, then click Create

Namespace Profile Manager: Edit or Delete Profile

 Edit or delete a namespace profile


Page 5-11
Ingestion Processes
Set Preferences: Policies

Set Preferences: Policies

 Set policies
• Indexing
• Shredding
• Retention method
• Retention hold

© Hitachi Vantara Corporation 2018. All rights reserved.

Set Preferences: POSIX Metadata

 Set Portable Operating System Interface (POSIX) ownership and permissions
• UID
• GID
• Object permissions
• Directory permissions

 Applies to default
namespace and
HCAP 2.x archives

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 5-12
Ingestion Processes
Set Preferences: Owner

Set Preferences: Owner

 Specify the user that owns objects copied from the local file system to an HCP 5.0
namespace

Owner types:

o Profile user: The object is owned by the profile user. If you select this option and the
namespace profile is configured for anonymous access, the object has no owner
o No Owner: The object has no owner
o Local User: The object is owned by a user that is defined in HCP. Type the username
of a user account that is defined in HCP
o External User: The object is owned by an Active Directory user. Type the username of
an Active Directory user account and the domain in which the account is defined
© Hitachi Vantara Corporation 2018. All rights reserved.

HCP-DM CLI

 Command Line Interface (CLI) provides functionality similar to the existing HCP
client tool arcmv

 The CLI also facilitates scheduling

 CLI commands
• hcpdm copy – write each source file to the target destination
 If a file with the same name exists, it will fail
 With versioning enabled, it will create a new version

• hcpdm delete – deletes the specified file


• hcpdm profile – create or delete a profile, or list namespace profiles
• hcpdm job – list or delete saved jobs

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 5-13
Ingestion Processes
REST API

REST API
This section provides information on REST API.

What Is a RESTful Interface?

 From Wikipedia
• Representational State Transfer (REST) is a style of software architecture
for distributed systems such as the World Wide Web
 REST has emerged as a predominant Web service design model when using the
HTTP protocol

 In a nutshell:
• Requests that are similar in form may have different meanings depending on
the receiver of the request
HCP Request Amazon S3 Request
GET /my-image.jpg?type=acl GET /my-image.jpg?acl
Accept: application/xml
© Hitachi Vantara Corporation 2018. All rights reserved.

Management API (MAPI), together with the Replication API and Search API, are all RESTful
interfaces that can influence transactions on a system. MAPI must be enabled at both the
System level and the Tenant level in order to work.

Page 5-14
Ingestion Processes
Simplified REST Example

Simplified REST Example

Resource URI Resource Qualifier


Request Method Client Request
GET /rest/myfolder/my-image.jpg?type=acl HTTP/1.1
Host: medical.acme.hcp.example.com
Request Headers Authorization: HCP
bXl1c2Vy:3f3c6784e97531774380db177774ac8d

HTTP/1.1 200 OK Server Response


Response Status Code
Last-Modified: Wed, 25 Apr 2012 09:53:47 GMT
ETag: "8d604138ffb0f308a8552a3752e5a1be"
Standard Headers Content-Type: image/jpeg
Content-Length: 679116
X-HCP-Time: 1336490468
Expanded Headers
X-HCP-SoftwareVersion: 6.1.1.24
X-HCP-Type: object
X-HCP-Hash: SHA-256
Resource Content 36728BA190BC4C377FE4C1A57AEF9B6AFDA98720422960
<response-body>
© Hitachi Vantara Corporation 2018. All rights reserved.

HCP RESTful Interfaces

 Main RESTful HCP Interfaces


• Data Access
 HCP REST – HCP Proprietary
 HS3 – Amazon S3 compatible
 Swift – Open Stack compatible

• Metadata Query API


 Operational – Query based on object operations
 Object-based – Query based on object metadata criteria

• Management API
 Configure tenant and namespaces, and replication
© Hitachi Vantara Corporation 2018. All rights reserved.

Page 5-15
Ingestion Processes
Anatomy of Request

Anatomy of Request

Exercise:

 Break down a sample PUT request using the freeware curl command to
understand its elements

Command
curl -k -i -T my-image.jpg
-H "Authorization: HCP YWNtZXVzZXI=:a3b9c163f6c520407ff34cfdb83ca5c6"
"https://medical.acme.hcp.example.com/rest/myfolder/my-image.jpg?retention=A+5d&shred=true"

© Hitachi Vantara Corporation 2018. All rights reserved.

Command
curl -k -i -T my-image.jpg
-H "Authorization: HCP YWNtZXVzZXI=:a3b9c163f6c520407ff34cfdb83ca5c6"
"https://medical.acme.hcp.example.com/rest/myfolder/my-image.jpg?retention=A+5d&shred=true"

Execute curl command to:


 Issue a PUT Method (-T) to send local file my-image.jpg
 With output response displayed to screen (-i)
 And trust HTTPS self-signed certificates (-k)

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 5-16
Ingestion Processes

Command
curl -k -i -T my-image.jpg
-H "Authorization: HCP YWNtZXVzZXI=:a3b9c163f6c520407ff34cfdb83ca5c6"
"https://medical.acme.hcp.example.com/rest/myfolder/my-image.jpg?retention=A+5d&shred=true"

Provide HCP authorization credentials for the tenant:
 Form: <base64-username>:<md5sum-password>
 See the Using a Namespace document for how to obtain the encoding of the
username and password
© Hitachi Vantara Corporation 2018. All rights reserved.

Note: Namespace.pdf document can be downloaded from both System Management Console
and Tenant Management Console on HCP G10.
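The two halves of the credential pair can be produced with the Python standard library. This sketch follows the format described above (Base64-encoded username, MD5-hashed password); the password value is a made-up example, and the helper name is our own:

```python
import base64
import hashlib

def hcp_auth_token(username: str, password: str) -> str:
    """Return the <base64-username>:<md5sum-password> pair used in
    the 'Authorization: HCP ...' request header."""
    user_b64 = base64.b64encode(username.encode("utf-8")).decode("ascii")
    pass_md5 = hashlib.md5(password.encode("utf-8")).hexdigest()
    return f"{user_b64}:{pass_md5}"

# 'myuser' matches the earlier slide: bXl1c2Vy is Base64 for 'myuser'
print(hcp_auth_token("myuser", "p4ssw0rd"))
```

The same function covers both sides of the examples in this module: the username half is reversible Base64, while the password half is a one-way MD5 hex digest.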

Command
curl -k -i -T my-image.jpg
-H "Authorization: HCP YWNtZXVzZXI=:a3b9c163f6c520407ff34cfdb83ca5c6"
"https://medical.acme.hcp.example.com/rest/myfolder/my-image.jpg?retention=A+5d&shred=true"

Specify to send a request over HTTPS to:


 Namespace medical
 In acme tenant
 HCP with DNS name hcp.example.com

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 5-17
Ingestion Processes

Command
curl -k -i -T my-image.jpg
-H "Authorization: HCP YWNtZXVzZXI=:a3b9c163f6c520407ff34cfdb83ca5c6"
"https://medical.acme.hcp.example.com/rest/myfolder/my-image.jpg?retention=A+5d&shred=true"

Write object:
 Using the /rest data access gateway
 In folder myfolder
 With object name my-image.jpg

© Hitachi Vantara Corporation 2018. All rights reserved.

Command
curl -k -i -T my-image.jpg
-H "Authorization: HCP YWNtZXVzZXI=:a3b9c163f6c520407ff34cfdb83ca5c6"
"https://medical.acme.hcp.example.com/rest/myfolder/my-image.jpg?retention=A+5d&shred=true"

Specifying system metadata:


 Retention with a value of 5 days after creation of the object
 And object shred value to true to indicate to shred the object when
deleted
© Hitachi Vantara Corporation 2018. All rights reserved.
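The full request URL from this exercise can also be assembled programmatically. A minimal Python sketch using only the standard library; the helper name build_put_url is our own, and the host/path values are the course's example names:

```python
from urllib.parse import urlencode, quote

def build_put_url(namespace, tenant, dns_name, folder, name, **metadata):
    """Compose an HCP REST PUT URL: namespace.tenant.dns-name/rest/folder/name
    plus system-metadata query parameters such as retention and shred."""
    host = f"{namespace}.{tenant}.{dns_name}"
    path = f"/rest/{quote(folder)}/{quote(name)}"
    # keep '+' literal so retention offsets like A+5d survive encoding
    query = urlencode(metadata, safe="+")
    return f"https://{host}{path}?{query}" if metadata else f"https://{host}{path}"

url = build_put_url("medical", "acme", "hcp.example.com",
                    "myfolder", "my-image.jpg",
                    retention="A+5d", shred="true")
print(url)
```

The result reproduces the curl target above, with the retention and shred system-metadata parameters appended as a query string.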

Page 5-18
Ingestion Processes
Using Programming Languages

Using Programming Languages

 The curl command is useful for single items or testing, but inefficient for
large-volume usage

 Use programming languages like Java/C/C++/C#/Python to issue REST


commands
• Each language has a library that helps construct and execute HTTP REST
requests
 Apache HTTP Client for Java
 .NET/C# has its own native HTTP Client
 libcurl (curl.haxx.se/libcurl) is available as freeware for many languages and
platforms

© Hitachi Vantara Corporation 2018. All rights reserved.

Hitachi S3 (HS3) API


In this section, you will learn about Hitachi S3 API.

What Is HS3?

 Amazon S3 API compatible implementation in HCP


• More information available at:
 http://aws.amazon.com/s3/

 Provides opportunity for existing S3-enabled applications to work with HCP

DragonDisk

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 5-19
Ingestion Processes
HS3 and Multipart Upload (MPU)

HS3 and Multipart Upload (MPU)

 HCP HS3 API — a RESTful, HTTP-based API that is compatible with


Amazon S3

 With the HS3 API, you can perform operations to create an individual
object by uploading the object data in multiple parts

 Perform multipart uploads


• POST object initiate multipart upload
• PUT object upload part
• POST object complete multipart upload

 List the parts of in-progress multipart uploads (GET object list parts)
© Hitachi Vantara Corporation 2018. All rights reserved.

• To use the HS3 API to perform the operations listed above, you can write applications
that use any standard HTTP client library. HS3 is also compatible with many third-party
tools that support Amazon S3

 Part and object size limitations

o Minimum part size: 5 MB

o Exception: the last part can be 1 byte-5 GB

o Maximum part size: 5 GB

o Number of parts: 1-10,000

o Maximum size of MPU object: 5 TB
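The three MPU operations above map onto plain HTTP method/URL pairs. This sketch only assembles those pairs (nothing is sent); the bucket, key, and uploadId values are illustrative, and the ?uploads / ?partNumber / ?uploadId query forms follow the Amazon S3 convention that HS3 is compatible with:

```python
def mpu_requests(bucket, key, upload_id, part_numbers):
    """Yield (method, url) pairs for an S3-style multipart upload against HS3."""
    base = f"https://{bucket}.acme.hcp.example.com/{key}"
    yield ("POST", f"{base}?uploads")                       # initiate MPU
    for n in part_numbers:                                  # parts are 1-10,000
        yield ("PUT", f"{base}?partNumber={n}&uploadId={upload_id}")
    yield ("POST", f"{base}?uploadId={upload_id}")          # complete MPU

steps = list(mpu_requests("medical", "scans/img.dcm", "abc123", [1, 2]))
for method, url in steps:
    print(method, url)
```

A real client would also carry each part's data in the PUT body and the part list in the completing POST; the point here is the three-step request shape.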

Page 5-20
Ingestion Processes
S3 Basic Concepts

S3 Basic Concepts

 S3 Service – Internet based offering

• Maps to HCP Tenant

 S3 Account – Subscriber to the service

• Maps to HCP User

 S3 Bucket – Fundamental container for storage

• Maps to HCP Namespace

 Object – Individual item to be stored

• Same concept in HCP


© Hitachi Vantara Corporation 2018. All rights reserved.

Page 5-21
Ingestion Processes
How to Make S3 Requests

How to Make S3 Requests

 Complex authentication mechanism inhibits using curl and HTTP client


libraries
 Use Amazon SDK
• Download from http://aws.amazon.com/tools
• See HCP Using the HCP HS3 API documentation
• Working sample available in Hitachi Vantara Developer Network
 S3curl is a curl command-line equivalent, but requires manually
changing the endpoint in the s3curl.pl file

© Hitachi Vantara Corporation 2018. All rights reserved.

To use the HS3 API as an authenticated user, you need to provide credentials that are based on
the username and password for your user account. To provide credentials, you typically use the
HTTP Authorization request header.
Authorization request header:
To provide credentials for AWS authentication using the Authorization header, you use this
format:
Authorization: AWS access-key:signature
In this format:
• access-key is the Base64-encoded username for your user account
• signature is a value calculated using your secret key and specific elements of the HS3
request, including the date and time of the request
• Your secret key is the MD5-hashed password for your user account
• Because the signature for an HS3 request is based on the request contents, it differs for
different requests
• Third-party tools that are compatible with HS3 typically calculate request signatures
automatically
• If you’re writing your own application, you can use an AWS SDK to calculate request
signatures
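The signature calculation described above follows the AWS V2 scheme: an HMAC-SHA1 over a canonical string-to-sign, keyed with the secret key, then Base64-encoded. A simplified sketch (real implementations also fold canonicalized x-amz-* headers into the string-to-sign, so treat this as illustrative rather than a complete signer):

```python
import base64
import hashlib
import hmac

def s3_v2_signature(secret_key, method, content_md5, content_type, date, resource):
    """HMAC-SHA1 signature over a simplified AWS V2 string-to-sign."""
    string_to_sign = "\n".join([method, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")

sig = s3_v2_signature("my-secret", "GET", "", "",
                      "Tue, 27 Mar 2007 19:36:42 +0000", "/bucket/key")
print("Authorization: AWS", "bXl1c2Vy:" + sig)
```

Because the signature covers the request contents, it differs per request, which is why third-party tools and the AWS SDKs compute it for you.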

Page 5-22
Ingestion Processes
OpenStack Concepts and Terminology

HCP also supports presigned URLs:

• A presigned URL uses query parameters to provide credentials


• Presigned URLs allow you to temporarily share objects with other users without the
need to grant those users permission to access your buckets or objects
• Presigned URLs are compatible only with the AWS method of authentication
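A presigned URL moves those same credentials into query parameters. A V2-style sketch (the string-to-sign is simplified here; the AWSAccessKeyId/Expires/Signature parameter names follow the AWS convention, and the host is the course's example domain):

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote, urlencode

def presign_url(access_key, secret_key, bucket, key, expires_in=3600):
    """Build an AWS-V2-style presigned GET URL; the Expires epoch replaces
    the Date element in the (simplified) string-to-sign."""
    expires = int(time.time()) + expires_in
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode("ascii")
    params = urlencode({"AWSAccessKeyId": access_key,
                        "Expires": expires,
                        "Signature": signature})
    return f"https://{bucket}.acme.hcp.example.com/{quote(key)}?{params}"

print(presign_url("bXl1c2Vy", "secret", "medical", "scans/img.dcm"))
```

Anyone holding the URL can GET the object until the Expires time passes, which is what makes presigning useful for temporary sharing without granting bucket permissions.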

OpenStack Concepts and Terminology

 Swift – OpenStack object storage project

 Swift API – RESTful API for Swift

 HSwift – HCP’s Swift gateway/API

 Horizon – OpenStack Dashboard/Web UI

 https://www.openstack.org/

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 5-23
Ingestion Processes
Module Summary

Module Summary

 In this module, you should have learned to:


• Use Namespace browser to archive and access files
• Configure and enable the Common Internet File System (CIFS) protocol
• Map the data and metadata directories
• Create namespace profile for the Hitachi Content Platform (HCP) tenant
• Understand HCP data migrator
• Use Representational State Transfer (REST) API to archive and access files
• Understand usage of HS3 API

© Hitachi Vantara Corporation 2018. All rights reserved.

Module Review

1. What REST based interfaces are available on HCP?

2. Can HCP be used as NAS device?

3. How do HDI and HNAS communicate with HCP?

4. Where do I obtain HCP Data Migrator?

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 5-24
6. Search Activities
Module Objectives

 Upon completion of this module, you should be able to:


• Explain the Metadata Query Engine (MQE) search facility features
• Enable MQE indexing and search console
• Locate and display specific objects and display properties pertaining to that
object using the MQE search console
• Use MQE tool and search API
• Create a Tenant MQE search user

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 6-1
Search Activities
Overview

Overview

What is it?

• Search engine for system and custom metadata


• Built into all Hitachi Content Platform (HCP) systems
• No license required
• Indexing done by HCP

What it is not?
• A full-featured search engine for content

© Hitachi Vantara Corporation 2018. All rights reserved.

• In HCP v4.x, MQE was the basic way to find a set of objects based on operation and time.
Example: find all the objects created between time A and time B
• You can now perform real search for sets of objects based on metadata (system and custom)
• The indexing and search engine is built into HCP
• You can search across tenant and namespaces to locate related sets of objects

Metadata Query Engine: Benefits

 Identify sets of related objects based on system and custom metadata


• Management example: Set litigation hold on all objects owned by John Smith
in email namespaces
• Application example: Give user Debbi read access to all John’s pictures in
the Cloud
• Can be fully customized using Content Classes

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 6-2
Search Activities
Metadata Query Engine: Details

Metadata Query Engine: Details

 Built-in: No additional hardware or software required


• Designed to scale with the cluster

• Can optionally be enabled any time per Tenant or Namespace

 Query via API, Hitachi Content Platform (HCP) MQE Search Console or
MQE Tool
 Conforms to HCP data access authorization security

© Hitachi Vantara Corporation 2018. All rights reserved.

• Return specific system metadata fields on query

o No need to issue subsequent object reads

• Retrieve list of deleted and pruned objects

• Bulk operation processing

Examples: Find all the emails in this namespace and put them on litigation hold or find
everything owned by Richard and give it to Scott.

Page 6-3
Search Activities
Metadata Query Engine: Qualifications

Metadata Query Engine: Qualifications

 Only valid XML in custom metadata is indexed


 MQE does not index or search object content data (only metadata)

 Up to 1 MB of custom metadata per object is indexed


 Capacity consumed by index counts toward HCP licensed capacity

© Hitachi Vantara Corporation 2018. All rights reserved.

• Capacity impact guidelines:

o System metadata index size: ~340 bytes per object (very light)

o Custom metadata size: Typically ~1/2 of custom metadata size

o Impact on ingest: 2%-10%

• MQE indexing is disabled by default on both new installations and upgrades

Page 6-4
Search Activities
MQE and HDDS Search Differences

MQE and HDDS Search Differences

Search criteria by category, as compared between MQE and general HDDS search:

• General: Object Type, Object Contents, Object Name, URI, Original URI, Object Format,
Main Language, Languages
• Office Document: Author, Subject, Title, Categories, Last Saved By, Company,
Comments, Last Printed
• Email: From, To, Cc, Bcc, Subject, Sent Date, Message ID, Attachment Name
• XML: XML Contents
• File Properties: Mime Type, Change Time, Modify Time, Access Time, Size
• NFS: UID, GID, Permissions
• CIFS: Owner Name, ACL Type, ACL Mask, ACL User Name
• HCP: Namespace, Tenant, Ingested Time, Retention, Retention Time, Retention Class,
Retention Hold, Shredding, Custom Metadata XML
• Miscellaneous: Hash Type, Hash Value
© Hitachi Vantara Corporation 2018. All rights reserved.

You can define multiple content classes and content properties (all of which show up in the
search GUI for the tenants they are defined on). Search criteria is now fully configurable.

Page 6-5
Search Activities
MQE Content Classes

MQE Content Classes

 Content classes serve as a blueprint for bringing structure to


unstructured content
 Classes consist of a set of user-defined Content Properties
• Each property provides the ability to extract a specific metadata field from
objects (for example, any custom metadata XML tag), index it efficiently
under a user-defined name with strong typing, and make it queryable

 Content classes group and organize a set of content properties into


named categories

© Hitachi Vantara Corporation 2018. All rights reserved.

• Custom metadata in a namespace can be indexed based on content properties. A


content property is a named construct used to extract an element or attribute value
from custom metadata that is well-formed XML

• Each content property has a data type that determines how the property values are
treated by the metadata query engine. Additionally, a content property is defined as
either single-valued or multi-valued. A multi-valued property can extract the values of
multiple occurrences of the same element or attribute from the XML

• Content properties are grouped into content classes, and each namespace can be
associated with a set of content classes. The content properties that belong to a content
class associated with the namespace are indexed for the namespace. Content classes
are defined at the tenant level, so multiple namespaces can be associated with the same
content class

• For example, if the namespace personnel is associated with the content class MedInfo,
and the content property DrName is a member of the content class, the query engine
will use the DrName content property to index the custom metadata in the Personnel
namespace

Page 6-6
Search Activities
MQE Content Classes

 Content classes are defined on the tenant level

 MQE, full custom metadata indexing, and namespace indexing must be enabled

 On the Tenant Management Console, go to Services  Search

© Hitachi Vantara Corporation 2018. All rights reserved.

This is visible from Tenant management.

 Configuration of class
properties

 MQE criteria is now fully


configurable

© Hitachi Vantara Corporation 2018. All rights reserved.

• Maximum number of content classes per tenant: 25

• Maximum number of content properties per content class: 100

Page 6-7
Search Activities
Enable HCP MQE Search Facility

Enable HCP MQE Search Facility

1. Log into the System Management Console and select Services 


Search

© Hitachi Vantara Corporation 2018. All rights reserved.

Note: The above screen shot shows the configuration of HDDS as the search console because
of the previous lab project. If no search console has been selected, then the configuration
would indicate Disable Search Console and the Query Status for both the MQE and HDDS
consoles would be indicating Unavailable.

2. Click on MQE in the Search Facility Settings panel


3. Check Enable indexing and click Update MQE Settings

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 6-8
Search Activities
Launch MQE GUI

Launch MQE GUI

https://TenantName.Qualified_DNS_Name:8888
For example: https://legal.hcap1.hcap1.local:8888

Log in using the tenant-level


user account credentials:
username: search
password: hds123

© Hitachi Vantara Corporation 2018. All rights reserved.

Structured Query: Size Metadata

 The same object was found in the Arbitration and Litigation namespaces

 Let’s narrow the search (see the next slide)

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 6-9
Search Activities
Narrow Structured Search

Narrow Structured Search

1. Click the plus sign (+) to the right of the third box indicating the object size (14009749)
to add another query field

2. Select Namespace in the left panel and Litigation (Legal) in the right panel and click
the Query button

See next slide for output of search results


© Hitachi Vantara Corporation 2018. All rights reserved.

Narrowed Search Results

To perform a Control Operation (like delete), open the object or save it to a target
location

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 6-10
Search Activities
MQE Tool

MQE Tool

 The MQE tool is an MS Windows-based application that connects to HCP
using the Search API

 OpenSource @ Sourceforge

 https://sourceforge.net/projects/
hcpmetadataquer/

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 6-11
Search Activities
Module Summary

Module Summary

 In this module, you should have learned to:


• Understand Metadata Query Engine (MQE) search facility features
• Enable MQE indexing and search console
• Locate and display specific objects and display properties pertaining to that
object using the MQE search console
• Use MQE tool and search API
• Create a Tenant MQE search user

© Hitachi Vantara Corporation 2018. All rights reserved.

Module Review

1. Which interfaces are available for metadata search?

2. How many MQE consoles are there?

3. How do you configure a namespace for search?

4. Should MQE be installed in HCP?

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 6-12
7. Replication Activities
Module Objectives

 Upon completion of this module, you should be able to:


• Describe active-passive and active-active replication links
• Create replication link and authorize (confirm) the replication link to start the
flow of data
• Monitor the replication process
• Describe Hitachi Content Platform (HCP) failover procedure
• Describe replication verification service operations
• Identify load balancer role
• Describe Geographically distributed data protection (Geo-protection)

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 7-1
Replication Activities
Active – Passive Replication

Active – Passive Replication


This section covers information on active-passive replication.

Active – Passive Replication Overview

 Replication is asynchronous and object based


 Replicates selected top level directories for the default tenant/namespace
 Replicates selected tenants and all or selected namespaces
 Content is verified during replication
 Policies and services can use the replica to repair objects
 Objects can be retrieved from the replica if not found on the primary
Entire objects
• Data
• System metadata
• Custom metadata

Configuration data, logs, and so on


• Tenant user accounts, data accounts, admin logs
• Tenant configuration information
• Namespace logs, including compliance events
• Namespace configuration information, including retention classes

(Diagram: Waltham, U.S.A. (primary) replicating to Sydney, Australia (replica))
© Hitachi Vantara Corporation 2018. All rights reserved.

• Tenants and all associated data

• DNS top level directories and subset of data (retention classes, compliance logs)

• Backwards compatible — same support for 2.6 top level directories

• Set up and manage replication links remotely

• Support for multiple links and link types to build advanced topologies

• Schedule replication

• Pause/resume tenants for replication

• Opt namespaces in or out of replication

• Different DPLs local and remote for a namespace

Page 7-2
Replication Activities
Before You Begin

Before You Begin

 Ensure both primary and replica HCP systems have replication enabled
• Replication is no longer a licensed feature!

 When you replicate a tenant, a tenant with the same name cannot
reside on the replica system

 Both systems should be running the same software versions

 If you will use a separate VLAN for replication, the VLAN should be
created prior to replication setup

© Hitachi Vantara Corporation 2018. All rights reserved.

Required Steps for Replication

 To perform replication, required tasks are:


1. Enable replication on the system
2. Exchange SSL certificates to create a Trusted Relationship between the 2 HCP
systems
3. Start Replication setup wizard
4. Choose active-passive configuration
5. Create an Outbound Link at both systems if replicating bi-directionally
 Outbound link converts to Inbound Link on the other system

6. Accept the link to start the data transfer


 Each HCP system accepts the Inbound Link from the other system

7. Monitor the links to view status


© Hitachi Vantara Corporation 2018. All rights reserved.

Page 7-3
Replication Activities
Active – Active Replication

Active – Active Replication


This section covers active-active replication.

Two Replication Link Types

Active-Passive (existing)
 Back up to a read-only replica system
 Optimized for disaster recovery scenarios
o Favors full object (data + metadata) synchronization

Active-Active (new to HCP v7.0)
 Enables global read/write tenants and namespaces across a replication topology
 Synchronizes content between systems in both directions
 Optimized for topology-wide access to data
o Favors metadata-first synchronization

© Hitachi Vantara Corporation 2018. All rights reserved.

Link Creation Wizard

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 7-4
Replication Activities
Domain and Certificate Replication

Domain and Certificate Replication

 Securely back up installed certificates for automated restore in case of
primary system failure

 Support secure HTTPS application failover to remote systems

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 7-5
Replication Activities
Fully Automated Collision Handling

Fully Automated Collision Handling

 Last edit wins


• The latest edit of an object on either system will be retained under the original
path
• Collision losers will be handled according to namespace configuration
 Retention wins
• When retention object collisions occur, the retention value with the largest
keep time is retained
• When retention hold object collisions occur, the hold state is retained

© Hitachi Vantara Corporation 2018. All rights reserved.

• Last edit wins:

o If versioning is enabled, there will be no object collisions

o The latest configuration edits are maintained across the topology

• Annotations are merged:

o Annotations created on one side are added to the object on the remote side if
the same annotation is changed as latest edit wins

o Repair operations are now annotation aware

• Using operation-based query API:

To determine if there are any collisions in the system:

<queryRequest>

<operation>

<count>0</count>

<systemMetadata>

<replicationCollision>true</replicationCollision>

Page 7-6
Replication Activities
Fully Automated Collision Handling

</systemMetadata>

</operation>

</queryRequest>

To find collisions in ns1 since last week:

<queryRequest>

<operation>

<systemMetadata>

<namespaces>

<namespace>ns1.ten1</namespace>

</namespaces>

<changeTime>

<start>1375839364000</start>

<end>1475839364000</end>

</changeTime>

<replicationCollision>true</replicationCollision>

</systemMetadata>

</operation>

</queryRequest>
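Since the request body is plain XML, it can be generated with the standard library instead of string concatenation. This sketch reproduces the "collisions in ns1" query above; element names are copied from that example, and the helper name is our own:

```python
import xml.etree.ElementTree as ET

def collision_query(namespace=None, start_ms=None, end_ms=None):
    """Build an operation-based query body for replication collisions."""
    root = ET.Element("queryRequest")
    op = ET.SubElement(root, "operation")
    sysmd = ET.SubElement(op, "systemMetadata")
    if namespace:
        nss = ET.SubElement(sysmd, "namespaces")
        ET.SubElement(nss, "namespace").text = namespace
    if start_ms is not None:
        ct = ET.SubElement(sysmd, "changeTime")
        ET.SubElement(ct, "start").text = str(start_ms)
        ET.SubElement(ct, "end").text = str(end_ms)
    ET.SubElement(sysmd, "replicationCollision").text = "true"
    return ET.tostring(root, encoding="unicode")

xml_body = collision_query("ns1.ten1", 1375839364000, 1475839364000)
print(xml_body)
```

Calling collision_query() with no arguments produces the simpler "any collisions in the system" variant shown first.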

Page 7-7
Replication Activities
Fully Automated Collision Handling

 Collision winners retain the original object path

 Collision losers will be placed aside according to namespace


configuration

© Hitachi Vantara Corporation 2018. All rights reserved.

Collision losers:

• Can optionally be automatically deleted by the disposition service – configurable per


namespace

• Can be queried (and bulk processed) via MQE – both operation and object query are
supported

 Namespace-level control of collision losers:


• Move object to /.lost+found
• Rename object and store it in the same location – objects renamed to:
original_object_name.collision

© Hitachi Vantara Corporation 2018. All rights reserved.

Page 7-8
Replication Activities
Querying Collisions With MQE

Querying Collisions With MQE

 Using object-based query API and Console


 Determine whether there are any collisions in the system:
replicationCollision:true
 Find collisions in ns1 this week:
+namespace:ns1.ten1 +replicationCollision:true
+(changeTimeMilliseconds:[1388811600 TO *])
 Using Metadata Query Engine Console:

© Hitachi Vantara Corporation 2018. All rights reserved.
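Query expressions like those above are plain strings, so a client can compose them programmatically before submitting them to the object-based query API or pasting them into the console. A small sketch (the helper name is our own):

```python
def collision_query_string(namespace=None, since_ms=None):
    """Compose an object-based MQE query expression for replication collisions."""
    terms = ["+replicationCollision:true"]
    if namespace:
        # '+' marks a required term, matching the console syntax above
        terms.insert(0, f"+namespace:{namespace}")
    if since_ms is not None:
        terms.append(f"+(changeTimeMilliseconds:[{since_ms} TO *])")
    return " ".join(terms)

print(collision_query_string("ns1.ten1", 1388811600))
```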

Replication MAPI Support

 HCP now supports all replication operations via the management API
(MAPI), including:
• Link creation
• Link content selections
• Link status
• Link management
• Link schedule configuration
• Link monitoring
• Tenant backlog monitoring
• Failover lifecycles
© Hitachi Vantara Corporation 2018. All rights reserved.
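MAPI is itself a RESTful interface, so requests can be built the same way as data-access requests. This sketch only constructs (and does not send) a link-listing request; the admin.&lt;domain&gt;:9090 endpoint is HCP's MAPI convention, but the exact resource path used here is an assumption for illustration, so confirm the real resource names against the MAPI reference:

```python
import urllib.request

def mapi_request(hcp_dns_name, path, auth_token):
    """Construct a GET against the HCP management API (not sent here).
    auth_token is the <base64-user>:<md5-password> credential pair."""
    # illustrative path; real MAPI resource names come from the MAPI reference
    url = f"https://admin.{hcp_dns_name}:9090/mapi/{path}"
    req = urllib.request.Request(url, method="GET")
    req.add_header("Authorization", f"HCP {auth_token}")
    req.add_header("Accept", "application/xml")
    return req

req = mapi_request("hcp.example.com", "services/replication/links",
                   "bXl1c2Vy:3f3c6784e97531774380db177774ac8d")
print(req.full_url)
```

Sending the request (urllib.request.urlopen) requires network access to a live system with MAPI enabled at both the system and tenant levels.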

Page 7-9
Replication Activities
Implementation Notes Overview

Implementation Notes Overview

 Active-active links remove the restriction that requires tenants and namespaces to be fully
deleted on the remote side before being re-added to the link
 A mix of active-active and active-passive link is fully supported in a replication topology
• Use case: Active-active with common disaster recovery (DR) backup system
 Link type will be reported over in SNMP and in Hi-Track Remote Monitoring system
 Replication link can be moved to a separate Virtual Local Area Network (VLAN)
 Replication performance can be set up using replication schedule, priorities
Low/Medium/High/Idle/Custom
 Custom performance level can be set up in System Management Console (SMC)

© Hitachi Vantara Corporation 2018. All rights reserved.

• Active-Active links remove the restriction that requires tenants and namespaces to be
fully deleted on the remote side before being re-added to the link

• A mix of active-active and active-passive link is fully supported in a replication topology

o Use case: Active-Active with common Disaster Recovery (DR) backup system

• Link type will be reported over in Simple Network Management Protocol (SNMP) and in
Hi-Track

• This release removes the Pending and Accept link workflow

• The creation of empty links (no content selections) is now supported so that connectivity
between sites can be verified

• Namespace level pruning period on replica has been removed and system honors single
pruning period

• Custom performance level can be set up in SMC. The default is 5 threads for replication

Page 7-10
Replication Activities
Active-Active Links Persist Metadata First

Active-Active Links Persist Metadata First

 Exact same format as metadata-only, including the local stub

 Minimizes the potential for object collisions in the topology

 Makes data accessible from remote systems as quickly as possible


• Objects are initially persisted as metadata-only
• Remote reads stream from the remote system and persist locally on demand
• Data is transferred later in the background
• Metrics reflect the state of transferred data, not metadata

© Hitachi Vantara Corporation 2018. All rights reserved.

At any given time, content may exist locally in a metadata-only form:

• Removing content selections from a link provides a warning when any metadata-only
objects will be orphaned as a result

• Metadata-only content can still be repaired/accessed as long as the tenant remains
enabled on the link configuration

Page 7-11
Replication Activities
Limits, Performance and Networks

Limits, Performance and Networks

 Increased limits:
• Maximum of 5 outbound links are supported
• Maximum of 5 inbound links are supported
• Maximum of 5 active-active links (counts as 5 inbound and 5 outbound) per
system

 Performance scales according to node count, replicating namespace


count, region count, and object count

 Inbound and outbound replication traffic can be segregated on its own
network, effectively creating a pipe between two HCP systems

© Hitachi Vantara Corporation 2018. All rights reserved.

Failover
This section provides information on failover.

Page 7-12
Replication Activities
Automatic Failover/Failback Options

Automatic Failover/Failback Options

Supported for all link types

© Hitachi Vantara Corporation 2018. All rights reserved.

• Failover is required in active-passive replication links to make replica tenant read and
write. The failover can be either manual or automated

• It is possible to manually fail over in the System Management Console (SMC) of a
replica system

• It is possible to set up automated failover, for example, failover to the replica after
120 minutes of no heartbeat from the primary HCP system

• In active-passive replication configurations, failover means:

o Making replica tenant read and write

o Making replica HCP system handle redirected DNS requests (if Automated DNS
Failover enabled)

• Once the primary HCP system is back online, it is necessary to perform the recovery
process. During recovery, the replica serves clients while it replicates new data to the
former primary HCP system. Once the process is nearly finalized, both HCP systems
must enter Complete Recovery mode, during which the final synchronization is
achieved. Once the data on both HCP systems is an exact mirror, the primary HCP
becomes read-write and starts serving clients again, and the replica HCP resumes its
role as replica. During the Complete Recovery phase, both HCP systems are read-only.
The complete recovery procedure can be manual or automated


• Automated DNS failover is typically used in active-passive configurations. DNS is used to
redirect clients from a failed primary system to the replica. This involves modifying the
secondary DNS forward lookup zone and replacing the IP addresses of the HCP primary
with the IP addresses of the HCP replica. Once failover to the replica is triggered, the
replica HCP starts handling these DNS requests. There is no impact on clients: they keep
accessing the primary HCP name without knowing that DNS redirects them to the HCP
replica

• In active-active replication configurations, load balancers should be used, which
effectively removes the need for failover and recovery. Nonetheless, it is also possible to
perform failover and recovery in active-active (GAT) configurations
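The DNS redirection described above can be sanity-checked from a client. This is only a sketch: the tenant domain and DNS server names are placeholders, and the commands are printed rather than executed, since they depend on your corporate DNS setup.

```shell
# Print the queries an operator might run to confirm which HCP system's
# node IPs a tenant name currently resolves to. After DNS failover, the
# answers should switch from primary-node IPs to replica-node IPs.
# All names below are placeholders.
dns_failover_check() {
  echo "dig +short tenant1.hcp.example.com @corp-dns.example.com"
  echo "dig +short tenant1.hcp.example.com @replica-site-dns.example.com"
}
dns_failover_check
```

Comparing the two answers before and after failover shows whether the secondary forward lookup zone has taken over.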


Active-Active Failover Scenario 1

Use Case: Load balancers or application-controlled failover

 This is the preferred configuration, whenever possible

 Applications either:
• Use a load balancer to route requests to specific systems in the replication
topology
• Are made aware of the multiple systems in the replication topology and issue
requests directly to each system

 DNS failover and automatic failover/failback features should be disabled


Active-Active Failover Scenario 2

Use Case: Remote system failover using shared DNS


 This option works with existing active-passive configurations converted into
active-active links, using the existing workflow
 Assumes that the application to be failed over has no knowledge of the remote
system
 This configuration requires:
• A secondary zone definition in the corporate DNS server for each system
• The DNS failover option enabled in each HCP system
• An optional DNS automatic failover/failback

 System administrators can manually fail one system over to the remote side on
demand

Active-Passive Failover Scenario

1. Begin Failover

2. Restore Link

3. Begin Recovery

4. Complete Recovery


Begin Failover – clicking Failover begins the failover process

• DNS zone on HCP is updated accordingly to route locally

• If DNS Failover is enabled, the remote system becomes inaccessible via DNS

• Local system becomes writable and remote system (if accessible) becomes read-only

Restore Link – ensures that the link definition exists on the remote system

Begin Recovery – restore content from the local DR site to the primary system

• Primary system remains read-only and the local system remains read-write

Complete Recovery – perform the final synchronization with the primary system

• Both primary and replica systems become read-only

• Failback to the primary system occurs automatically once synchronization completes


Active-Active Failover Scenario

1. Begin Failover

2. Restore Link

3. Fail Back


Begin Failover – clicking Failover begins the failover process

• Local DNS zone on HCP is updated accordingly to route locally

• If DNS Failover is enabled, remote system becomes inaccessible via DNS

Restore Link – ensures that the link definition exists on the remote system

Fail Back – update DNS zone files to route back to the primary system


Distributed Authoritative DNS Systems

…and active-active topologies

 Multiple authoritative DNS systems across the network routing requests to
different HCP systems

 Reads and writes are always local and fast

 Failover in these configurations depends on corporate infrastructure

[Figure: a single name, myHCP.acme.com, resolved by authoritative DNS systems in Paris,
London, New York, Seattle, Beijing, and Hong Kong]

• Configuration requirements:

o Multiple authoritative DNS systems across the network routing requests to
different HCP systems participating in an active-active topology

o DNS configuration routes to the appropriate HCPs in the topology based on
which network the requests are made on

• In an active-active topology:

o Edits at any location are synced to the others in the topology

o Reads and writes are always local and fast

• Keep in mind the following:

o Failover in these configurations depends on corporate infrastructure

o DNS failover features should be disabled on all systems

Geographically Distributed Data Protection


This section covers information on geographically distributed data protection.


What Is Geo Distributed Data Protection?

 In HCP, replication is used to protect objects and make them available

 Geographically Distributed Data Protection (called geo-protection) uses systems
configured for cross-system data management in separate geographic locations

 This cross-system management helps ensure that data is well-protected against
the unavailability or catastrophic failure of a system

 Depending on the number of systems and how you configure the relationships
between them, data may even be protected against the concurrent unavailability
or loss of more than one system

 Geo-protection is implemented by the HCP replication service



Geo-Protection Offers Several Benefits

 If a system in a replication topology becomes unavailable, a remote system can
provide continued data availability

 If a system in a replication topology suffers irreparable damage, a remote system
can serve as a source for disaster recovery

 If the content verification or protection service discovers objects it cannot repair
on a system in a replication topology, the service can use object data and
metadata from a remote system to make the needed repairs

 If an object cannot be read from a system in a replication topology (for example,
because a node is unavailable), HCP can try to read it from a remote system


Protection Types

 HCP supports two types of geo-protection: whole-object protection and
erasure-coded protection

 With whole-object protection, all the data for each object in a replicated
namespace is maintained on each HCP system in a replication topology

 With erasure-coded protection, the data for each object in a replicated
namespace is subject to erasure coding
• With erasure coding, the data is encoded and broken into multiple chunks
that are then stored across multiple HCP systems
• All but one chunk contains object data
• The other chunk contains parity for the object data

Protection Types for Namespaces

 Protection types apply to individual namespaces
• Only cloud-optimized namespaces support erasure-coded protection

 When you create a replication-enabled tenant, you choose between:
• Allowing erasure-coded protection for all cloud-optimized namespaces
owned by the tenant
• Allowing the tenant administrator to allow erasure-coded protection for
selected cloud-optimized namespaces


Geo-Distributed Erasure Coding Service Processing

 The geo-distributed erasure coding service is responsible for ensuring that
objects that are or have ever been subject to erasure coding are in the correct
state at any given time

 On any given HCP system, the geo-distributed erasure coding service runs
according to a system-specific schedule

 The service on one system can query other HCP systems in the
topology for the state of the objects on those systems even when the
service is not running on those systems


Replication Topologies

 In two directions on a single link between two HCP systems
• Active-active replication

 In one direction on a single link between two HCP systems
• Active-passive replication

 From multiple HCP systems to a single HCP system
• Many-to-one replication


 From one HCP system to a second HCP system and from that second
system to a third HCP system, such that the same HCP tenants and
namespaces and default-namespace directories that are replicated to
the second system are then replicated to the third system
• Chained replication

 From one HCP system to multiple other HCP systems
• One-to-many replication

 These configurations can be combined to form complex replication topologies


Considerations for Cross-Release Replication

 HCP release v8.x systems support replication with other v8.x systems and
with v7.x systems
• HCP does not support replication between v8.x systems and systems
earlier than v7.0

 Replication between a v8.x system and a v7.x system is called
cross-release replication

 Erasure coding topologies cannot include HCP systems at a release
earlier than v8.0

 HCP cannot replicate multipart objects between a v8.x system and a
v7.x system

Working With Erasure Coding Topologies

 An erasure coding topology provides all the information the replication service
and geo-distributed erasure coding service need to implement erasure-coded
protection
• You can create and manage an erasure coding topology in the HCP System
Management Console from any of the HCP systems included in the topology

 To create an erasure coding topology, you select the HCP systems and
replication links to include in the topology
• You also set the topology properties
• After you create the topology, you add HCP tenants to it
• You can modify the properties of an erasure coding topology at any time

Geo-EC Setup

 Before you can create an erasure coding topology, the HCP system must be
connected to at least two other HCP systems by active-active replication links
• An HCP system can participate in only one active erasure coding topology
• There can be a maximum of five erasure coding topologies at any given time,
regardless of state (active, retiring, or retired)


Steps to Create a Geo-EC Configuration

1. Create replication links among 3 to 6 HCP systems.

2. Enable the Geo-EC service.

3. Select the systems to take part in the topology.

4. Add Tenants.


Replication Verification Service


In this section, you will learn about the Replication Verification Service.


Replication Verification Service (RVS)

DDP – Distributed Data Protection
The Protection Service ensures the data in a cluster is protected by enforcing the Data
Protection Level (DPL) for your selected service plan

Hardened Migrations
Customers can use replication to migrate with confidence, knowing that RVS will ensure
their migration is successful


• Replication Verification Service (RVS) provides Distributed Data Protection (DDP) across
your replication topology. It will ensure that every HCP system that should have a copy
of an object has a copy of the object

• With Replication Verification Service, the customer can confidently use replication for
migration. RVS will make sure all objects are replicated, and on the off chance there are
objects which cannot be replicated, RVS will provide concise reports in the SMC and
Tenant Management Console (TMC)


RVS: How Does It Work?

 Verifies that every object is on both sides of the namespace
• Uses the same parameters as each link
• Skips objects that have already been replicated
• Replicates objects that do not exist on the other end
• Creates a report of all objects

 No need to dump and diff databases anymore


So how does RVS work?

• RVS verifies that every object is on both sides of the namespace

• It uses the same parameters as each replication link

• Objects that already exist on both sides are skipped

• Objects that do not exist on one end are replicated over

• If an object cannot be replicated, it is labeled in a report as non-replicated


RVS Setup

 Check the Verify replicated objects checkbox to enable RVS

 You can choose a one-time run or continuous running

[Screenshot: Replication > Settings page with the Verify replicated objects checkbox and
the run-once/continuous option]

As you can see under Replication > Settings, there is a checkbox labeled Verify
replicated objects

• You can choose to either run once or always run

• If you select run once, RVS runs one time as soon as you click the Update button

• If you select Always verify replicated objects, RVS creates its own schedule and runs
constantly


RVS Running Status

 Check the RVS running status on the replication link page

[Screenshot: replication link page showing the running status]

On the replication link page, you can see a Verifying status that shows the last completed
pass, the current status, and issues found. Issues found tells you whether RVS found any
objects that cannot be replicated onto the other HCP, for example because the files are
open or corrupted.


RVS Results


 Check overall RVS results at SMC > Replication > Overview > Issues found
overlay

 See which objects are not replicated, and for what reasons, at TMC > <namespace> >
Monitoring

Lastly, under Issues in the System Management Console, there is a list showing how many
objects are not replicated for each tenant. In addition, under Monitoring in the Tenant
Management Console, you can see the object names and the reasons why they were not
replicated.

Load Balancers
This section covers load balancers.


Load Balancer Overview

 Local Traffic Manager (LTM)

[Figure: clients connect to a Load Balancer, which distributes requests across a Server Pool
of back-end servers]

• Discuss how a load balancer enables many clients to access a server farm through a
single FQDN

• Spreading load across a server pool that includes all back-end servers

• While allowing single servers to be added or removed


Load Balancer With Single HCP

 Load Balancer monitors HCP nodes for availability - TCP and http(s)

 An unavailable node is discarded from the pool

[Figure: applications accessing a single HCP system through a load balancer]

• Make sure you monitor the HCP nodes for TCP and https

• Be aware that HCP goes into R/O mode if too many nodes are offline


Load Balancer With Pair of Replicated HCP

 If there is a WAN link between the HCP sites, prioritize the local HCP nodes

 A rule is required that takes into account that an HCP cluster can enter
read-only mode if too many nodes are down

[Figure: applications accessing two GAT-replicated HCP systems (HCP 1 and HCP 2)
through a load balancer]

• Make sure you monitor the HCP nodes for TCP and https

• Be aware that HCP goes into R/O mode if too many nodes are offline

• In this case, the Load Balancer needs to take all cluster nodes offline


What About Distributed Sites?

 Distributed sites, especially with Global Access Topology (GAT)
(active-active replication):
• Higher redundancy, availability, reliability
• Data is made available close to the consumer
• Lower WAN cost, bandwidth, latency

 …at the cost of higher complexity:
• How to force access to stay local?
• How to direct traffic to another site if needed?
• How to automate?



Global Traffic Manager (GTM)

 A Global Traffic Manager improves the performance and availability of
applications by intelligently directing requests to the closest or
best-performing data center

 GTM provides intelligent DNS functionality

 In a multisite environment with two or more HCP systems using GAT,
it is a reasonable addition to the Local Traffic Managers at each site


• Now, let's broaden our view to multi-site environments

• GTM is intelligent DNS

o It's out of the data path

o It monitors the local resources for availability, like a Local Traffic Manager (LTM)
does

• It can be used with LTMs, too


GTM With Replicated HCPs

[Figure: two sites, each with Apps, an LTM with a server pool, a GTM, and an HCP system;
the HCPs are joined over the WAN by GAT replication into the joined tenant
tenant.hcp.dom.com. The GTMs answer DNS queries such as n1.tenant.hcp.dom.com,
keeping access local and directing clients to the remote site only when needed]


Global Traffic Manager

 Corp. DNS will forward requests for all HCPs to GTM

 GTM:
• Monitors all (!) HCP nodes
• Needs similar rules as the LTM
• Answers DNS queries with the best-fitting HCP's IP addresses
• May (but does not need to) point to an LTM per site


Admin Commands
This section covers information about admin commands.

Admin Commands Overview

 Existing replication admin commands have been updated to allow new link
options to be set and displayed
• admin jvm replication create
• admin jvm replication update
• admin jvm replication list

 Namespace admin commands have been updated
• Collision handling and disposition policies added


Failover related admin commands have been updated to account for new active-active failover
workflow

• admin jvm replication failover

• admin jvm replication restoreLink

• admin jvm replication startRecovery

• admin jvm replication finishRecovery
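These commands map onto the four-step active-passive failover workflow described earlier in this module. The sketch below prints a hypothetical sequence; "repl-link-1" is a placeholder link name, and the commands are only echoed here because on a real system they run in the HCP admin shell.

```shell
# Print the four failover/recovery admin commands in workflow order.
# Step comments follow the active-passive scenario in this module.
replica_failover_steps() {
  echo "admin jvm replication failover repl-link-1"       # 1. Begin Failover (run on the replica)
  echo "admin jvm replication restoreLink repl-link-1"    # 2. Restore Link (ensure the link definition exists)
  echo "admin jvm replication startRecovery repl-link-1"  # 3. Begin Recovery (copy new data back to the primary)
  echo "admin jvm replication finishRecovery repl-link-1" # 4. Complete Recovery (final sync; both systems read-only)
}
replica_failover_steps
```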


Admin Commands Reference

 Use the listThreadInfo command to determine the current state of replication
threads:
• Includes work queue size, success count, failure count, current work being
processed, and any information about the last error encountered
• Can filter output to include information for globals, metadata-first, or region-based
data collection and processing


# admin jvm replication listThreadInfo

replication listThreadInfo <linkId|linkName> [--python] [--verbose] [--regions] [--globals] [--metadata]

Displays real time information about all the replication threads and queues for the given link.
Information such as thread queue size, EF being worked on per thread, pauseResumeType of
change operation, thread state, and so on, is displayed. It is helpful to pipe this command to
the "watch" command in order to see how the threads and queues are progressing. Use the
"admin jvm replication list" command to get the linkId that can be used to pass to this
command.

• The --python flag tells this command to print the output in python dictionary format

• The --verbose flag tells this command to print extra detailed information

• The --regions flag tells this command to print region details only

• The --globals flag tells this command to print global details only

• The --metadata flag tells this command to print metadata first details only
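Following the suggestion above to pipe the command to watch, a monitoring loop might look like the sketch below. The link name is a placeholder, and the commands are printed rather than executed since they belong in the HCP admin shell.

```shell
# Print a two-step monitoring recipe: first list links to find the linkId,
# then refresh the per-thread replication view every 2 seconds, limited
# to region details. "repl-link-1" is a placeholder link name.
show_thread_monitor() {
  echo "admin jvm replication list"
  echo "watch -n 2 'admin jvm replication listThreadInfo repl-link-1 --regions'"
}
show_thread_monitor
```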


 Use the getProgress command to determine the current state of replication
progress:
• Determine which content has been replicated and which content is still
pending replication
• Identify the [n] oldest checkpoints to help identify and triage slow namespace
progress


# admin jvm replication getProgress

replication getProgress <linkId|linkName> [--earliest [count] | [--globals | --metadata | --namespaces <namespaceUUIDs>]]

Return a list of strings describing the region progress checkpoints (in milliseconds since 1970)
for the given link. All objects changed before this time are guaranteed to be replicated.

• With --earliest, returns the earliest progress checkpoint for that link, across all object
types. The optional [count] field may be specified to list the earliest [n] checkpoints.
May be utilized with the --metadata flag to return metadata checkpoints instead

• With --globals, region and metadata checkpoints are excluded

• With --metadata, metadata checkpoints are included and region checkpoints are
excluded

• With --namespaces, only the region checkpoints for the namespaces in the specified
comma-separated list are listed. If omitted, the checkpoints for all namespaces are
listed. May optionally be used with --metadata
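Because checkpoints are reported in milliseconds since 1970, a checkpoint can be turned into a rough replication-lag estimate with shell arithmetic. The checkpoint value below is a fabricated example, not real getProgress output.

```shell
# Convert an example getProgress checkpoint (ms since 1970) into a lag
# estimate in seconds. All objects changed before the checkpoint time
# are guaranteed to be replicated.
checkpoint_ms=1535000000000   # fabricated example value (Aug 2018)
now_ms=$(( $(date +%s) * 1000 ))
lag_s=$(( (now_ms - checkpoint_ms) / 1000 ))
echo "objects changed more than ${lag_s}s ago are guaranteed replicated"
```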


System Events
This section covers system events.

New System Events and Alerts

 The following new events appear in the admin log:
• Collision count in the last 24 hours
• Manual/automatic failover/failback
• Time skew notification between replicating systems


The following new alerts have been added to the System Management Console > Overview
page:

• Warning: Time is out of sync between HCP systems on replication link

o Indicates that automatic collision handling may operate incorrectly

o It's important to keep the times on the two systems synchronized within two
minutes of each other to prevent improper collision handling


System Log Events – Reference

Event ID Event Level Severity Description

2105 cluster Notice Replication link created

2106 cluster Notice Replication link suspended

2107 cluster Notice Replication link resumed

2108 cluster Error Replication link failure

2109 cluster Warning Replication link deleted

2110 cluster Notice Replication link read-only

2111 cluster Notice Replication link authorized

2112 cluster Notice Replication link updated


Performance
This section covers performance.


Performance Overview

 Outbound link limit per cluster has increased from two to five
 Data is visible and accessible from remote systems 9x faster for
active-active topologies
 Data is fully protected 42% faster for active-passive topologies
[Chart: PUT and GET operation rates compared across the Baseline, Spray 1-2, BiRepl.,
WDR BiRepl., and BiRepl. configurations]

• Replication throughput scales with additional nodes

• Replication namespaces per link scales with additional nodes

• Moderate performance overhead from each additional outbound link


Module Summary

 In this module, you should have learned to:


• Describe active-passive and active-active replication links
• Create a replication link and authorize (confirm) it to start the flow of data
• Monitor the replication process
• Describe Hitachi Content Platform (HCP) failover procedure
• Describe replication verification service operations
• Identify load balancer role
• Discuss admin commands, system events and performance


Module Review

1. How many replication links can be created on an HCP system?

2. How are metadata replicated in active-active link?

3. Is replication a licensed feature?

4. What are distance limitations for HCP replication?

8. Support Activities
Module Objectives

 Upon completion of this module, you should be able to:
• Generate and use chargeback reports
• Identify different logs used in Hitachi Content Platform (HCP)
• Download internal logs for Hitachi Support
• Monitor a Hitachi Content Platform (HCP) system


Chargeback
This section covers information on chargeback.

Chargeback Features

 Chargeback is a metrics collection and reporting mechanism to provide
information about HCP authenticated tenants and namespaces
• User can generate chargeback reports
• Collects data activity and usage metrics (system bandwidth and capacity)
• Reports can be used as input to billing applications
• Collection via 2 interfaces: GUI and MAPI


Chargeback logs can be used to monitor namespace usage patterns. They are downloaded
from HCP in .csv format, which can be imported into an MS Excel table. Chargeback log
downloads can be automated with a tool called Chargeback Collector, which is basically a
script that downloads chargeback logs from HCP using MAPI (the Management API) on a
regular basis.
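Such a script needs the MAPI Authorization token, which HCP builds from the Base64-encoded username and the hex MD5 of the password. The sketch below computes the token and prints (but does not execute) a chargeback download request; the hostname, tenant name, endpoint path, and query parameters are assumptions to be confirmed against the HCP Management API documentation.

```shell
# Build the HCP MAPI Authorization token: HCP <base64(user)>:<md5hex(pass)>
user="monitor"        # placeholder account
pass="password"       # placeholder password
b64user=$(printf '%s' "$user" | base64)
md5pass=$(printf '%s' "$pass" | md5sum | awk '{print $1}')
token="HCP ${b64user}:${md5pass}"
echo "$token"

# Example request (printed only): a daily-granularity CSV chargeback report
# for a hypothetical tenant over the MAPI port. Endpoint details are assumed.
echo "curl -k -H 'Authorization: ${token}' -H 'Accept: text/csv' \\"
echo " 'https://admin.hcp.example.com:9090/mapi/tenants/finance/chargebackReport?granularity=day'"
```

Scheduling a script like this with cron is what turns one-off GUI downloads into the regular collection the Chargeback Collector performs.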


Chargeback

[Figure: HCP (100 TB) with tenants Engineering (2 TB, namespaces Eng_01 and Eng_02),
Finance (5 TB, Fin_01 and Fin_02), and Legal (10 TB, Leg_01, Leg_02 and Leg_03); the IT
back office collects CSV/XML reports over HTTP (for example with cURL), imports them into
Microsoft Excel or a billing system, and bills the departments]

Features at a Glance:
• API, CSV and XML enable billing system integration
• Rollup from namespace to tenant to cluster
• Capacity and operation statistics tracked

Customer Benefits:
• Amortize system costs across business units
• Business basis for private and public clouds
• Flexible billing models: capacity, operations or both

• GUI — predefined CSV formatted report with summaries

• MAPI — REST API providing full functional data collection

• Log into GUI with System or Tenant User Accounts

o Must have Administrator or Monitor role

o System user

o Tenant user

• Special Consideration

o Data collection is performed in memory and flushed to disk (internal HCP
disk space) every five minutes


Chargeback Metrics

Column(s) Description
systemName DNS name for HCP Cluster Name for record
tenantName Tenant name for record, if blank, it is a summary line for HCP system
namespaceName Namespace name for record, if blank, it is summary line for either the
tenant or the system
startTime Start time for record, it will typically be the beginning of an hour for the
granularity requested
For example: 2010-08-26T08:00:00-0400
endTime End time for record, it will typically be the end of the hour or the time of
the collection for active bucket (that is time of collection)
For example: 2010-08-26T08:59:59-0400


In these reports, Point-in-time values are what the value was at the moment the bucket was
returned (that is, at the end of the latest bucket for the record or instant when active bucket
was collected).

Column(s) Description
objectCount • Point-in-time value of the number of end-user objects in the
system at the end of the data bucket collection
• This value includes both data objects and custom-metadata-only
objects
ingestedVolume • Point-in-time value in bytes of the amount of end-user object and
custom-metadata ingested for the tenant/namespace being
reported
• The overhead of directories is not included in this value
storageCapacityUsed Point-in-time value in bytes of the raw storage used to store and
protect end-user data: (# of 4KB blocks of user data * 4KB * DPL)
• Smallest allocation size on the system is 4KB blocks
• Includes object data and custom-metadata only
• Includes hidden versions of content as well
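A worked example of the storageCapacityUsed formula above, assuming a hypothetical 10 KB object stored at DPL 2:

```shell
# storageCapacityUsed = (# of 4KB blocks of user data) * 4KB * DPL
object_bytes=10240                          # 10 KB example object
dpl=2                                       # example Data Protection Level
blocks=$(( (object_bytes + 4095) / 4096 ))  # round up to whole 4KB blocks -> 3
capacity=$(( blocks * 4096 * dpl ))
echo "$capacity"                            # 24576 bytes of raw storage
```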


Column(s) Description
bytesIn/bytesOut Number of bytes transmitted as part of the HTTP message body into
and out of the HCP system
reads/writes/deletes Count of read, write and delete operations against the
namespace/tenant being reported
This includes operations against objects, custom-metadata and
directories
deleted Indicates that the data record represents namespace(s) data which
has been deleted but existed during the data collection time frame
valid True/false field that indicates if there was a problem collecting data
collection stats from all nodes in the cluster during the period for
the specific record


bytesIn/bytesOut consists of object data, custom-metadata and directory listing results.
Data in HTTP headers (for example, existence checks, system/object-level metadata, HTTP
response status, and so on) is not counted. The deleted field takes the values true, false
and included; included means a summary value from deleted namespaces.
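As a toy illustration of working with these columns in a downloaded report, the snippet below sums ingestedVolume per tenant from a two-line CSV sample. The records are fabricated, and real chargeback reports carry more columns than shown here.

```shell
# Sum column 4 (ingestedVolume) grouped by column 2 (tenantName),
# skipping the header row. Sample records are fabricated.
sum_by_tenant() {
  awk -F, 'NR > 1 { sum[$2] += $4 } END { for (t in sum) print t, sum[t] }' "$@"
}

sum_by_tenant <<'EOF'
systemName,tenantName,namespaceName,ingestedVolume
hcp.example.com,Engineering,Eng_01,1048576
hcp.example.com,Engineering,Eng_02,2097152
EOF
```

This prints one line per tenant, here "Engineering 3145728", which is the kind of rollup a billing application would compute from the CSV.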

Column(s) Description

multipartObjectBytes The total number of bytes of object data in all the parts of multipart
objects currently stored in the given namespace or in all the
namespaces owned by the tenant

multipartObjectParts The total number of parts of multipart objects currently stored in
the given namespace or in all the namespaces owned by the tenant

multipartObjects The total number of multipart objects currently stored in the given
namespace or in all the namespaces owned by the tenant

multipartUploadBytes The total number of bytes of object data in all the successfully
uploaded parts of multipart uploads that are currently in progress in
the given namespace or in all the namespaces owned by the tenant


Table entries added for MPU stats – new in v8


Column(s) Description
multipartUploadParts The total number of successfully uploaded parts of multipart
uploads that are currently in progress in the given namespace
or in all the namespaces owned by the tenant
multipartUploads The total number of multipart uploads that are currently in
progress in the given namespace or in all the namespaces
owned by the tenant


Table entries added for MPU stats – new in v8

Chargeback Reporting Fundamentals

 Report records are derived from namespace hourly buckets


 Reporting specifications for result set consist of:
• Output format:
o XML – eXtensible Markup Language
o JSON – JavaScript Object Notation
o CSV – Comma Separated Values
• Time range specification
 Hourly buckets inclusive of time range requested

© Hitachi Vantara Corporation 2018. All rights reserved.


 Reporting specifications for result set consist of (continued):


• Report granularity
 Hour – each record represents single hour
 Day – each record represents 24 hour period
 Month – each record represents single month

© Hitachi Vantara Corporation 2018. All rights reserved.
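The granularity options above can be sketched in code: report records are derived from namespace hourly buckets, and the coarser granularities are roll-ups of those buckets. The record layout and function below are illustrative assumptions for teaching purposes, not the actual HCP implementation.

```python
from collections import defaultdict

# Illustrative sketch only: roll hourly chargeback bucket records up to
# daily granularity. The field names (startTime, bytesIn, bytesOut) mirror
# columns discussed in this module; the record layout is an assumption.
def rollup_daily(hourly_records):
    days = defaultdict(lambda: {"bytesIn": 0, "bytesOut": 0})
    for rec in hourly_records:
        day = rec["startTime"][:10]  # "2018-03-01T13:00:00" -> "2018-03-01"
        for field in ("bytesIn", "bytesOut"):
            days[day][field] += rec[field]
    return dict(days)

hourly = [
    {"startTime": "2018-03-01T00:00:00", "bytesIn": 100, "bytesOut": 10},
    {"startTime": "2018-03-01T01:00:00", "bytesIn": 200, "bytesOut": 20},
    {"startTime": "2018-03-02T00:00:00", "bytesIn": 50, "bytesOut": 5},
]
print(rollup_daily(hourly))
```

A month-granularity roll-up would key on `rec["startTime"][:7]` instead of the first 10 characters.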

System Administrator Reports consist of:

• Metrics for all namespaces for all tenants on this system

o System roll-up of all tenants on system

o Tenant roll-up of all namespaces for all tenants on system

o Detail namespace metrics, if tenant(s) allow system-user management

Tenant Administrator Reports consist of:

• Metrics for all namespaces for the tenant:

o Detail namespace metrics for all namespaces

o Tenant roll-up of all namespaces for this tenant

System Logs
This section covers system logs.


Types of Logs

 System events (an audit log)


• The log records system events such as:
 Nodes and services starting; nodes failed
 Changes to the system configuration
 Logical volume failures
 User logins to the HCP System Management Console
 Attempts to log into the System Management Console with an invalid username
 The log size is unlimited

© Hitachi Vantara Corporation 2018. All rights reserved.

 Syslog logging
• HCP sends system log messages to one or more syslog servers
 When you do this, you can use tools in your syslog environment to perform
functions such as sorting the messages, querying for certain events, or forwarding
error messages to a mobile device

• Tenant-level administrators can choose to include tenant log messages along
with the system log messages sent to the syslog servers

© Hitachi Vantara Corporation 2018. All rights reserved.


 Simple Network Management Protocol (SNMP) logging


• HCP can send the System Log to one or more SNMP managers

 Email alerts
• Allow HCP system and tenant level administrators to receive email
notification of HCP health events

 Internal logs
• Record the processing activity of various components of HCP
• If a problem occurs with HCP, can help HCP support personnel diagnose and
resolve it
• Are kept for up to 35 days
© Hitachi Vantara Corporation 2018. All rights reserved.

Log Management Controls

 Marking the internal logs

• Add a comment to the internal logs

 Downloading the internal logs

• Download the internal logs to a file on the Management PC

© Hitachi Vantara Corporation 2018. All rights reserved.


Download Internal Log

 New log download options:


• Users now have the ability to:
 Download only specific categories of logs
 Perform log download using MAPI commands

• Log files are now leaner and less noisy:


 Older samba logs are now rotated
 Low-value log entries have been moved to lower log levels

• Log downloads can now be initiated during online upgrade

© Hitachi Vantara Corporation 2018. All rights reserved.

You can select which logs should be collected from which nodes. It is also possible to specify
the log timeframe.

HCP v7.2.1 will include a log Triage tool expected to launch April 1, 2018. While the initial
targeted users are primarily the HCP Sustaining team, use could be extended to GSC, QA,
automation, and developers. The main goal of this project is to help reduce the manual
sustaining effort of triaging a support issue. The "offline" tool will speed up issue triaging by
providing configurability around extraction, indexing, analysis and visualization of HCP logs.
The HCP Sustaining team depends heavily on logs downloaded from an HCP system, both for
post-analysis of a problem that has already occurred and for triaging a problem that blocks a
certain function of the system and therefore requires a quick turnaround on the root cause and
fix. The Triage tool will be built as a web application that provides a simple web-based
interface for easy navigation, and will leverage search technologies from HCI (Hitachi Content
Intelligence) for the actual analysis and visualization of the results.


Log Download Enhancements

 Default behavior:
• Consistent with HCP pre-v7.2 behavior
• All log download types are selected
• All HCP nodes are selected

 User may remove unwanted nodes or log types

© Hitachi Vantara Corporation 2018. All rights reserved.


• Note that these boxes are greyed out until the preparation for download is complete.


Log Download Enhancements – MAPI

 Log download via MAPI retains the same set of functionality as downloading
through the UI:
• Mark logs
• Prepare logs
• Select log types for download
• Select nodes to download from
• Check log download status
• Cancel log download

© Hitachi Vantara Corporation 2018. All rights reserved.

 Example: checking download status via MAPI


$ curl -k -b hcp-api-auth="..."
"https://admin.myhcp.domain.com:9090/mapi/logs?prettyprint"

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>


<logDownloadStatus>
<readyForStreaming>true</readyForStreaming>
<streamingInProgress>false</streamingInProgress>
<error>false</error>
<started>true</started>
<content>ACCESS,SYSTEM,SERVICE,APPLICATION</content>
</logDownloadStatus>

© Hitachi Vantara Corporation 2018. All rights reserved.
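The status document above can be parsed with any XML library before deciding whether to start streaming the logs. A minimal client-side sketch using Python's standard library (the payload is the one shown on the slide; in practice it would come from the MAPI response body):

```python
import xml.etree.ElementTree as ET

# Parse the logDownloadStatus XML returned by GET /mapi/logs and decide
# whether the prepared logs are ready to stream. Client-side sketch only.
status_xml = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<logDownloadStatus>
  <readyForStreaming>true</readyForStreaming>
  <streamingInProgress>false</streamingInProgress>
  <error>false</error>
  <started>true</started>
  <content>ACCESS,SYSTEM,SERVICE,APPLICATION</content>
</logDownloadStatus>"""

root = ET.fromstring(status_xml)
ready = root.findtext("readyForStreaming") == "true"
failed = root.findtext("error") == "true"
log_types = root.findtext("content").split(",")
print(ready, failed, log_types)
```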


 Example: download logs from two nodes and 1 S Series node


$ cat params.xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<logDownload>
<nodes>101,104</nodes>
<snodes>S10-22333</snodes>
<content>APPLICATION</content>
</logDownload>

$ curl -O -J -X POST -k -b hcp-api-auth="..." -d @params.xml


"https://admin.myhcp.domain.com:9090/mapi/logs/download"

© Hitachi Vantara Corporation 2018. All rights reserved.
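Rather than hand-writing params.xml, the request body can be generated programmatically. A sketch using Python's standard ElementTree module, reproducing the body shown above (the helper function name is illustrative):

```python
import xml.etree.ElementTree as ET

# Sketch: build the logDownload request body (nodes, S Series nodes and
# log categories) instead of hand-editing params.xml. Illustrative only.
def build_log_download_params(nodes, snodes, content):
    root = ET.Element("logDownload")
    ET.SubElement(root, "nodes").text = ",".join(nodes)
    ET.SubElement(root, "snodes").text = ",".join(snodes)
    ET.SubElement(root, "content").text = ",".join(content)
    return ET.tostring(root, encoding="unicode")

body = build_log_download_params(["101", "104"], ["S10-22333"], ["APPLICATION"])
print(body)
```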


Module Summary

 In this module, you should have learned to:


• Generate and use chargeback reports
• Identify different logs used in Hitachi Content Platform (HCP)
• Download internal logs for Hitachi Support
• Monitor an HCP system

© Hitachi Vantara Corporation 2018. All rights reserved.

Module Review

1. What is the format of Chargeback logs?

2. What is the purpose of internal logs?

3. Are internal logs encrypted?

4. Can I download internal logs only for a specific node?

© Hitachi Vantara Corporation 2018. All rights reserved.

9. Solutions
Module Objectives

 Upon completion of this module, you should be able to:


• Create a solution for branch offices using Hitachi Data Ingestor (HDI)
• Describe and use Hitachi Content Platform (HCP) Anywhere
• Describe and use Hitachi Content Intelligence
• Identify HCP Integration with Independent Software Vendors (ISVs)
Middleware

© Hitachi Vantara Corporation 2018. All rights reserved.


HCP Solutions and Supported ISVs

 Through its open API interfaces, an HCP system can be integrated with multiple
products, both hardware and software, offered by Hitachi Vantara and
ISVs (independent software vendors), for example:
• HCP can create a solution with Hitachi Data Ingestor (HDI), including HDI
with Remote Server appliance (EOS), Single Node, VMA, or Cluster
• HCP can create a solution with HDI and third-party NAS devices
• HCP can create a solution with HCP Anywhere
• HCP can create a solution with Hitachi Content Intelligence
• HCP can create a solution with Hitachi Data Instance Manager and ISV
middleware

© Hitachi Vantara Corporation 2018. All rights reserved.

HCP and HDI together form a solution for remote offices/branch offices.
The HCP system is located in your data center (the core) and HDI is typically deployed
remotely in the branch office.
Advantages of HDI are:
• It migrates all data to HCP
• It works as a cache; when it starts running out of local capacity, it stubs the files and, on
read, it rehydrates them from HCP
• It is backup free; it backs up all configuration to HCP automatically
• It can use the entire HCP capacity
• It is easy to manage
• It can be used for NAS migrations
HNAS F (Hitachi NAS F) offers the same features as HDI in terms of integration with HCP
HNAS can be also integrated with an HCP system
HCP Anywhere allows you to build your own on-premises cloud, enabling your employees to
synchronize their data on BYOD (bring your own device) devices.
• HCP can also create a solution with Content Audit Services and Data Archiving powered
by Arkivio
• HCP can also create a solution with Hitachi Content Optimization for Microsoft
SharePoint


HCP Solution With HDI

 Operating as an on-ramp for users and applications at the edge, HDI connects to Hitachi
Content Platform (HCP) at a core data center; users work with it like any Network File
System (NFS) or Common Internet File System (CIFS) storage

 HDI is essentially a caching device; it provides users and applications with seemingly
endless storage and a host of newly available capabilities

 For easier and more efficient control of distributed IT, HDI comes with a Management API
(MAPI) that enables integration with Hitachi Content Platform's management

(Diagram: clients use standard file system access protocols, CIFS and NFS, to reach HDI,
which connects to HCP over the HTTPS REST API and the Management API)

© Hitachi Vantara Corporation 2018. All rights reserved.

HDI is essentially a caching device. It provides users and applications with seemingly endless
storage and a host of newly available capabilities. Furthermore, for easier and more efficient
control of distributed IT, Hitachi Data Ingestor comes with a Management API that enables
integration with Hitachi Content Platform's management UI and other third-party or
home-grown management UIs.

Because of the Management API on the Data Ingestor, customers can even integrate HDI
management into their home-grown management infrastructures for deployment and ongoing
management.


Elastic and Back Up Free

(Diagram: web apps and data at the remote edge connect to the corporate content core
storage)
© Hitachi Vantara Corporation 2018. All rights reserved.

Elastic, back-up free branch sites:

• Small footprint

• Expand and contract as needed

• No need for local back up or IT staff

• Easy to set up, manage and adapt

• Works with existing applications

• Stores relevant data locally; links the remaining data to the content core


Available HDI Configurations

Four choices of Hitachi Data Ingestor

1. HDI Cluster
• Highly available cluster pair
• SAN-attached to Hitachi storage
• Supports HUS, HUS VM, VSP, VSP G1000

2. HDI Single Node
• Non-redundant configuration
• Internal storage (RAID-6 configuration)

3. HDI Virtual Machine (VMA)
• Non-redundant configuration
• Customer-defined hardware and storage
• Also Hyper-V

4. Remote Server
• Non-redundant configuration
• Internal storage
• Configured through HCP Anywhere
• Note - EOS

© Hitachi Vantara Corporation 2018. All rights reserved.

The GUI will change depending on the type of HDI:

• HDI Cluster will be managed using Hitachi File Service Manager (HFSM)

• HDI Single Node and the VMware format will be managed using the Integrated
Management GUI

• Remote Server will be managed through HCP Anywhere

HUS = Hitachi Unified Storage

VSP = Hitachi Virtual Storage Platform


HDI Maps to HCP Tenants and Namespaces

 Clients write to assigned file systems


 Each file system is mapped to its designated namespace
 Each namespace can be shared by multiple HDIs for read-only
(Diagram: clients at Branch A, Branch B and Branch C write to file systems FS 1 and FS 2 on
their local HDI; each file system maps to its own namespace under Tenant A, Tenant B or
Tenant C on the Hitachi Content Platform, and a namespace can be shared read-only (RO)
across HDIs)
© Hitachi Vantara Corporation 2018. All rights reserved.

Benefits
• Satisfy multiple applications, varying SLAs and workload types or organizations

• Determine utilization and chargeback per customer

• Edge dispersion: each HDI can access another when set up that way

• Enable advanced features at one branch or at more granular level

• Examples: replication, encryption, DPL levels (how many copies to keep), compliance
and retention, compression and versioning


Single HCP Tenant Solution for Cloud

 Supports multiple HDIs on a single HCP tenant (one company, one tenant)


 Each HDI file system has an independent namespace for writing
 The HCP namespace for storing the system backup is shared by all HDIs

(Diagram: clients at Branch A, Branch B and Branch C write to file systems on their local
HDI; file systems FS 1 through FS 5 map to Namespaces 1 through 5 under the single
Tenant A, and all HDIs share one system backup namespace on the Hitachi Content
Platform)
© Hitachi Vantara Corporation 2018. All rights reserved.

This configuration of multiple HDIs sharing one tenant can be used in cloud situations.

A tenant represents a customer corporation. All HDIs of this corporation share the same tenant.


File System Migration Task

 Application writes a file to HDI

 HDI copies the file to HCP (on the schedule of the migration task) but does not delete the
file (HDI keeps a local copy)

 When the system capacity reaches 90%, HDI deletes the files in excess of the threshold
and creates 4 KB or 8 KB links ("stubs") to replace them

• Users access the files as they always have, since links are transparent to clients

• Depending on the path length of the file and the number of ACEs on the file, the size of
the stub file can be either 4 KB or 8 KB

(Diagram: the application writes and reads over CIFS/NFS to HDI; HDI replicates writes to
HCP and recalls reads from HCP over REST/HTTP(S))

© Hitachi Vantara Corporation 2018. All rights reserved.

• REST (Representational State Transfer) is a standard

• The devices communicate using the same HTTP verbs (GET, POST, PUT, DELETE and so
on) through HTTP or HTTPS

• It is the optimal protocol to access the HCP

• Reading a link recalls the file back into HDI

o Recalled files are deleted from HDI later and replaced by another link, based on
HDI system capacity


Stubs – File Restoration

 When the system capacity reaches 90%, HDI deletes the files in excess of the
threshold using an LRU (least recently used) algorithm and creates stubs to replace them

 If a user or application retrieves deleted files, HDI recovers the file data using
the stub metadata, performing a restore operation from the HCP namespace to
the HDI file system

(Diagram: a stub holding metadata remains in the HDI file system; on access, the file data is
restored from the HCP namespace)

© Hitachi Vantara Corporation 2018. All rights reserved.

Benefits:

A stub stores only the information required to restore user data quickly, which saves space in
the HDI file system for caching the most frequently accessed files.
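The stubbing behavior described above can be sketched as a least recently used selection: once used capacity crosses the threshold, the files idle longest are replaced by stubs until usage falls back under it. This is an illustrative simplification, not HDI's actual internal logic; the 90% threshold and 8 KB stub size are taken from this module, and the data model is an assumption.

```python
# Illustrative sketch of HDI-style stubbing: when the file system passes
# its capacity threshold (90% in this course), replace the least recently
# used files with small stubs until usage drops below the threshold.
STUB_SIZE = 8 * 1024  # stubs are 4 KB or 8 KB; assume the larger here

def select_files_to_stub(files, capacity, threshold=0.90):
    """files: list of dicts with 'name', 'size' and 'last_access' (epoch seconds)."""
    used = sum(f["size"] for f in files)
    to_stub = []
    # Oldest access time first = least recently used first
    for f in sorted(files, key=lambda f: f["last_access"]):
        if used <= capacity * threshold:
            break
        used -= f["size"] - STUB_SIZE   # file data replaced by a small stub
        to_stub.append(f["name"])
    return to_stub

files = [
    {"name": "a.doc", "size": 60_000, "last_access": 100},
    {"name": "b.doc", "size": 30_000, "last_access": 300},
    {"name": "c.doc", "size": 20_000, "last_access": 200},
]
print(select_files_to_stub(files, capacity=100_000))
```

Here only a.doc, the least recently accessed file, needs to be stubbed before usage drops under the 90% threshold.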


Hitachi NAS (HNAS) Data Migration to HCP

 HNAS can tier to HCP using Hitachi NAS Platform (HNAS) Data Migrator
Data Migration pointer types:

• Local File System pointer: [Handle for local file system]; CVL
• NFS pointer: [Remote server identifier]; CVL-2 (XVL)
• HTTP pointer: [Path/URL for HTTP]; CVL-2 (XVL); RO – WORM

© Hitachi Vantara Corporation 2018. All rights reserved.

CVL – cross volume link = stub, pointer; XVL – external cross volume link = stub, pointer
pointing outside HNAS; RO-WORM – Read Only – Write Once, Read Many
The 3 types of data migration targets are:

• Data migration (internal)

o 2 file systems associated with the same EVS

• Data migration (external)

o NFS targets

• Data migration (external)

o Hitachi Content Platform (HCP) and Atempo Digital Archiving (ADA) using HTTP


• Migration to HCP:

o On HNAS, an external path to http target (HCP) must be added using CLI

o Once the path is created, it is possible to set up HNAS Data Migrator rules
(policies)

o Data must be accessed only through HNAS


HNAS Data Migrator to Cloud (DM2C)

• Local File System pointer [Handle for local FS]: RW; CVL
• Data Migration NFS pointer [Remote server identifier]: RW; CVL-2 (XVL)
• Data Migration HTTP pointer [Path/URL for HTTP]: RO - WORM; CVL-2 (XVL)
• Data Migrator to Cloud HTTPS pointer [Path/URL for HTTPS]: RW; CVL-2 (XVL)

Note: Pointers, also known as stubs


© Hitachi Vantara Corporation 2018. All rights reserved.

Data migrator to cloud (DM2Cloud)

• This target class is an HTTPS-based URL cloud offering or service

• Before v12.3, the DM2Cloud path passed through the Linux MMB package

• From v12.3 and up, the aggregates on the FPGA board are used

• Data Migrator to Cloud uses the S3 API

• The target can be any public cloud service, but also Hitachi Content Platform (HCP) G10
or an HCP S30 node

• Data is available for read and write (the focus is not on tiering and retention, but on
expanding HNAS capacity inexpensively)


HCP Solution With HCP Anywhere

 HCP Anywhere provides 2 major features:


• File synchronization and sharing

• HDI Single Node, Virtual Machine Appliance (VMA), or Remote Server (RS)
device management

© Hitachi Vantara Corporation 2018. All rights reserved.

An HCP Anywhere system consists of both hardware and software and uses Hitachi Content
Platform (HCP) to store data.

It provides 2 major features:

• File synchronization and sharing

o This feature allows users to add files to HCP Anywhere and access those files
from nearly any location

o When users add files to HCP Anywhere, HCP Anywhere stores the files in an HCP
system and makes the files available through the user's computers, smartphones
and tablets

o Users can also share links to files that they have added to HCP Anywhere

• HDI device management

o This feature allows an administrator to remotely configure and monitor HDI


devices that have been deployed at multiple remote sites throughout an
enterprise


HCP Anywhere Architecture

(Diagram: browsers, mobile devices, HDI, LES and desktop clients connect over the Internet,
intranet or internal network, through load balancers in the DMZ, to the HCP Anywhere POD
over HTTPS; the POD consists of two application and DB servers, each running web servers
with a REST API, a sync server, a notification server and a database, with database
replication over a back-end network; the POD also connects to an Active Directory server and
other customer IT infrastructure such as DNS, NTP, virus scanning and so on)
© Hitachi Vantara Corporation 2018. All rights reserved.

HCP Anywhere is a sync-and-share gateway for HCP. It connects to HCP using the HTTP
protocol on the back end; on the front end, it provides secure applications for mobile devices
such as smartphones and tablets. Desktop applications are available too, as well as a
web-based GUI. Multiple platforms are supported. Client applications can be branded for a
specific customer. The HCP Anywhere solution consists of two servers. These servers can be
either Quanta servers or virtual machines running in VMware.

Hitachi Content Intelligence


This section covers information on Hitachi Content Intelligence.


A Solution to the Data Dilemma

HITACHI CONTENT INTELLIGENCE


a solution that transforms data into valuable
and relevant business information

© Hitachi Vantara Corporation 2018. All rights reserved.


Content Intelligence Does Three Specific Things

1. Connect
• Different data types
• Different data sources

2. Understand
• Extract all or parts
• Evaluate data value
• Enrich and augment
• Transform and index

3. Recommend
• Review and assess
• Visualize relationships
• Explore data
• Discover opportunities
• Decisions based on data
© Hitachi Vantara Corporation 2018. All rights reserved.

Regardless of how you use it, Hitachi Content Intelligence makes data-driven decision making
as easy as 1, 2, 3.

In the slide above, you can see just how important it is to make the shift from simply
connecting data to instead focus on how the business can more effectively and efficiently
search for what they need, understand the relationships between different data sets in an effort
to surface valuable insights that they can act on.

With today’s structured and unstructured data being viewed from multiple perspectives, finding
new and unexpected patterns are what will help your business find the new solutions to the
complex problems, market dynamics, new opportunity identification, and more.


Data Connections: Connecting the Dots

The right data, at the right time

Focuses on critical connections

Aggregates from multiple sources

Supports multiple data types

Distributes it across multiple outputs

To Construct a Centralized Data Hub


© Hitachi Vantara Corporation 2018. All rights reserved.

Understanding: Transforming Data

 Smooth data to remove noise

 Replace low-level data with higher-level concepts

 Normalize data to fall within a smaller range of classification

 Aggregate multiple data types and sources into one location

 Construct and insert new attributes

To Turn Data Into Useful Information

© Hitachi Vantara Corporation 2018. All rights reserved.


Recommend Enabling Data-Driven Decisions

• Transactional Data (financial trades, log data, shopping data): fraud detection, data
exploration and regulatory investigation

• Behavioral Data (geospatial, mobile data): personalization and recommendation

• Edge Data (industrial IoT, device data, sensors, signals): edge analytics and operational
intelligence
© Hitachi Vantara Corporation 2018. All rights reserved.

Access Two Easy-to-Use Interfaces

Workflow Designer & Administration Hitachi Content Search

© Hitachi Vantara Corporation 2018. All rights reserved.


Workflow Designer Transforms and Enriches Data

 Wizard driven experience

 Drag and drop conditional pipeline builder

 Create and manage multiple workflows

 Text and meta data management stages


• Analyze, extract and transform
• Store, filter and enrich
• Content classes
• Conditional and custom plugin-API

 Reporting and metrics


© Hitachi Vantara Corporation 2018. All rights reserved.

Hitachi Content Intelligence: Workflows

(Diagram: unstructured, structured and semi-structured data is crawled from data
connections, flows through a workflow pipeline of stages that extract and transform it, and is
written to an index collection)

 Workflows: inputs + pipeline + index
 Data Connections: Sources that are crawled
 Processing Pipelines: Extract, transform, enrich data
 Index Collection: Where results are stored
 Content Classes: Used to extract bits of data as fields
© Hitachi Vantara Corporation 2018. All rights reserved.
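The workflow model above (inputs + pipeline + index) can be sketched as a chain of stage functions feeding an index collection. The stage functions and the in-memory dict "index" below are purely illustrative teaching constructs; they are not the Content Intelligence plugin API.

```python
# Illustrative sketch of the workflow model: documents from a data
# connection pass through pipeline stages (extract, enrich) and land in
# an index collection. Stage names and the dict "index" are assumptions.
def extract_extension(doc):
    doc["ext"] = doc["name"].rsplit(".", 1)[-1].lower()
    return doc

def enrich_category(doc):
    doc["category"] = "image" if doc["ext"] in ("jpg", "png") else "document"
    return doc

def run_workflow(docs, pipeline):
    index = {}
    for doc in docs:
        for stage in pipeline:       # each stage transforms the document
            doc = stage(doc)
        index[doc["name"]] = doc     # index collection keyed by name
    return index

docs = [{"name": "scan.PNG"}, {"name": "report.pdf"}]
index = run_workflow(docs, [extract_extension, enrich_category])
print(index["scan.PNG"]["category"])
```

Real pipelines would also apply conditional routing and content classes; the point here is only the stage-chaining shape of a workflow.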


Admin Interface Manages the System

 Security configuration and management


• LDAP
• Active Directory
• Certificates
 User roles and group management
 Managed extensibility
• Custom connectors
• Custom pipeline stages
• Custom indexers
 System and infrastructure monitoring

© Hitachi Vantara Corporation 2018. All rights reserved.

Content Search Enables the End-User

 Deliver results that are contextually relevant
• Relevancy boosting
• Query settings
• Results formatting

 Compare results

 Customizable views

 Filtering, facets and type-ahead

© Hitachi Vantara Corporation 2018. All rights reserved.

Auto training mode: System can be trained to understand unstructured data with well-known
format and custom formats.


Highly Scalable With Deployment Flexibility

(Diagram: Hitachi Content Intelligence runs as a grid of micro-services on a 64-bit Linux
host OS with Docker; the deployment mediums are physical servers, virtual machines, or the
cloud)

© Hitachi Vantara Corporation 2018. All rights reserved.

A Toolset to Enable Data-Specific User Experiences

(Diagram: Hitachi Content Search (an HDDS replacement), custom apps and integrations,
and partner apps and integrations all sit on top of Hitachi Content Intelligence, a highly
available cluster of micro-service instances running on a host OS with Docker, with physical,
virtual, or cloud-hosted deployment mediums)

HDDS = Hitachi Data Discovery Suite

Other Solutions
This section covers other solutions.


HCP Integration With ISV Middleware

 Content producing applications
• Microsoft Exchange, Lotus Notes
• Files (from various applications)

 Data movement middleware applications (ISVs)
• Identifies individual emails or files
• Creates metadata
o Email: to, from, cc, bcc, header
o Files: name, size, created date
• Sets policies
• Moves files

(Diagram: the content-producing application keeps an email with pointers to its attachments;
the ISV middleware moves the attachments and metadata to HCP. On the application side
there is no retention, so the user can delete at any time; on HCP a retention setting is
applied, so the content cannot be deleted by the user)

© Hitachi Vantara Corporation 2018. All rights reserved.

List of ISV Partners

© Hitachi Vantara Corporation 2018. All rights reserved.

• Supports multiple applications and content types

• Embedded full-text indexing and search

• High-performance, scalable, and secure storage


Software Partners Complete the Solution (100+ Partners)

• ECM/ERM
• Email
• File
• Health care
• Database/ERP
• Mainframe
• Security/Logging/CDR
• Voice Logging

© Hitachi Vantara Corporation 2018. All rights reserved.

Module Summary

 In this module, you should have learned to:


• Create a solution for branch offices using Hitachi Data Ingestor (HDI)
• Describe and use Hitachi Content Platform (HCP) Anywhere
• Identify the components of Hitachi Content Intelligence
• Identify HCP Integration with Independent Software Vendors (ISVs)
Middleware

© Hitachi Vantara Corporation 2018. All rights reserved.


Module Review

1. What solution supports ROBO deployments?

2. How many HDI configurations are there?

3. What hardware is required to deploy HCP-AW?

4. What targets are supported by StorFirst Apollo?

© Hitachi Vantara Corporation 2018. All rights reserved.

Your Next Steps

Validate your knowledge and skills with certification.

Follow us on social media: @HitachiVantara

Check your progress in the Learning Path.

Review the course description for supplemental courses, or register, enroll and view
additional course offerings at Hitachi University / Hitachi Vantara Learning Center.

Get practical advice and insight with Hitachi Vantara white papers.

Join the conversation with your peers in the Hitachi Vantara Community.

© Hitachi Vantara Corporation 2018. All rights reserved.

Certification: https://www.hitachivantara.com/en-us/services/training-
certification.html#certification


Learning Paths:

• Customer Learning Path (North America, Latin America, and APAC):


http://www.hitachivantara.com/assets/pdf/hitachi-data-systems-academy-customer-
learning-paths.pdf

• Customer Learning Path (EMEA): http://www.hitachivantara.com/assets/pdf/hitachi-


data-systems-academy-customer-training.pdf

• All Partners Learning Paths: https://partner.hitachivantara.com/

• Employee Learning Paths: https://community.hitachivantara.com/community/help-and-


feedback

For Employees, go to Hitachi University: https://hitachi.csod.com/client/hitachi/default.aspx

For Customers or Partners, go to Hitachi Vantara Learning Center:


https://hitachi.csod.com/client/hitachi/default.aspx

• White Papers: http://www.hitachivantara.com/corporate/resources/

• For Customers, Partners, Employees – Hitachi Vantara Community:

• https://community.hitachivantara.com/welcome

• For Customers, Partners, Employees –Link to Hitachi Vantara Twitter:


http://www.twitter.com/HitachiVantara

Training Course Glossary
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

—A— AIX — IBM UNIX.


AaaS — Archive as a Service. A cloud computing AL — Arbitrated Loop. A network in which nodes
business model. contend to send data, and only 1 node at a
AAMux — Active-Active Multiplexer. time is able to send data.

ACC — Action Code. A SIM (System Information AL-PA — Arbitrated Loop Physical Address.
Message). AMS — Adaptable Modular Storage.
ACE — Access Control Entry. Stores access rights APAR — Authorized Program Analysis Reports.
for a single user or group within the APF — Authorized Program Facility. In IBM z/OS
Windows security model. and OS/390 environments, a facility that
ACL — Access Control List. Stores a set of ACEs, permits the identification of programs that
so that it describes the complete set of access are authorized to use restricted functions.
rights for a file system object within the API — Application Programming Interface.
Microsoft Windows security model.
APID — Application Identification. An ID to
ACP ― Array Control Processor. Microprocessor identify a command device.
mounted on the disk adapter circuit board
(DKA) that controls the drives in a specific Application Management — The processes that
disk array. Considered part of the back end; manage the capacity and performance of
it controls data transfer between cache and applications.
the hard drives. ARB — Arbitration or request.
ACP Domain ― Also Array Domain. All of the ARM — Automated Restart Manager.
array-groups controlled by the same pair of Array Domain — Also ACP Domain. All
DKA boards, or the HDDs managed by 1 functions, paths, and disk drives controlled
ACP PAIR (also called BED). by a single ACP pair. An array domain can
ACP PAIR ― Physical disk access control logic. contain a variety of LVI or LU
Each ACP consists of 2 DKA PCBs to configurations.
provide 8 loop paths to the real HDDs. Array Group — Also called a parity group. A
Actuator (arm) — Read/write heads are attached group of hard disk drives (HDDs) that form
to a single head actuator, or actuator arm, the basic unit of storage in a subsystem. All
that moves the heads around the platters. HDDs in a parity group must have the same
AD — Active Directory. physical capacity.

ADC — Accelerated Data Copy. Array Unit — A group of hard disk drives in 1
RAID structure. Same as parity group.
Address — A location of data, usually in main
memory or on a disk. A name or token that ASIC — Application specific integrated circuit.
identifies a network component. In local area ASSY — Assembly.
networks (LANs), for example, every node Asymmetric virtualization — See Out-of-band
has a unique address. virtualization.
ADP — Adapter. Asynchronous — An I/O operation whose
ADS — Active Directory Service. initiator does not await its completion before

Page G-1
proceeding with other work. Asynchronous I/O operations enable an initiator to have multiple concurrent I/O operations in progress. Also called Out-of-band virtualization.
ATA — Advanced Technology Attachment. A disk drive implementation that integrates the controller on the disk drive itself. Also known as IDE (Integrated Drive Electronics) Advanced Technology Attachment.
ATR — Autonomic Technology Refresh.
Authentication — The process of identifying an individual, usually based on a username and password.
AUX — Auxiliary Storage Manager.
Availability — Consistent direct access to information over time.
-back to top-
—B—
B4 — A group of 4 HDU boxes that are used to contain 128 HDDs.
BA — Business analyst.
Back end — In client/server applications, the client part of the program is often called the front end and the server part is called the back end.
Backup image — Data saved during an archive operation. It includes all the associated files, directories, and catalog information of the backup operation.
BADM — Basic Direct Access Method.
BASM — Basic Sequential Access Method.
BATCTR — Battery Control PCB.
BC — (1) Business Class (in contrast with EC, Enterprise Class). (2) Business coordinator.
BCP — Base Control Program.
BCPii — Base Control Program internal interface.
BDW — Block Descriptor Word.
BED — Back end director. Controls the paths to the HDDs.
Big Data — Refers to data that becomes so large in size or quantity that a dataset becomes awkward to work with using traditional database management systems. Big data entails data capacity or measurement that requires terms such as Terabyte (TB), Petabyte (PB), Exabyte (EB), Zettabyte (ZB) or Yottabyte (YB). Note that variations of this term are subject to proprietary trademark disputes in multiple countries at the present time.
BIOS — Basic Input/Output System. A chip located on all computer motherboards that governs how a system boots and operates.
BLKSIZE — Block size.
BLOB — Binary Large OBject.
BP — Business processing.
BPaaS — Business Process as a Service. A cloud computing business model.
BPAM — Basic Partitioned Access Method.
BPM — Business Process Management.
BPO — Business Process Outsourcing. Dynamic BPO services refer to the management of partly standardized business processes, including human resources delivered in a pay-per-use billing relationship or a self-service consumption model.
BST — Binary Search Tree.
BSTP — Blade Server Test Program.
BTU — British Thermal Unit.
Business Continuity Plan — Describes how an organization will resume partially or completely interrupted critical functions within a predetermined time after a disruption or a disaster. Sometimes also called a Disaster Recovery Plan.
-back to top-
—C—
CA — (1) Continuous Access software (see HORC), (2) Continuous Availability or (3) Computer Associates.
Cache — Cache Memory. Intermediate buffer between the channels and drives. It is generally available and controlled as two areas of cache (cache A and cache B). It may be battery-backed.
Cache hit rate — When data is found in the cache, it is called a cache hit, and the effectiveness of a cache is judged by its hit rate.
Cache partitioning — Storage management software that allows the virtual partitioning of cache and allocation of it to different applications.
CAD — Computer-Aided Design.
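The Cache hit rate entry above can be made concrete with a small model. The sketch below is illustrative only (the class name, the LRU eviction policy and the workload are our own assumptions, not taken from any Hitachi product): it serves reads from a bounded cache and reports the fraction of requests that were hits.

```python
from collections import OrderedDict

class LruReadCache:
    """Minimal LRU read cache that tracks its own hit rate."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # block id -> data, oldest first
        self.hits = 0
        self.requests = 0

    def read(self, block_id, backend):
        self.requests += 1
        if block_id in self.store:           # cache hit
            self.hits += 1
            self.store.move_to_end(block_id)
            return self.store[block_id]
        data = backend(block_id)             # cache miss: go to the drives
        self.store[block_id] = data
        if len(self.store) > self.capacity:  # evict the least recently used block
            self.store.popitem(last=False)
        return data

    @property
    def hit_rate(self):
        return self.hits / self.requests if self.requests else 0.0

cache = LruReadCache(capacity=2)
disk = lambda block: f"data-{block}"         # stand-in for the back end
for block in [1, 2, 1, 3, 1]:
    cache.read(block, disk)
```

Of the five reads in this trace, two are served from cache (hit rate 0.4); a real array cache reports the same ratio, just measured in hardware across millions of I/Os.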
CAGR — Compound Annual Growth Rate.
Capacity — Capacity is the amount of data that a storage system or drive can store after configuration and/or formatting.
Most data storage companies, including HDS, calculate capacity based on the premise that 1KB = 1,024 bytes, 1MB = 1,024 kilobytes, 1GB = 1,024 megabytes, and 1TB = 1,024 gigabytes. See also Terabyte (TB), Petabyte (PB), Exabyte (EB), Zettabyte (ZB) and Yottabyte (YB).
CAPEX — Capital expenditure. The cost of developing or providing non-consumable parts for the product or system. For example, the purchase of a photocopier is the CAPEX, and the annual paper and toner cost is the OPEX. (See OPEX).
CAS — (1) Column Address Strobe. A signal sent by the processor to a dynamic random access memory (DRAM) circuit to indicate that an associated address is a column address and to activate it. (2) Content-addressable Storage.
CBI — Cloud-based Integration. Provisioning of a standardized middleware platform in the cloud that can be used for various cloud integration scenarios.
An example would be the integration of legacy applications into the cloud or integration of different cloud-based applications into one application.
CBU — Capacity Backup.
CBX — Controller chassis (box).
CCHH — Common designation for Cylinder and Head.
CCI — Command Control Interface.
CCIF — Cloud Computing Interoperability Forum. A standards organization active in cloud computing.
CDP — Continuous Data Protection.
CDR — Clinical Data Repository.
CDWP — Cumulative disk write throughput.
CE — Customer Engineer.
CEC — Central Electronics Complex.
CentOS — Community Enterprise Operating System.
Centralized management — Storage data management, capacity management, access security management, and path management functions accomplished by software.
CF — Coupling Facility.
CFCC — Coupling Facility Control Code.
CFW — Cache Fast Write.
CH — Channel.
CH S — Channel SCSI.
CHA — Channel Adapter. Provides the channel interface control functions and internal cache data transfer functions. It is used to convert the data format between CKD and FBA. The CHA contains an internal processor and 128 bytes of edit buffer memory. Replaced by CHB in some cases.
CHA/DKA — Channel Adapter/Disk Adapter.
CHAP — Challenge-Handshake Authentication Protocol.
CHB — Channel Board. Updated CHA for Hitachi Unified Storage VM and additional enterprise components.
Chargeback — A cloud computing term that refers to the ability to report on capacity and utilization by application or dataset, charging business users or departments based on how much they use.
CHF — Channel Fibre.
CHIP — Client-Host Interface Processor. Microprocessors on the CHA boards that process the channel commands from the hosts and manage host access to cache.
CHK — Check.
CHN — Channel adapter NAS.
CHP — Channel Processor or Channel Path.
CHPID — Channel Path Identifier.
CHSN or C-HSN — Cache Memory Hierarchical Star Network.
CHT — Channel tachyon. A Fibre Channel protocol controller.
CICS — Customer Information Control System.
CIFS protocol — Common internet file system is a platform-independent file sharing system. A network file system access protocol primarily used by Windows clients to communicate file access requests to Windows servers.
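The Capacity entry above notes that most storage vendors, including HDS, use the binary convention (1KB = 1,024 bytes). The arithmetic can be sketched as follows; the function name and unit table are ours, for illustration only, and the decimal branch is included to show how the two conventions diverge.

```python
def capacity(value, unit, binary=True):
    """Convert a capacity such as (4, 'TB') to raw bytes.

    binary=True uses the glossary's convention (1KB = 1,024 bytes);
    binary=False uses the decimal convention (1kB = 1,000 bytes).
    """
    units = ["B", "KB", "MB", "GB", "TB", "PB", "EB"]
    base = 1024 if binary else 1000
    return value * base ** units.index(unit)

assert capacity(1, "KB") == 1024
assert capacity(1, "TB") == 1024 ** 4          # 1,099,511,627,776 bytes
assert capacity(1, "TB", binary=False) == 10 ** 12
```

The gap widens with each prefix: a binary "terabyte" is about 10% larger than a decimal one, and a binary "exabyte" about 15% larger, which is why formatted capacity never matches the number printed on the drive label.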
CIM — Common Information Model.
CIS — Clinical Information System.
CKD — Count-key Data. A format for encoding data on hard disk drives; typically used in the mainframe environment.
CKPT — Check Point.
CL — See Cluster.
CLI — Command Line Interface.
CLPR — Cache Logical Partition. Cache can be divided into multiple virtual cache memories to lessen I/O contention.
Cloud Computing — “Cloud computing refers to applications and services that run on a distributed network using virtualized resources and accessed by common Internet protocols and networking standards. It is distinguished by the notion that resources are virtual and limitless, and that details of the physical systems on which software runs are abstracted from the user.” — Source: Cloud Computing Bible, Barrie Sosinsky (2011)
Cloud computing often entails an “as a service” business model that may entail one or more of the following:
• Archive as a Service (AaaS)
• Business Process as a Service (BPaaS)
• Failure as a Service (FaaS)
• Infrastructure as a Service (IaaS)
• IT as a Service (ITaaS)
• Platform as a Service (PaaS)
• Private File Tiering as a Service (PFTaaS)
• Software as a Service (SaaS)
• SharePoint as a Service (SPaaS)
• SPI refers to the Software, Platform and Infrastructure as a Service business model.
Cloud network types include the following:
• Community cloud (or community network cloud)
• Hybrid cloud (or hybrid network cloud)
• Private cloud (or private network cloud)
• Public cloud (or public network cloud)
• Virtual private cloud (or virtual private network cloud)
Cloud Enabler — A concept, product or solution that enables the deployment of cloud computing. Key cloud enablers include:
• Data discoverability
• Data mobility
• Data protection
• Dynamic provisioning
• Location independence
• Multitenancy to ensure secure privacy
• Virtualization
Cloud Fundamental — A core requirement to the deployment of cloud computing. Cloud fundamentals include:
• Self service
• Pay per use
• Dynamic scale up and scale down
Cloud Security Alliance — A standards organization active in cloud computing.
Cluster — A collection of computers that are interconnected (typically at high-speeds) for the purpose of improving reliability, availability, serviceability or performance (via load balancing). Often, clustered computers have access to a common pool of storage and run special software to coordinate the component computers' activities.
CM — Cache Memory, Cache Memory Module. Intermediate buffer between the channels and drives. It has a maximum of 64GB (32GB x 2 areas) of capacity. It is available and controlled as 2 areas of cache (cache A and cache B). It is fully battery-backed (48 hours).
CM DIR — Cache Memory Directory.
CME — Communications Media and Entertainment.
CM-HSN — Control Memory Hierarchical Star Network.
CM PATH — Cache Memory Access Path. Access path from the processors of CHA, DKA PCB to Cache Memory.
CM PK — Cache Memory Package.
CM/SM — Cache Memory/Shared Memory.
CMA — Cache Memory Adapter.
CMD — Command.
CMG — Cache Memory Group.
CNAME — Canonical NAME.
CNS — Cluster Name Space or Clustered Name Space.
CNT — Cumulative network throughput.
CoD — Capacity on Demand.
Community Network Cloud — Infrastructure shared between several organizations or groups with common concerns.
Concatenation — A logical joining of 2 series of data, usually represented by the symbol “|”. In data communications, 2 or more data are often concatenated to provide a unique name or reference (e.g., S_ID | X_ID). Volume managers concatenate disk address spaces to present a single larger address space.
Connectivity technology — A program or device's ability to link with other programs and devices. Connectivity technology allows programs on a given computer to run routines or access objects on another remote computer.
Controller — A device that controls the transfer of data from a computer to a peripheral device (including a storage system) and vice versa.
Controller-based virtualization — Driven by the physical controller at the hardware microcode level versus at the application software layer and integrates into the infrastructure to allow virtualization across heterogeneous storage and third party products.
Corporate governance — Organizational compliance with government-mandated regulations.
CP — Central Processor (also called Processing Unit or PU).
CPC — Central Processor Complex.
CPM — Cache Partition Manager. Allows for partitioning of the cache and assigns a partition to a LU; this enables tuning of the system’s performance.
CPOE — Computerized Physician Order Entry (Provider Ordered Entry).
CPS — Cache Port Slave.
CPU — Central Processing Unit.
CRM — Customer Relationship Management.
CSS — Channel Subsystem.
CS&S — Customer Service and Support.
CSTOR — Central Storage or Processor Main Memory.
C-Suite — The C-suite is considered the most important and influential group of individuals at a company. Referred to as “the C-Suite within a Healthcare provider.”
CSV — Comma Separated Value or Cluster Shared Volume.
CSVP — Customer-specific Value Proposition.
CSW — Cache Switch PCB. The cache switch (CSW) connects the channel adapter or disk adapter to the cache. Each of them is connected to the cache by the Cache Memory Hierarchical Star Net (C-HSN) method. Each cluster is provided with the 2 CSWs, and each CSW can connect 4 caches. The CSW switches any of the cache paths to which the channel adapter or disk adapter is to be connected through arbitration.
CTG — Consistency Group.
CTL — Controller module.
CTN — Coordinated Timing Network.
CU — Control Unit (refers to a storage subsystem. The hexadecimal number to which 256 LDEVs may be assigned).
CUDG — Control Unit Diagnostics. Internal system tests.
CUoD — Capacity Upgrade on Demand.
CV — Custom Volume.
CVS — Customizable Volume Size. Software used to create custom volume sizes. Marketed under the name Virtual LVI (VLVI) and Virtual LUN (VLUN).
CWDM — Coarse Wavelength Division Multiplexing.
CXRC — Coupled z/OS Global Mirror.
-back to top-
—D—
DA — Device Adapter.
DACL — Discretionary access control list (ACL). The part of a security descriptor that stores access rights for users and groups.
DAD — Device Address Domain. Indicates a site of the same device number automation support function. If several hosts on the same site have the same device number system, they have the same name.
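The Concatenation entry above describes volume managers joining disk address spaces to present a single larger space. A minimal sketch of that mapping follows (the function and the extent layout are invented for illustration, not drawn from any Hitachi volume manager): logical blocks are numbered straight through the first extent, then continue into the second, and so on.

```python
def locate(lba, extents):
    """Map a logical block address onto a concatenation of extents.

    extents is a list of (disk_name, size_in_blocks): blocks
    0..size1-1 live on the first extent, the next size2 blocks on
    the second, and so on. Illustrative only; real volume managers
    track far more state than this.
    """
    offset = lba
    for disk, size in extents:
        if offset < size:
            return disk, offset
        offset -= size          # skip past this extent
    raise ValueError("address beyond end of concatenated volume")

# a 150-block volume concatenated from a 100-block and a 50-block extent
vol = [("disk0", 100), ("disk1", 50)]
assert locate(0, vol) == ("disk0", 0)
assert locate(120, vol) == ("disk1", 20)
```

Unlike striping, concatenation fills one extent completely before touching the next, so it grows capacity without spreading any single transfer across disks.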
DAP — Data Access Path. Also known as Zero Copy Failover (ZCF).
DAS — Direct Attached Storage.
DASD — Direct Access Storage Device.
Data block — A fixed-size unit of data that is transferred together. For example, the X-modem protocol transfers blocks of 128 bytes. In general, the larger the block size, the faster the data transfer rate.
Data Duplication — Software duplicates data, as in remote copy or PiT snapshots. Maintains 2 copies of data.
Data Integrity — Assurance that information will be protected from modification and corruption.
Data Lifecycle Management — An approach to information and storage management. The policies, processes, practices, services and tools used to align the business value of data with the most appropriate and cost-effective storage infrastructure from the time data is created through its final disposition. Data is aligned with business requirements through management policies and service levels associated with performance, availability, recoverability, cost, and whatever parameters the organization defines as critical to its operations.
Data Migration — The process of moving data from 1 storage device to another. In this context, data migration is the same as Hierarchical Storage Management (HSM).
Data Pipe or Data Stream — The connection set up between the MediaAgent, source or destination server is called a Data Pipe or more commonly a Data Stream.
Data Pool — A volume containing differential data only.
Data Protection Directive — A major compliance and privacy protection initiative within the European Union (EU) that applies to cloud computing. Includes the Safe Harbor Agreement.
Data Stream — CommVault’s patented high performance data mover used to move data back and forth between a data source and a MediaAgent or between 2 MediaAgents.
Data Striping — Disk array data mapping technique in which fixed-length sequences of virtual disk data addresses are mapped to sequences of member disk addresses in a regular rotating pattern.
Data Transfer Rate (DTR) — The speed at which data can be transferred. Measured in kilobytes per second for a CD-ROM drive, in bits per second for a modem, and in megabytes per second for a hard drive. Also often called data rate.
DBL — Drive box.
DBMS — Data Base Management System.
DBX — Drive box.
DCA — Data Cache Adapter.
DCTL — Direct coupled transistor logic.
DDL — Database Definition Language.
DDM — Disk Drive Module.
DDNS — Dynamic DNS.
DDR3 — Double data rate 3.
DE — Data Exchange Software.
Device Management — Processes that configure and manage storage systems.
DFS — Microsoft Distributed File System.
DFSMS — Data Facility Storage Management Subsystem.
DFSM SDM — Data Facility Storage Management Subsystem System Data Mover.
DFSMSdfp — Data Facility Storage Management Subsystem Data Facility Product.
DFSMSdss — Data Facility Storage Management Subsystem Data Set Services.
DFSMShsm — Data Facility Storage Management Subsystem Hierarchical Storage Manager.
DFSMSrmm — Data Facility Storage Management Subsystem Removable Media Manager.
DFSMStvs — Data Facility Storage Management Subsystem Transactional VSAM Services.
DFW — DASD Fast Write.
DICOM — Digital Imaging and Communications in Medicine.
DIMM — Dual In-line Memory Module.
Direct Access Storage Device (DASD) — A type of storage device, in which bits of data are stored at precise locations, enabling the computer to retrieve information directly without having to scan a series of records.
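The Data Striping entry above describes fixed-length sequences of virtual disk addresses mapped to member disks in a regular rotating pattern. That address arithmetic can be sketched for a simple RAID-0 style layout (function name and parameters are ours, for illustration only):

```python
def stripe_map(lba, n_disks, stripe_blocks):
    """Map a virtual block address to (member disk, block on that disk)
    for a rotating stripe layout: stripe 0 on disk 0, stripe 1 on
    disk 1, ..., wrapping back to disk 0. Illustrative sketch only.
    """
    stripe_no, within = divmod(lba, stripe_blocks)
    disk = stripe_no % n_disks                        # rotate across members
    disk_block = (stripe_no // n_disks) * stripe_blocks + within
    return disk, disk_block

# 4 member disks, 8-block stripes
assert stripe_map(0, 4, 8) == (0, 0)
assert stripe_map(8, 4, 8) == (1, 0)    # next stripe rotates to disk 1
assert stripe_map(33, 4, 8) == (0, 9)   # second pass over disk 0
```

Because consecutive stripes land on consecutive member disks, a large sequential transfer is spread across all members and serviced in parallel, which is the performance rationale behind striping.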
Direct Attached Storage (DAS) — Storage that is directly attached to the application or file server. No other device on the network can access the stored data.
Director class switches — Larger switches often used as the core of large switched fabrics.
Disaster Recovery Plan (DRP) — A plan that describes how an organization will deal with potential disasters. It may include the precautions taken to either maintain or quickly resume mission-critical functions. Sometimes also referred to as a Business Continuity Plan.
Disk Administrator — An administrative tool that displays the actual LU storage configuration.
Disk Array — A linked group of 1 or more physical independent hard disk drives generally used to replace larger, single disk drive systems. The most common disk arrays are in daisy chain configuration or implement RAID (Redundant Array of Independent Disks) technology.
A disk array may contain several disk drive trays, and is structured to improve speed and increase protection against loss of data. Disk arrays organize their data storage into Logical Units (LUs), which appear as linear block spaces to their clients. A small disk array, with a few disks, might support up to 8 LUs; a large one, with hundreds of disk drives, can support thousands.
DKA — Disk Adapter. Also called an array control processor (ACP). It provides the control functions for data transfer between drives and cache. The DKA contains DRR (Data Recover and Reconstruct), a parity generator circuit. Replaced by DKB in some cases.
DKB — Disk Board. Updated DKA for Hitachi Unified Storage VM and additional enterprise components.
DKC — Disk Controller Unit. In a multi-frame configuration, the frame that contains the front end (control and memory components).
DKCMN — Disk Controller Monitor. Monitors temperature and power status throughout the machine.
DKF — Fibre disk adapter. Another term for a DKA.
DKU — Disk Array Frame or Disk Unit. In a multi-frame configuration, a frame that contains hard disk units (HDUs).
DKUP — Disk Unit Power Supply.
DKUPS — Disk Unit Power Supply.
DLIBs — Distribution Libraries.
DLM — Data Lifecycle Management.
DMA — Direct Memory Access.
DM-LU — Differential Management Logical Unit. DM-LU is used for saving management information of the copy functions in the cache.
DMP — Disk Master Program.
DMT — Dynamic Mapping Table.
DMTF — Distributed Management Task Force. A standards organization active in cloud computing.
DNS — Domain Name System.
DOC — Deal Operations Center.
Domain — A number of related storage array groups.
DOO — Degraded Operations Objective.
DP — Dynamic Provisioning (pool).
DP-VOL — Dynamic Provisioning Virtual Volume.
DPL — (1) (Dynamic) Data Protection Level or (2) Denied Persons List.
DR — Disaster Recovery.
DRAC — Dell Remote Access Controller.
DRAM — Dynamic random access memory.
DRP — Disaster Recovery Plan.
DRR — Data Recover and Reconstruct. Data Parity Generator chip on DKA.
DRV — Dynamic Reallocation Volume.
DSB — Dynamic Super Block.
DSF — Device Support Facility.
DSF INIT — Device Support Facility Initialization (for DASD).
DSP — Disk Slave Program.
DT — Disaster tolerance.
DTA — Data adapter and path to cache-switches.
DTR — Data Transfer Rate.
DVE — Dynamic Volume Expansion.
DW — Duplex Write.
DWDM — Dense Wavelength Division Multiplexing.
DWL — Duplex Write Line or Dynamic Workspace Linking.
-back to top-
—E—
EAL — Evaluation Assurance Level (EAL1 through EAL7). The EAL of an IT product or system is a numerical security grade assigned following the completion of a Common Criteria security evaluation, an international standard in effect since 1999.
EAV — Extended Address Volume.
EB — Exabyte.
EC — Enterprise Class (in contrast with BC, Business Class).
ECC — Error Checking and Correction.
ECC.DDR SDRAM — Error Correction Code Double Data Rate Synchronous Dynamic RAM Memory.
ECM — Extended Control Memory.
ECN — Engineering Change Notice.
E-COPY — Serverless or LAN free backup.
EFI — Extensible Firmware Interface. EFI is a specification that defines a software interface between an operating system and platform firmware. EFI runs on top of BIOS when a LPAR is activated.
EHR — Electronic Health Record.
EIG — Enterprise Information Governance.
EMIF — ESCON Multiple Image Facility.
EMPI — Electronic Master Patient Identifier. Also known as MPI.
Emulation — In the context of Hitachi Data Systems enterprise storage, emulation is the logical partitioning of an Array Group into logical devices.
EMR — Electronic Medical Record.
ENC — Enclosure or Enclosure Controller. The units that connect the controllers with the Fibre Channel disks. They also allow for online extending a system by adding RKAs.
EOF — End of Field.
EOL — End of Life.
EPO — Emergency Power Off.
EREP — Error REPorting and Printing.
ERP — Enterprise Resource Planning.
ESA — Enterprise Systems Architecture.
ESB — Enterprise Service Bus.
ESC — Error Source Code.
ESCD — ESCON Director.
ESCON — Enterprise Systems Connection. An input/output (I/O) interface for mainframe computer connections to storage devices developed by IBM.
ESD — Enterprise Systems Division (of Hitachi).
ESDS — Entry Sequence Data Set.
ESS — Enterprise Storage Server.
ESW — Express Switch or E Switch. Also referred to as the Grid Switch (GSW).
Ethernet — A local area network (LAN) architecture that supports clients and servers and uses twisted pair cables for connectivity.
ETR — External Time Reference (device).
EVS — Enterprise Virtual Server.
Exabyte (EB) — A measurement of data or data storage. 1EB = 1,024PB.
EXCP — Execute Channel Program.
ExSA — Extended Serial Adapter.
-back to top-
—F—
FaaS — Failure as a Service. A proposed business model for cloud computing in which large-scale, online failure drills are provided as a service in order to test real cloud deployments. Concept developed by the College of Engineering at the University of California, Berkeley in 2011.
Fabric — The hardware that connects workstations and servers to storage devices in a SAN is referred to as a "fabric." The SAN fabric enables any-server-to-any-storage device connectivity through the use of Fibre Channel switching technology.
Failback — The restoration of a failed system share of a load to a replacement component. For example, when a failed controller in a redundant configuration is replaced, the devices that were originally controlled by the failed controller are usually failed back to the replacement controller to restore the I/O balance, and to restore failure tolerance.
Similarly, when a defective fan or power supply is replaced, its load, previously borne by a redundant component, can be failed back to the replacement part.
Failed over — A mode of operation for failure-tolerant systems in which a component has failed and its function has been assumed by a redundant component. A system that protects against single failures operating in failed over mode is not failure tolerant, as failure of the redundant component may render the system unable to function. Some systems (e.g., clusters) are able to tolerate more than 1 failure; these remain failure tolerant until no redundant component is available to protect against further failures.
Failover — A backup operation that automatically switches to a standby database server or network if the primary system fails, or is temporarily shut down for servicing. Failover is an important fault tolerance function of mission-critical systems that rely on constant accessibility. Also called path failover.
Failure tolerance — The ability of a system to continue to perform its function, possibly at a reduced performance level, when 1 or more of its components has failed. Failure tolerance in disk subsystems is often achieved by including redundant instances of components whose failure would make the system inoperable, coupled with facilities that allow the redundant components to assume the function of failed ones.
FAIS — Fabric Application Interface Standard.
FAL — File Access Library.
FAT — File Allocation Table.
Fault Tolerant — Describes a computer system or component designed so that, in the event of a component failure, a backup component or procedure can immediately take its place with no loss of service. Fault tolerance can be provided with software, embedded in hardware, or provided by a hybrid combination.
FBA — Fixed-block Architecture. Physical disk sector mapping.
FBA/CKD Conversion — The process of converting open-system data in FBA format to mainframe data in CKD format.
FBUS — Fast I/O Bus.
FC — Fibre Channel or Field-Change (microcode update). A technology for transmitting data between computer devices; a set of standards for a serial I/O bus capable of transferring data between 2 ports.
FC RKAJ — Fibre Channel Rack Additional. Module system acronym refers to an additional rack unit that houses additional hard drives exceeding the capacity of the core RK unit.
FC-0 — Lowest layer on fibre channel transport. This layer represents the physical media.
FC-1 — This layer contains the 8b/10b encoding scheme.
FC-2 — This layer handles framing and protocol, frame format, sequence/exchange management and ordered set usage.
FC-3 — This layer contains common services used by multiple N_Ports in a node.
FC-4 — This layer handles standards and profiles for mapping upper level protocols like SCSI and IP onto the Fibre Channel Protocol.
FCA — Fibre Adapter. Fibre interface card. Controls transmission of fibre packets.
FC-AL — Fibre Channel Arbitrated Loop. A serial data transfer architecture developed by a consortium of computer and mass storage device manufacturers and now being standardized by ANSI. FC-AL was designed for new mass storage devices and other peripheral devices that require very high bandwidth. Using optical fiber to connect devices, FC-AL supports full-duplex data transfer rates of 100MBps. FC-AL is compatible with SCSI for high-performance storage systems.
FCC — Federal Communications Commission.
FCIP — Fibre Channel over IP, a network storage technology that combines the features of Fibre Channel and the Internet Protocol (IP) to connect distributed SANs over large distances. FCIP is considered a tunneling protocol, as it makes a transparent point-to-point connection between geographically separated SANs over IP networks. FCIP relies on TCP/IP services to establish connectivity between remote SANs over LANs, MANs, or WANs. An advantage of FCIP is that it can use TCP/IP as the transport while keeping Fibre Channel fabric services intact.
FCoE — Fibre Channel over Ethernet. An encapsulation of Fibre Channel frames over Ethernet networks.
FCP — Fibre Channel Protocol.
FC-P2P — Fibre Channel Point-to-Point.
FCSE — Flashcopy Space Efficiency.
FC-SW — Fibre Channel Switched.
FCU — File Conversion Utility.
FD — Floppy Disk or Floppy Drive.
FDDI — Fiber Distributed Data Interface.
FDR — Fast Dump/Restore.
FE — Field Engineer.
FED — (Channel) Front End Director.
Fibre Channel — A serial data transfer architecture developed by a consortium of computer and mass storage device manufacturers and now being standardized by ANSI. The most prominent Fibre Channel standard is Fibre Channel Arbitrated Loop (FC-AL).
FICON — Fiber Connectivity. A high-speed input/output (I/O) interface for mainframe computer connections to storage devices. As part of IBM's S/390 server, FICON channels increase I/O capacity through the combination of a new architecture and faster physical link rates to make them up to 8 times as efficient as ESCON (Enterprise System Connection), IBM's previous fiber optic channel standard.
FIPP — Fair Information Practice Principles. Guidelines for the collection and use of personal information created by the United States Federal Trade Commission (FTC).
FISMA — Federal Information Security Management Act of 2002. A major compliance and privacy protection law that applies to information systems and cloud computing. Enacted in the United States of America in 2002.
FLGFAN — Front Logic Box Fan Assembly.
FLOGIC Box — Front Logic Box.
FM — Flash Memory. Each microprocessor has FM. FM is non-volatile memory that contains microcode.
FOP — Fibre Optic Processor or fibre open.
FPC — Failure Parts Code or Fibre Channel Protocol Chip.
FQDN — Fully Qualified Domain Name.
FPGA — Field Programmable Gate Array.
Frames — An ordered vector of words that is the basic unit of data transmission in a Fibre Channel network.
Front end — In client/server applications, the client part of the program is often called the front end and the server part is called the back end.
FRU — Field Replaceable Unit.
FS — File System.
FSA — File System Module-A.
FSB — File System Module-B.
FSI — Financial Services Industries.
FSM — File System Module.
FSW — Fibre Channel Interface Switch PCB. A board that provides the physical interface (cable connectors) between the ACP ports and the disks housed in a given disk drive.
FTP — File Transfer Protocol. A client-server protocol that allows a user on 1 computer to transfer files to and from another computer over a TCP/IP network.
FWD — Fast Write Differential.
-back to top-
—G—
GA — General availability.
GARD — General Available Restricted Distribution.
Gb — Gigabit.
GB — Gigabyte.
Gb/sec — Gigabit per second.
GB/sec — Gigabyte per second.
GbE — Gigabit Ethernet.
Gbps — Gigabit per second.
GBps — Gigabyte per second.
GBIC — Gigabit Interface Converter.
GCMI — Global Competitive and Marketing Intelligence (Hitachi).
GDG — Generation Data Group.
GDPS — Geographically Dispersed Parallel Sysplex.
GID — Group Identifier within the UNIX security model.
gigE — Gigabit Ethernet.
GLM — Gigabyte Link Module.
Global Cache — Cache memory is used on demand by multiple applications. Use changes dynamically, as required for READ performance between hosts/applications/LUs.
GPFS — General Parallel File System.
GSC — Global Support Center.
GSI — Global Systems Integrator.
GSS — Global Solution Services.
GSSD — Global Solutions Strategy and Development.
GSW — Grid Switch Adapter. Also known as E Switch (Express Switch).
GUI — Graphical User Interface.
GUID — Globally Unique Identifier.
-back to top-
—H—
H1F — Essentially the floor-mounted disk rack (also called desk side) equivalent of the RK. (See also: RK, RKA, and H2F).
H2F — Essentially the floor-mounted disk rack (also called desk side) add-on equivalent similar to the RKA. There is a limitation of only 1 H2F that can be added to the core RK Floor Mounted unit. See also: RK, RKA, and H1F.
HA — High Availability.
Hadoop — Apache Hadoop is an open-source software framework for data storage and large-scale processing of data-sets on clusters of hardware.
HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
HBA — Host Bus Adapter. An I/O adapter that sits between the host computer's bus and the Fibre Channel loop and manages the transfer of information between the 2 channels. In order to minimize the impact on host processor performance, the host bus adapter performs many low-level interface functions automatically or with minimal processor involvement.
HCA — Host Channel Adapter.
HCD — Hardware Configuration Definition.
HD — Hard Disk.
HDA — Head Disk Assembly.
HDD — Hard Disk Drive. A spindle of hard disk platters that make up a hard drive, which is a unit of physical storage within a subsystem.
HDDPWR — Hard Disk Drive Power.
HDU — Hard Disk Unit. A number of hard drives (HDDs) grouped together within a subsystem.
Head — See read/write head.
Heterogeneous — The characteristic of containing dissimilar elements. A common use of this word in information technology is to describe a product as able to contain or be part of a “heterogeneous network,” consisting of different manufacturers' products that can interoperate.
Heterogeneous networks are made possible by standards-conforming hardware and software interfaces used in common by different products, thus allowing them to communicate with each other. The Internet itself is an example of a heterogeneous network.
HiCAM — Hitachi Computer Products America.
HIPAA — Health Insurance Portability and Accountability Act.
HIS — (1) High Speed Interconnect. (2) Hospital Information System (clinical and financial).
HiStar — Multiple point-to-point data paths to cache.
HL7 — Health Level 7.
HLQ — High-level Qualifier.
HLS — Healthcare and Life Sciences.
HLU — Host Logical Unit.
H-LUN — Host Logical Unit Number. See LUN.
HMC — Hardware Management Console.
Homogeneous — Of the same or similar kind.
Host — Also called a server. Basically a central computer that processes end-user applications or requests.
Host LU — Host Logical Unit. See also HLU.
Host Storage Domains — Allows host pooling at the LUN level, and the priority access feature lets the administrator set service levels for applications.
HP — (1) Hewlett-Packard Company or (2) High Performance.
HPC — High Performance Computing.
HSA — Hardware System Area.
HSG — Host Security Group.
HSM — Hierarchical Storage Management (see Data Migrator).
HSN — Hierarchical Star Network.
HSSDC — High Speed Serial Data Connector.
HTTP — Hyper Text Transfer Protocol.
HTTPS — Hyper Text Transfer Protocol Secure.
Hub — A common connection point for devices in a network. Hubs are commonly used to connect segments of a LAN. A hub contains multiple ports. When a packet arrives at 1 port, it is copied to the other ports so that all segments of the LAN can see all packets. A switching hub actually reads the destination address of each packet and then forwards the packet to the correct port. Device to which nodes on a multi-point bus or loop are physically connected.
Hybrid Cloud — “Hybrid cloud computing refers to the combination of external public cloud computing services and internal resources (either a private cloud or traditional infrastructure, operations and applications) in a coordinated fashion to assemble a particular solution.” — Source: Gartner Research.
Hybrid Network Cloud — A composition of 2 or more clouds (private, community or public). Each cloud remains a unique entity but they are bound together. A hybrid network cloud includes an interconnection.
Hypervisor — Also called a virtual machine manager, a hypervisor is a hardware virtualization technique that enables multiple operating systems to run concurrently on the same computer. Hypervisors are often installed on server hardware then run the guest operating systems that act as servers. Hypervisor can also refer to the interface that is provided by Infrastructure as a Service (IaaS) in cloud computing. Leading hypervisors include VMware vSphere Hypervisor™ (ESXi), Microsoft® Hyper-V and the Xen® hypervisor.
-back to top-
—I—
I/F — Interface.
I/O — Input/Output. Term used to describe any program, operation, or device that transfers data to or from a computer and to or from a peripheral device.
IaaS — Infrastructure as a Service. A cloud computing business model — delivering computer infrastructure, typically a platform virtualization environment, as a service, along with raw (block) storage and networking. Rather than purchasing servers, software, data center space or network equipment, clients buy those resources as a fully outsourced service. Providers typically bill such services on a utility computing basis; the amount of resources consumed (and therefore the cost) will typically reflect the level of activity.
IDE — Integrated Drive Electronics Advanced Technology. A standard designed to connect hard and removable disk drives.
IDN — Integrated Delivery Network.
iFCP — Internet Fibre Channel Protocol.
Index Cache — Provides quick access to indexed data on the media during a browse/restore operation.
IBR — Incremental Block-level Replication or Intelligent Block Replication.
ICB — Integrated Cluster Bus.
ICF — Integrated Coupling Facility.
ID — Identifier.
IDR — Incremental Data Replication.
iFCP — Internet Fibre Channel Protocol. Allows an organization to extend Fibre Channel storage networks over the Internet by using TCP/IP. TCP is responsible for managing congestion control as well as error detection and recovery services. iFCP allows an organization to create an IP SAN fabric that minimizes the Fibre Channel fabric component and maximizes use of the company's TCP/IP infrastructure.
IFL — Integrated Facility for LINUX.
IHE — Integrating the Healthcare Enterprise.
IID — Initiator ID.
IIS — Internet Information Server.
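The Hub entry above distinguishes a plain hub, which repeats every packet to every port, from a switching hub, which reads the destination address and forwards to one port only. A minimal sketch of that difference, with invented class names and frame fields (not from any real product):

```python
# Plain hub: copy the frame to every port except the one it arrived on.
# Switching hub: learn source addresses, forward to a single known port.

class Hub:
    def __init__(self, num_ports):
        self.num_ports = num_ports

    def forward(self, in_port, frame):
        # All segments of the LAN see all packets.
        return {p: frame for p in range(self.num_ports) if p != in_port}

class SwitchingHub(Hub):
    def __init__(self, num_ports):
        super().__init__(num_ports)
        self.mac_table = {}  # learned address -> port

    def forward(self, in_port, frame):
        self.mac_table[frame["src"]] = in_port   # learn where the sender lives
        if frame["dst"] in self.mac_table:       # known destination: one port
            return {self.mac_table[frame["dst"]]: frame}
        return super().forward(in_port, frame)   # unknown: flood like a hub

hub = Hub(4)
assert set(hub.forward(0, {"src": "A", "dst": "B"})) == {1, 2, 3}

sw = SwitchingHub(4)
sw.forward(0, {"src": "A", "dst": "B"})          # floods, learns A is on port 0
out = sw.forward(2, {"src": "B", "dst": "A"})    # B -> A now goes to port 0 only
assert set(out) == {0}
```

The learning step is why switched LAN segments stop seeing each other's unicast traffic once both endpoints have spoken.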
Page G-12
ILM — Information Life Cycle Management.
ILO — (Hewlett-Packard) Integrated Lights-Out.
IML — Initial Microprogram Load.
IMS — Information Management System.
In-band virtualization — Refers to the location of the storage network path, between the application host servers and the storage systems. Provides both control and data along the same connection path. Also called symmetric virtualization.
INI — Initiator.
Interface — The physical and logical arrangement supporting the attachment of any device to a connector or to another device.
Internal bus — Another name for an internal data bus. Also, an expansion bus is often referred to as an internal bus.
Internal data bus — A bus that operates only within the internal circuitry of the CPU, communicating among the internal caches of memory that are part of the CPU chip’s design. This bus is typically rather quick and is independent of the rest of the computer’s operations.
IOC — I/O controller.
IOCDS — I/O Control Data Set.
IODF — I/O Definition file.
IOPH — I/O per hour.
IOS — I/O Supervisor.
IOSQ — Input/Output Subsystem Queue.
IP — Internet Protocol. The communications protocol that routes traffic across the Internet.
IPv6 — Internet Protocol Version 6. The latest revision of the Internet Protocol (IP).
IPL — Initial Program Load.
IPSEC — IP security.
IRR — Internal Rate of Return.
ISC — Initial shipping condition or Inter-System Communication.
iSCSI — Internet SCSI. Pronounced eye skuzzy. An IP-based standard for linking data storage devices over a network and transferring data by carrying SCSI commands over IP networks.
ISE — Integrated Scripting Environment.
iSER — iSCSI Extensions for RDMA.
ISL — Inter-Switch Link.
iSNS — Internet Storage Name Service.
ISOE — iSCSI Offload Engine.
ISP — Internet service provider.
ISPF — Interactive System Productivity Facility.
ISPF/PDF — Interactive System Productivity Facility/Program Development Facility.
ISV — Independent Software Vendor.
ITaaS — IT as a Service. A cloud computing business model. This general model is an umbrella model that entails the SPI business model (SaaS, PaaS and IaaS — Software, Platform and Infrastructure as a Service).
ITSC — Information and Telecommunications Systems Companies.
-back to top-
—J—
Java — A widely accepted, open systems programming language. Hitachi’s enterprise software products are all accessed using Java applications. This enables storage administrators to access the Hitachi enterprise software products from any PC or workstation that runs a supported thin-client internet browser application and that has TCP/IP network access to the computer on which the software product runs.
Java VM — Java Virtual Machine.
JBOD — Just a Bunch of Disks.
JCL — Job Control Language.
JMP — Jumper. Option setting method.
JMS — Java Message Service.
JNL — Journal.
JNLG — Journal Group.
JRE — Java Runtime Environment.
JVM — Java Virtual Machine.
J-VOL — Journal Volume.
-back to top-
—K—
KSDS — Key Sequence Data Set.
kVA — Kilovolt Ampere.
KVM — Kernel-based Virtual Machine or Keyboard-Video Display-Mouse.
kW — Kilowatt.
-back to top-
Page G-13
—L—
LACP — Link Aggregation Control Protocol.
LAG — Link Aggregation Groups.
LAN — Local Area Network. A communications network that serves clients within a geographical area, such as a building.
LBA — Logical Block Address. A 28-bit value that maps to a specific cylinder-head-sector address on the disk.
LC — Lucent connector. Fibre Channel connector that is smaller than a simplex connector (SC).
LCDG — Link Processor Control Diagnostics.
LCM — Link Control Module.
LCP — Link Control Processor. Controls the optical links. LCP is located in the LCM.
LCSS — Logical Channel Subsystems.
LCU — Logical Control Unit.
LD — Logical Device.
LDAP — Lightweight Directory Access Protocol.
LDEV — Logical Device or Logical Device (number). A set of physical disk partitions (all or portions of 1 or more disks) that are combined so that the subsystem sees and treats them as a single area of data storage. Also called a volume. An LDEV has a specific and unique address within a subsystem. LDEVs become LUNs to an open-systems host.
LDKC — Logical Disk Controller or Logical Disk Controller Manual.
LDM — Logical Disk Manager.
LDS — Linear Data Set.
LED — Light Emitting Diode.
LFF — Large Form Factor.
LIC — Licensed Internal Code.
LIS — Laboratory Information Systems.
LLQ — Lowest Level Qualifier.
LM — Local Memory.
LMODs — Load Modules.
LNKLST — Link List.
Load balancing — The process of distributing processing and communications activity evenly across a computer network so that no single device is overwhelmed. Load balancing is especially important for networks where it is difficult to predict the number of requests that will be issued to a server. If 1 server starts to be swamped, requests are forwarded to another server with more capacity. Load balancing can also refer to the communications channels themselves.
LOC — “Locations” section of the Maintenance Manual.
Logical DKC (LDKC) — Logical Disk Controller Manual. An internal architecture extension to the Control Unit addressing scheme that allows more LDEVs to be identified within 1 Hitachi enterprise storage system.
Longitudinal record — Patient information from birth to death.
LPAR — Logical Partition (mode).
LR — Local Router.
LRECL — Logical Record Length.
LRP — Local Router Processor.
LRU — Least Recently Used.
LSS — Logical Storage Subsystem (equivalent to LCU).
LU — Logical Unit. Mapping number of an LDEV.
LUN — Logical Unit Number. 1 or more LDEVs. Used only for open systems.
LUSE — Logical Unit Size Expansion. Feature used to create virtual LUs that are up to 36 times larger than the standard OPEN-x LUs.
LVDS — Low Voltage Differential Signal.
LVI — Logical Volume Image. Identifies a similar concept (as LUN) in the mainframe environment.
LVM — Logical Volume Manager.
-back to top-
—M—
MAC — Media Access Control. A MAC address is a unique identifier attached to most forms of networking equipment.
MAID — Massive array of disks.
MAN — Metropolitan Area Network. A communications network that generally covers a city or suburb. MAN is very similar to a LAN except it spans across a geographical region such as a state. Instead of the
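The LBA entry above says a logical block address is a flat value that maps to a cylinder-head-sector (CHS) address. The mapping follows the standard ATA formula; a quick sketch, with an invented example geometry (real drives report their own heads-per-cylinder and sectors-per-track figures):

```python
# Standard CHS <-> LBA translation. Geometry values are illustrative.
HEADS_PER_CYL = 16
SECTORS_PER_TRACK = 63   # sectors within a track are numbered from 1

def chs_to_lba(c, h, s):
    return (c * HEADS_PER_CYL + h) * SECTORS_PER_TRACK + (s - 1)

def lba_to_chs(lba):
    c, rem = divmod(lba, HEADS_PER_CYL * SECTORS_PER_TRACK)
    h, s0 = divmod(rem, SECTORS_PER_TRACK)
    return c, h, s0 + 1

assert chs_to_lba(0, 0, 1) == 0                 # the first sector is LBA 0
assert lba_to_chs(chs_to_lba(2, 3, 4)) == (2, 3, 4)  # round-trips cleanly
```

The 28-bit width mentioned in the entry is what bounds the addressable range of this scheme (2^28 sectors).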
Page G-14
workstations in a MAN could depict different cities in a state. For example, the state of Texas could have: Dallas, Austin, San Antonio. Each city could be a separate LAN and all the cities connected together via a switch. This topology would indicate a MAN.
MAPI — Management Application Programming Interface.
Mapping — Conversion between 2 data addressing spaces. For example, mapping refers to the conversion between physical disk block addresses and the block addresses of the virtual disks presented to operating environments by control software.
Mb — Megabit.
MB — Megabyte.
MBA — Memory Bus Adaptor.
MBUS — Multi-CPU Bus.
MC — Multi Cabinet.
MCU — Main Control Unit, Master Control Unit, Main Disk Control Unit or Master Disk Control Unit. The local CU of a remote copy pair.
MDPL — Metadata Data Protection Level.
MediaAgent — The workhorse for all data movement. MediaAgent facilitates the transfer of data between the data source, the client computer, and the destination storage media.
Metadata — In database management systems, data files are the files that store the database information; whereas other files, such as index files and data dictionaries, store administrative information, known as metadata.
MFC — Main Failure Code.
MG — (1) Module Group. 2 (DIMM) cache memory modules that work together. (2) Migration Group. A group of volumes to be migrated together.
MGC — (3-Site) Metro/Global Mirror.
MIB — Management Information Base. A database of objects that can be monitored by a network management system. Both SNMP and RMON use standardized MIB formats that allow any SNMP and RMON tools to monitor any device defined by a MIB.
Microcode — The lowest-level instructions that directly control a microprocessor. A single machine-language instruction typically translates into several microcode instructions. (Figure: abstraction layers — Fortran, Pascal and C at the high-level language layer, above assembly language, machine language and hardware.)
Microprogram — See Microcode.
MIF — Multiple Image Facility.
Mirror Cache OFF — Increases cache efficiency over cache data redundancy.
M-JNL — Primary journal volumes.
MM — Maintenance Manual.
MMC — Microsoft Management Console.
Mode — The state or setting of a program or device. The term mode implies a choice, which is that you can change the setting and put the system in a different mode.
MP — Microprocessor.
MPA — Microprocessor adapter.
MPB — Microprocessor board.
MPI — (Electronic) Master Patient Identifier. Also known as EMPI.
MPIO — Multipath I/O.
MP PK — MP Package.
MPU — Microprocessor Unit.
MQE — Metadata Query Engine (Hitachi).
MS/SG — Microsoft Service Guard.
MSCS — Microsoft Cluster Server.
MSS — (1) Multiple Subchannel Set. (2) Managed Security Services.
MTBF — Mean Time Between Failure.
MTS — Multitiered Storage.
Multitenancy — In cloud computing, multitenancy is a secure way to partition the infrastructure (application, storage pool and network) so multiple customers share a single resource pool. Multitenancy is one of the key ways cloud can achieve massive economy of scale.
M-VOL — Main Volume.
MVS — Multiple Virtual Storage.
-back to top-
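The Mapping entry above describes translating block addresses on a virtual disk into physical disk block addresses. A minimal sketch using an extent table; the table contents and device names are invented for illustration, not taken from any product:

```python
# Each extent maps a contiguous run of virtual blocks to a physical device:
# (virtual start, length, physical device, physical start)
extents = [
    (0,   100, "pdev0", 5000),
    (100, 50,  "pdev1", 0),
]

def resolve(vblock):
    """Translate a virtual block address to (device, physical block)."""
    for vstart, length, dev, pstart in extents:
        if vstart <= vblock < vstart + length:
            return dev, pstart + (vblock - vstart)
    raise ValueError("unmapped virtual block")

assert resolve(0) == ("pdev0", 5000)
assert resolve(149) == ("pdev1", 49)
```

Control software maintains tables of exactly this shape so the operating environment sees one flat virtual address space regardless of where the data physically lives.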
Page G-15
—N—
NAS — Network Attached Storage. A disk array connected to a controller that gives access to a LAN Transport. It handles data at the file level.
NAT — Network Address Translation.
NDMP — Network Data Management Protocol. A protocol meant to transport data between NAS devices.
NetBIOS — Network Basic Input/Output System.
Network — A computer system that allows sharing of resources, such as files and peripheral hardware devices.
Network Cloud — A communications network. The word "cloud" by itself may refer to any local area network (LAN) or wide area network (WAN). The terms “computing" and "cloud computing" refer to services offered on the public Internet or to a private network that uses the same protocols as a standard network. See also cloud computing.
NFS protocol — Network File System is a protocol that allows a computer to access files over a network as easily as if they were on its local disks.
NIM — Network Interface Module.
NIS — Network Information Service (originally called the Yellow Pages or YP).
NIST — National Institute of Standards and Technology. A standards organization active in cloud computing.
NLS — Native Language Support.
Node — An addressable entity connected to an I/O bus or network, used primarily to refer to computers, storage devices, and storage subsystems. The component of a node that connects to the bus or network is a port.
Node name — A Name_Identifier associated with a node.
NPV — Net Present Value.
NRO — Network Recovery Objective.
NTP — Network Time Protocol.
NVS — Non Volatile Storage.
-back to top-
—O—
OCC — Open Cloud Consortium. A standards organization active in cloud computing.
OEM — Original Equipment Manufacturer.
OFC — Open Fibre Control.
OGF — Open Grid Forum. A standards organization active in cloud computing.
OID — Object identifier.
OLA — Operating Level Agreements.
OLTP — On-Line Transaction Processing.
OLTT — Open-loop throughput throttling.
OMG — Object Management Group. A standards organization active in cloud computing.
On/Off CoD — On/Off Capacity on Demand.
ONODE — Object node.
OPEX — Operational Expenditure. This is an operating expense, operating expenditure, operational expense, or operational expenditure, which is an ongoing cost for running a product, business, or system. Its counterpart is a capital expenditure (CAPEX).
ORM — Online Read Margin.
OS — Operating System.
Out-of-band virtualization — Refers to systems where the controller is located outside of the SAN data path. Separates control and data on different connection paths. Also called asymmetric virtualization.
-back to top-
—P—
P-2-P — Point to Point. Also P-P.
PaaS — Platform as a Service. A cloud computing business model — delivering a computing platform and solution stack as a service. PaaS offerings facilitate deployment of applications without the cost and complexity of buying and managing the underlying hardware, software and provisioning hosting capabilities. PaaS provides all of the facilities required to support the complete life cycle of building and delivering web applications and services entirely from the Internet.
PACS — Picture Archiving and Communication System.
Page G-16
PAN — Personal Area Network. A communications network that transmits data wirelessly over a short distance. Bluetooth and Wi-Fi Direct are examples of personal area networks.
PAP — Password Authentication Protocol.
Parity — A technique of checking whether data has been lost or written over when it is moved from 1 place in storage to another or when it is transmitted between computers.
Parity Group — Also called an array group. This is a group of hard disk drives (HDDs) that form the basic unit of storage in a subsystem. All HDDs in a parity group must have the same physical capacity.
Partitioned cache memory — Separate workloads in a “storage consolidated” system by dividing cache into individually managed multiple partitions. Then customize the partition to match the I/O characteristics of assigned LUs.
PAT — Port Address Translation.
PATA — Parallel ATA.
Path — Also referred to as a transmission channel, the path between 2 nodes of a network that a data communication follows. The term can refer to the physical cabling that connects the nodes on a network, the signal that is communicated over the pathway or a sub-channel in a carrier frequency.
Path failover — See Failover.
PAV — Parallel Access Volumes.
PAWS — Protect Against Wrapped Sequences.
PB — Petabyte.
PBC — Port By-pass Circuit.
PCB — Printed Circuit Board.
PCHIDS — Physical Channel Path Identifiers.
PCI — Power Control Interface.
PCI CON — Power Control Interface Connector Board.
PCI DSS — Payment Card Industry Data Security Standard.
PCIe — Peripheral Component Interconnect Express.
PD — Product Detail.
PDEV — Physical Device.
PDM — Policy based Data Migration or Primary Data Migrator.
PDS — Partitioned Data Set.
PDSE — Partitioned Data Set Extended.
Performance — Speed of access or the delivery of information.
Petabyte (PB) — A measurement of capacity — the amount of data that a drive or storage system can store after formatting. 1PB = 1,024TB.
PFA — Predictive Failure Analysis.
PFTaaS — Private File Tiering as a Service. A cloud computing business model.
PGP — Pretty Good Privacy (encryption).
PGR — Persistent Group Reserve.
PI — Product Interval.
PIR — Performance Information Report.
PiT — Point-in-Time.
PK — Package (see PCB).
PL — Platter. The circular disk on which the magnetic data is stored. Also called motherboard or backplane.
PM — Package Memory.
POC — Proof of concept.
Port — In TCP/IP and UDP networks, an endpoint to a logical connection. The port number identifies what type of port it is. For example, port 80 is used for HTTP traffic.
POSIX — Portable Operating System Interface for UNIX. A set of standards that defines an application programming interface (API) for software designed to run under heterogeneous operating systems.
PP — Program product.
P-P — Point-to-point; also P2P.
PPRC — Peer-to-Peer Remote Copy.
Private Cloud — A type of cloud computing defined by shared capabilities within a single company; modest economies of scale and less automation. Infrastructure and data reside inside the company’s data center behind a firewall. Comprised of licensed software tools rather than on-going services. Example: An organization implements its own virtual, scalable cloud and business units are charged on a per use basis.
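The Parity entry above describes detecting data that was lost or overwritten in transit. The simplest form is a single parity bit that makes the count of 1-bits even; a quick sketch with made-up payload bytes:

```python
# Even-parity check: store one extra bit so any single flipped bit
# changes the 1-bit count from even to odd and is detected.

def parity_bit(data: bytes) -> int:
    ones = sum(bin(b).count("1") for b in data)
    return ones % 2          # 0 when the 1-bit count is already even

def verify(data: bytes, stored_parity: int) -> bool:
    return parity_bit(data) == stored_parity

payload = b"\x5a\x03"        # 6 one-bits in total, so parity is 0
p = parity_bit(payload)
assert verify(payload, p)

corrupted = b"\x5b\x03"      # one bit flipped during the move
assert not verify(corrupted, p)
```

A single parity bit detects any odd number of flipped bits but cannot say which bit flipped; the RAID entries later in this glossary show how the same idea, applied per stripe, also allows reconstruction.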
Page G-17
Private Network Cloud — A type of cloud network with 3 characteristics: (1) Operated solely for a single organization, (2) Managed internally or by a third-party, (3) Hosted internally or externally.
PR/SM — Processor Resource/System Manager.
Protocol — A convention or standard that enables the communication between 2 computing endpoints. In its simplest form, a protocol can be defined as the rules governing the syntax, semantics, and synchronization of communication. Protocols may be implemented by hardware, software, or a combination of the 2. At the lowest level, a protocol defines the behavior of a hardware connection.
Provisioning — The process of allocating storage resources and assigning storage capacity for an application, usually in the form of server disk drive space, in order to optimize the performance of a storage area network (SAN). Traditionally, this has been done by the SAN administrator, and it can be a tedious process. In recent years, automated storage provisioning (also called auto-provisioning) programs have become available. These programs can reduce the time required for the storage provisioning process, and can free the administrator from the often distasteful task of performing this chore manually.
PS — Power Supply.
PSA — Partition Storage Administrator.
PSSC — Perl Silicon Server Control.
PSU — Power Supply Unit.
PTAM — Pickup Truck Access Method.
PTF — Program Temporary Fixes.
PTR — Pointer.
PU — Processing Unit.
Public Cloud — Resources, such as applications and storage, available to the general public over the Internet.
P-VOL — Primary Volume.
-back to top-
—Q—
QD — Quorum Device.
QDepth — The number of I/O operations that can run in parallel on a SAN device; also WWN QDepth.
QoS — Quality of Service. In the field of computer networking, the traffic engineering term quality of service (QoS) refers to resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.
QSAM — Queued Sequential Access Method.
-back to top-
—R—
RACF — Resource Access Control Facility.
RAID — Redundant Array of Independent Disks, or Redundant Array of Inexpensive Disks. A group of disks that look like a single volume to the server. RAID improves performance by pulling a single stripe of data from multiple disks, and improves fault-tolerance either through mirroring or parity checking and it is a component of a customer’s SLA.
RAID-0 — Striped array with no parity.
RAID-1 — Mirrored array and duplexing.
RAID-3 — Striped array with typically non-rotating parity, optimized for long, single-threaded transfers.
RAID-4 — Striped array with typically non-rotating parity, optimized for short, multi-threaded transfers.
RAID-5 — Striped array with typically rotating parity, optimized for short, multithreaded transfers.
RAID-6 — Similar to RAID-5, but with dual rotating parity physical disks, tolerating 2 physical disk failures.
RAIN — Redundant (or Reliable) Array of Independent Nodes (architecture).
RAM — Random Access Memory.
RAM DISK — A LUN held entirely in the cache area.
RAS — Reliability, Availability, and Serviceability or Row Address Strobe.
RBAC — Role Base Access Control.
RC — (1) Reference Code or (2) Remote Control.
RCHA — RAID Channel Adapter.
RCP — Remote Control Processor.
RCU — Remote Control Unit or Remote Disk Control Unit.
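The RAID-5 and RAID-6 entries above rely on stripe parity: the parity chunk is the XOR of the data chunks in a stripe, so any one lost chunk can be rebuilt from the survivors. A small sketch with arbitrary example bytes standing in for the per-disk chunks:

```python
# XOR all chunks of a stripe together. XOR-ing the surviving chunks
# with the parity chunk reproduces the missing one.

def xor_chunks(chunks):
    out = bytes(len(chunks[0]))
    for c in chunks:
        out = bytes(a ^ b for a, b in zip(out, c))
    return out

stripe = [b"\x01\x02", b"\xf0\x0f", b"\xaa\x55"]   # data chunks on 3 disks
parity = xor_chunks(stripe)                        # parity chunk on a 4th disk

# Suppose the disk holding stripe[1] fails:
survivors = [stripe[0], stripe[2], parity]
rebuilt = xor_chunks(survivors)
assert rebuilt == stripe[1]
```

RAID-5 rotates which disk holds the parity chunk from stripe to stripe so that parity writes do not bottleneck on one spindle; RAID-6 adds a second, independently computed parity chunk to survive 2 failures.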
Page G-18
RCUT — RCU Target.
RD/WR — Read/Write.
RDM — Raw Disk Mapped.
RDMA — Remote Direct Memory Access.
RDP — Remote Desktop Protocol.
RDW — Record Descriptor Word.
Read/Write Head — Reads and writes data to the platters; typically there is 1 head per platter side, and each head is attached to a single actuator shaft.
RECFM — Record Format.
Redundant — Describes the computer or network system components, such as fans, hard disk drives, servers, operating systems, switches, and telecommunication links that are installed to back up primary resources in case they fail. A well-known example of a redundant system is the redundant array of independent disks (RAID). Redundancy contributes to the fault tolerance of a system.
Redundancy — Backing up a component to help ensure high availability.
Reliability — (1) Level of assurance that data will not be lost or degraded over time. (2) An attribute of any computer component (software, hardware, or a network) that consistently performs according to its specifications.
REST — Representational State Transfer.
REXX — Restructured extended executor.
RID — Relative Identifier that uniquely identifies a user or group within a Microsoft Windows domain.
RIS — Radiology Information System.
RISC — Reduced Instruction Set Computer.
RIU — Radiology Imaging Unit.
R-JNL — Secondary journal volumes.
RK — Rack additional.
RKAJAT — Rack Additional SATA disk tray.
RKAK — Expansion unit.
RLGFAN — Rear Logic Box Fan Assembly.
RLOGIC BOX — Rear Logic Box.
RMF — Resource Measurement Facility.
RMI — Remote Method Invocation. A way that a programmer, using the Java programming language and development environment, can write object-oriented programming in which objects on different computers can interact in a distributed network. RMI is the Java version of what is generally known as a RPC (remote procedure call), but with the ability to pass 1 or more objects along with the request.
RndRD — Random read.
ROA — Return on Asset.
RoHS — Restriction of Hazardous Substances (in Electrical and Electronic Equipment).
ROI — Return on Investment.
ROM — Read Only Memory.
Round robin mode — A load balancing technique which distributes data packets equally among the available paths. Round robin DNS is usually used for balancing the load of geographically distributed Web servers. It works on a rotating basis in that one server IP address is handed out, then moves to the back of the list; the next server IP address is handed out, and then it moves to the end of the list; and so on, depending on the number of servers being used. This works in a looping fashion.
Router — A computer networking device that forwards data packets toward their destinations, through a process known as routing.
RPC — Remote procedure call.
RPO — Recovery Point Objective. The point in time that recovered data should match.
RPSFAN — Rear Power Supply Fan Assembly.
RRDS — Relative Record Data Set.
RS CON — RS232C/RS422 Interface Connector.
RSD — RAID Storage Division (of Hitachi).
R-SIM — Remote Service Information Message.
RSM — Real Storage Manager.
RTM — Recovery Termination Manager.
RTO — Recovery Time Objective. The length of time that can be tolerated between a disaster and recovery of data.
R-VOL — Remote Volume.
R/W — Read/Write.
-back to top-
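The Round robin mode entry above describes handing out one server address, moving it to the back of the list, and repeating in a loop. A minimal sketch of that rotation, with invented addresses:

```python
from collections import deque

# The pool of server IP addresses; values are illustrative only.
servers = deque(["10.0.0.1", "10.0.0.2", "10.0.0.3"])

def next_server():
    ip = servers[0]
    servers.rotate(-1)   # hand the address out, then send it to the back
    return ip

picks = [next_server() for _ in range(5)]
assert picks == ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.1", "10.0.0.2"]
```

The same rotation applies whether the rotated items are DNS answers, multipath I/O paths, or requests across a server farm; the scheme assumes the targets have roughly equal capacity, since it never measures load.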
Page G-19
—S—
SA — Storage Administrator.
SA z/OS — System Automation for z/OS.
SAA — Share Access Authentication. The process of restricting a user's rights to a file system object by combining the security descriptors from both the file system object itself and the share to which the user is connected.
SaaS — Software as a Service. A cloud computing business model. SaaS is a software delivery model in which software and its associated data are hosted centrally in a cloud and are typically accessed by users using a thin client, such as a web browser via the Internet. SaaS has become a common delivery model for most business applications, including accounting (CRM and ERP), invoicing (HRM), content management (CM) and service desk management, just to name the most common software that runs in the cloud. This is the fastest growing service in the cloud market today. SaaS performs best for relatively simple tasks in IT-constrained organizations.
SACK — Sequential Acknowledge.
SACL — System ACL. The part of a security descriptor that stores system auditing information.
SAIN — SAN-attached Array of Independent Nodes (architecture).
SAN — Storage Area Network. A network linking computing devices to disk or tape arrays and other devices over Fibre Channel. It handles data at the block level.
SAP — (1) System Assist Processor (for I/O processing), or (2) a German software company.
SAP HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
SARD — System Assurance Registration Document.
SAS — Serial Attached SCSI.
SATA — Serial ATA. Serial Advanced Technology Attachment is a new standard for connecting hard drives into computer systems. SATA is based on serial signaling technology, unlike current IDE (Integrated Drive Electronics) hard drives that use parallel signaling.
SBM — Solutions Business Manager.
SBOD — Switched Bunch of Disks.
SBSC — Smart Business Storage Cloud.
SBX — Small Box (Small Form Factor).
SC — (1) Simplex connector. Fibre Channel connector that is larger than a Lucent connector (LC). (2) Single Cabinet.
SCM — Supply Chain Management.
SCP — Secure Copy.
SCSI — Small Computer Systems Interface. A parallel bus architecture and a protocol for transmitting large data blocks up to a distance of 15 to 25 meters.
SD — Software Division (of Hitachi).
SDH — Synchronous Digital Hierarchy.
SDM — System Data Mover.
SDSF — Spool Display and Search Facility.
Sector — A sub-division of a track of a magnetic disk that stores a fixed amount of data.
SEL — System Event Log.
Selectable segment size — Can be set per partition.
Selectable Stripe Size — Increases performance by customizing the disk access size.
SENC — Is the SATA (Serial ATA) version of the ENC. ENCs and SENCs are complete microprocessor systems on their own and they occasionally require a firmware upgrade.
SeqRD — Sequential read.
Serial Transmission — The transmission of data bits in sequential order over a single line.
Server — A central computer that processes end-user applications or requests, also called a host.
Server Virtualization — The masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users. The implementation of multiple isolated virtual environments in one physical server.
Service-level Agreement — SLA. A contract between a network service provider and a customer that specifies, usually in measurable terms, what services the network service provider will furnish. Many Internet service providers (ISP) provide their customers with a SLA. More recently, IT departments in major enterprises have
Page G-20
adopted the idea of writing a service-level agreement so that services for their customers (users in other departments within the enterprise) can be measured, justified, and perhaps compared with those of outsourcing network providers.
Some metrics that SLAs may specify include:
• The percentage of time services will be available
• The number of users that can be served simultaneously
• Specific performance benchmarks to which actual performance will be periodically compared
• The schedule for notification in advance of network changes that may affect users
• Help desk response time for various classes of problems
• Dial-in access availability
• Usage statistics that will be provided
Service-Level Objective — SLO. Individual performance metrics built into an SLA. Each SLO corresponds to a single performance characteristic relevant to the delivery of an overall service. Some examples of SLOs include system availability, help desk incident resolution time, and application response time.
SES — SCSI Enclosure Services.
SFF — Small Form Factor.
SFI — Storage Facility Image.
SFM — Sysplex Failure Management.
SFP — Small Form-Factor Pluggable module host connector. A specification for a generation of optical modular transceivers. The devices are designed for use with small form factor (SFF) connectors, offer high speed and physical compactness, and are hot-swappable.
SHSN — Shared memory Hierarchical Star Network.
SID — Security Identifier. A user or group identifier within the Microsoft Windows security model.
SIGP — Signal Processor.
SIM — (1) Service Information Message. A message reporting an error that contains fix guidance information. (2) Storage Interface Module. (3) Subscriber Identity Module.
SIM RC — Service (or system) Information Message Reference Code.
SIMM — Single In-line Memory Module.
SLA — Service Level Agreement.
SLO — Service Level Objective.
SLRP — Storage Logical Partition.
SM — Shared Memory or Shared Memory Module. Stores the shared information about the subsystem and the cache control information (director names). This type of information is used for the exclusive control of the subsystem. Like cache, shared memory is controlled as 2 areas of memory and is fully nonvolatile (sustained for approximately 7 days).
SM PATH — Shared Memory Access Path. The access path from the processors of the CHA and DKA PCBs to shared memory.
SMB/CIFS — Server Message Block Protocol/Common Internet File System.
SMC — Shared Memory Control.
SME — Small and Medium Enterprise.
SMF — System Management Facility.
SMI-S — Storage Management Initiative Specification.
SMP — Symmetric Multiprocessing.
SMP/E — System Modification Program/Extended. An IBM-licensed program used to install software and software changes on z/OS systems.
SMS — System Managed Storage.
SMTP — Simple Mail Transfer Protocol.
SMU — System Management Unit.
Snapshot Image — A logical duplicate volume (V-VOL) of the primary volume. It is an internal volume intended for restoration.
SNIA — Storage Networking Industry Association. An association of producers and consumers of storage networking products whose goal is to further storage networking technology and applications. Active in cloud computing.
SNMP — Simple Network Management Protocol. A TCP/IP protocol designed for the management of networks over TCP/IP, using agents and stations.
SOA — Service Oriented Architecture.
SOAP — Simple Object Access Protocol. A way for a program running in one kind of operating system (such as Windows 2000) to communicate with a program in the same or another kind of operating system (such as Linux) by using the World Wide Web's Hypertext Transfer Protocol (HTTP) and its Extensible Markup Language (XML) as the mechanisms for information exchange.
Socket — In UNIX and some other operating systems, a socket is a software object that connects an application to a network protocol. In UNIX, for example, a program can send and receive TCP/IP messages by opening a socket and reading and writing data to and from the socket. This simplifies program development because the programmer need only worry about manipulating the socket and can rely on the operating system to actually transport messages across the network correctly. Note that a socket in this sense is completely soft; it is a software object, not a physical component.
SOM — System Option Mode.
SONET — Synchronous Optical Network.
SOSS — Service Oriented Storage Solutions.
SPaaS — SharePoint as a Service. A cloud computing business model.
SPAN — A section between 2 intermediate supports. See Storage pool.
Spare — An object reserved for the purpose of substitution for a like object in case of that object's failure.
SPC — SCSI Protocol Controller.
SpecSFS — Standard Performance Evaluation Corporation Shared File system.
SPECsfs97 — Standard Performance Evaluation Corporation (SPEC) System File Server (sfs) benchmark developed in 1997 (97).
SPI model — Software, Platform and Infrastructure as a service. A common term to describe the cloud computing "as a service" business model.
SRA — Storage Replicator Adapter.
SRDF/A — (EMC) Symmetrix Remote Data Facility Asynchronous.
SRDF/S — (EMC) Symmetrix Remote Data Facility Synchronous.
SRM — Site Recovery Manager.
SSB — Sense Byte.
SSC — SiliconServer Control.
SSCH — Start Subchannel.
SSD — Solid-State Drive or Solid-State Disk.
SSH — Secure Shell.
SSID — Storage Subsystem ID or Subsystem Identifier.
SSL — Secure Sockets Layer.
SSPC — System Storage Productivity Center.
SSUE — Split SUSpended Error.
SSUS — Split SUSpend.
SSVP — Sub Service Processor. Interfaces the SVP to the DKC.
SSW — SAS Switch.
Sticky Bit — Extended UNIX mode bit that prevents objects from being deleted from a directory by anyone other than the object's owner, the directory's owner or the root user.
Storage pooling — The ability to consolidate and manage storage resources across storage system enclosures, where the consolidation of many appears as a single view.
STP — Server Time Protocol.
STR — Storage and Retrieval Systems.
Striping — A RAID technique for writing a file to multiple disks on a block-by-block basis, with or without parity.
Subsystem — Hardware or software that performs a specific function within a larger system.
SVC — Supervisor Call Interruption.
SVC Interrupts — Supervisor calls.
S-VOL — (1) (ShadowImage) Source Volume for In-System Replication, or (2) (Universal Replicator) Secondary Volume.
SVP — Service Processor. A laptop computer mounted on the control frame (DKC) and used for monitoring, maintenance and administration of the subsystem.
Switch — A fabric device providing full bandwidth per port and high-speed routing of data via link-level addressing.
SWPX — Switching power supply.
SXP — SAS Expander.
Symmetric virtualization — See In-band virtualization.
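The Socket entry above describes a program opening a socket and simply reading and writing data while the operating system handles transport. A minimal Python sketch of that idea (illustrative only, not part of the course material), using a connected local socket pair:

```python
import socket

# A connected pair of local sockets: the program only reads and writes
# the socket objects; the operating system moves the bytes.
parent, child = socket.socketpair()

parent.sendall(b"hello over a socket")  # write into one end
data = child.recv(1024)                 # read from the other end
print(data.decode())                    # prints: hello over a socket

parent.close()
child.close()
```

The same read/write pattern applies unchanged when the peer is a remote host reached over TCP/IP rather than a local socket pair.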
Synchronous — Operations that have a fixed time relationship to each other. Most commonly used to denote I/O operations that occur in time sequence, i.e., a successor operation does not occur until its predecessor is complete.
-back to top-

—T—
Target — The system component that receives a SCSI I/O command; an open device that operates at the request of the initiator.
TB — Terabyte. 1TB = 1,024GB.
TCDO — Total Cost of Data Ownership.
TCO — Total Cost of Ownership.
TCP/IP — Transmission Control Protocol over Internet Protocol.
TDCONV — Trace Dump CONVerter. A software program used to convert traces taken on the system into readable text. This information is loaded into a special spreadsheet that allows further investigation of the data and more in-depth failure analysis.
TDMF — Transparent Data Migration Facility.
Telco or TELCO — Telecommunications Company.
TEP — Tivoli Enterprise Portal.
Terabyte (TB) — A measurement of capacity, data or data storage. 1TB = 1,024GB.
TFS — Temporary File System.
TGTLIBs — Target Libraries.
THF — Front Thermostat.
Thin Provisioning — Allows storage space to be easily allocated to servers on a just-enough and just-in-time basis.
THR — Rear Thermostat.
Throughput — The amount of data transferred from 1 place to another or processed in a specified amount of time. Data transfer rates for disk drives and networks are measured in terms of throughput. Typically, throughputs are measured in kbps, Mbps and Gb/sec.
TID — Target ID.
Tiered storage — A storage strategy that matches data classification to storage metrics. Tiered storage is the assignment of different categories of data to different types of storage media in order to reduce total storage cost. Categories may be based on levels of protection needed, performance requirements, frequency of use, and other considerations. Since assigning data to particular media may be an ongoing and complex activity, some vendors provide software for automatically managing the process based on a company-defined policy.
Tiered Storage Promotion — Moving data between tiers of storage as its availability requirements change.
TLS — Tape Library System.
TLS — Transport Layer Security.
TMP — Temporary or Test Management Program.
TOD (or ToD) — Time Of Day.
TOE — TCP Offload Engine.
Topology — The shape of a network or how it is laid out. Topologies are either physical or logical.
TPC-R — Tivoli Productivity Center for Replication.
TPF — Transaction Processing Facility.
TPOF — Tolerable Points of Failure.
Track — Circular segment of a hard disk or other storage media.
Transfer Rate — See Data Transfer Rate.
Trap — A program interrupt, usually an interrupt caused by some exceptional situation in the user program. In most cases, the operating system performs some action and then returns control to the program.
TSC — Tested Storage Configuration.
TSO — Time Sharing Option.
TSO/E — Time Sharing Option/Extended.
T-VOL — (ShadowImage) Target Volume for In-System Replication.
-back to top-

—U—
UA — Unified Agent.
UBX — Large Box (Large Form Factor).
UCB — Unit Control Block.
UDP — User Datagram Protocol. One of the core protocols of the Internet protocol suite. Using UDP, programs on networked computers can send short messages known as datagrams to one another.
UFA — UNIX File Attributes.
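To illustrate the UDP entry above, a short Python sketch (illustrative only, not part of the course material) that sends a single datagram over the loopback interface; the receiving port is whatever the operating system assigns:

```python
import socket

# Receiver: bind a UDP socket; port 0 lets the OS pick a free port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# Sender: no connection setup; the datagram is simply addressed and sent.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"a short datagram", ("127.0.0.1", port))

message, addr = receiver.recvfrom(1024)  # receive one whole datagram
print(message.decode())                  # prints: a short datagram

sender.close()
receiver.close()
```

Unlike TCP, delivery and ordering are not guaranteed here; on the loopback interface the datagram arrives, but real networks may drop it.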
UID — User Identifier within the UNIX security model.
UPS — Uninterruptible Power Supply. A power supply that includes a battery to maintain power in the event of a power outage.
UR — Universal Replicator.
UUID — Universally Unique Identifier.
-back to top-

—V—
vContinuum — Using the vContinuum wizard, users can push agents to primary and secondary servers, set up protection and perform failovers and failbacks.
VCS — Veritas Cluster System.
VDEV — Virtual Device.
VDI — Virtual Desktop Infrastructure.
VHD — Virtual Hard Disk.
VHDL — VHSIC (Very-High-Speed Integrated Circuit) Hardware Description Language.
VHSIC — Very-High-Speed Integrated Circuit.
VI — Virtual Interface. A research prototype that is undergoing active development, and the details of the implementation may change considerably. It is an application interface that gives user-level processes direct but protected access to network interface cards. This allows applications to bypass IP processing overheads (for example, copying data, computing checksums) and system call overheads while still preventing 1 process from accidentally or maliciously tampering with or reading data being used by another.
Virtualization — Referring to storage virtualization, virtualization is the amalgamation of multiple network storage devices into what appears to be a single storage unit. Storage virtualization is often used in a SAN, and makes tasks such as archiving, backup and recovery easier and faster. Storage virtualization is usually implemented via software applications. There are many additional types of virtualization.
Virtual Private Cloud (VPC) — Private cloud existing within a shared or public cloud (for example, the Intercloud). Also known as a virtual private network cloud.
VLL — Virtual Logical Volume Image/Logical Unit Number.
VLUN — Virtual LUN. Customized volume. Size chosen by user.
VLVI — Virtual Logic Volume Image. Marketing name for CVS (custom volume size).
VM — Virtual Machine.
VMDK — Virtual Machine Disk file format.
VNA — Vendor Neutral Archive.
VOJP — (Cache) Volatile Jumper.
VOLID — Volume ID.
VOLSER — Volume Serial Numbers.
Volume — A fixed amount of storage on a disk or tape. The term volume is often used as a synonym for the storage medium itself, but it is possible for a single disk to contain more than 1 volume or for a volume to span more than 1 disk.
VPC — Virtual Private Cloud.
VSAM — Virtual Storage Access Method.
VSD — Virtual Storage Director.
VSP — Virtual Storage Platform.
VSS — (Microsoft) Volume Shadow Copy Service.
VTL — Virtual Tape Library.
VTOC — Volume Table of Contents.
VTOCIX — Volume Table of Contents Index.
VVDS — Virtual Volume Data Set.
V-VOL — Virtual Volume.
-back to top-
WL — Wide Link.
WLM — Work Load Manager.
WORM — Write Once, Read Many.
WSDL — Web Services Description Language.
WSRM — Write Seldom, Read Many.
WTREE — Directory Tree Object or Working Tree.
WWN — World Wide Name. A unique identifier for an open-system host. It consists of a 64-bit physical address (the IEEE 48-bit format with a 12-bit extension and a 4-bit prefix).
WWNN — World Wide Node Name. A globally unique 64-bit identifier assigned to each Fibre Channel node process.
WWPN — World Wide Port Name. A globally unique 64-bit identifier assigned to each Fibre Channel port. A Fibre Channel port's WWPN is permitted to use any of several naming authorities. Fibre Channel specifies a Network Address Authority (NAA) to distinguish between the various name registration authorities that may be used to identify the WWPN.
-back to top-

—X—
XAUI — "X"=10, AUI = Attachment Unit Interface.
XCF — Cross System Communications Facility.
XDS — Cross Enterprise Document Sharing.
XDSi — Cross Enterprise Document Sharing for Imaging.
XFI — Standard interface for connecting a 10Gb Ethernet MAC device to an XFP interface.
XFP — "X"=10Gb Small Form Factor Pluggable.
XML — eXtensible Markup Language.
XRC — Extended Remote Copy.
-back to top-

—Y—
YB — Yottabyte.
Yottabyte — The highest-end measurement of data at the present time. 1YB = 1,024ZB, or approximately 1 quadrillion GB. A recent estimate (2011) is that all the computer hard drives in the world do not contain 1YB of data.
-back to top-

—Z—
z/OS — z Operating System (IBM® S/390® or z/OS® environments).
z/OS NFS — (System) z/OS Network File System.
z/OSMF — (System) z/OS Management Facility.
zAAP — (System) z Application Assist Processor (for Java and XML workloads).
ZCF — Zero Copy Failover. Also known as Data Access Path (DAP).
Zettabyte (ZB) — A high-end measurement of data at the present time. 1ZB = 1,024EB.
zFS — (System) zSeries File System.
zHPF — (System) z High Performance FICON.
zIIP — (System) z Integrated Information Processor (specialty processor for databases).
Zone — A collection of Fibre Channel ports that are permitted to communicate with each other via the fabric.
Zoning — A method of subdividing a storage area network into disjoint zones, or subsets of nodes on the network. Storage area network nodes outside a zone are invisible to nodes within the zone. Moreover, with switched SANs, traffic within each zone may be physically isolated from traffic outside the zone.
-back to top-
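The WWN and WWPN entries above describe a World Wide Name as a 64-bit identifier whose Network Address Authority (NAA) field identifies the naming scheme. A small Python sketch (illustrative only; the sample WWPN values below are made up) that reads the NAA field from the identifier's most significant nibble:

```python
def wwpn_naa(wwpn: str) -> int:
    """Return the NAA field (the top 4 bits) of a colon-separated WWPN."""
    raw = bytes.fromhex(wwpn.replace(":", ""))
    if len(raw) != 8:
        raise ValueError("a WWPN is a 64-bit (8-byte) identifier")
    return raw[0] >> 4  # NAA occupies the most significant nibble

# Hypothetical example values:
print(wwpn_naa("50:06:0e:80:12:34:56:78"))  # prints: 5 (IEEE registered format)
print(wwpn_naa("21:00:00:24:ff:00:00:01"))  # prints: 2 (IEEE extended format)
```

Tools that display zoning configurations typically show WWPNs in this colon-separated hex form, so a parser like this is a common first step when scripting against them.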
Evaluating This Course
Please use the online evaluation system to help improve our courses.

1. Sign in to Hitachi University.
   https://hitachiuniversity/Web/Main

2. Click on My Learning. The Transcript page will open.

3. On the Transcript page, click the down arrow in the Active menu.

4. In the Active menu, select Completed. Your completed courses will display.

5. Choose the completed course you want to evaluate.

6. Click the down arrow in the View Certificate drop-down menu.

7. Select Evaluate to launch the evaluation form.