
Changing Data Protection from nightmares to sweet dreams
Hyper2 Architecture: Complete, Integrated Data Solution

March 2019
MAIKEL KELDER
Benelux Sr. Systems Engineer
The Storage Iceberg

Uncontrolled growth
• 6 ZB in 2016
• 93 ZB in 2025
• 80% unstructured

Too many redundant copies
• Fragmented: 45%–60% of capacity consumed by data copies
• Inefficient: 10–12 copies on average
• Dark data: a $50B problem

Legacy storage can’t keep up
• Doesn’t scale
• Silos
• Inefficient
• Dark data
Confidential & Proprietary
Secondary Storage Pain Points

LACK OF INCREMENTAL SCALABILITY
• Scale-up architectures result in forklift upgrades, overprovisioning, and data migrations
• Scale-out architectures add compute and storage in limited fixed ratios to homogeneous clusters

DATA IS DARK
• Little or no visibility into data that is backed up
• Non-granular index and search capabilities, limited to silos within the secondary infrastructure
• Delays data recovery SLAs

INEFFICIENT SECONDARY INFRASTRUCTURE
• Powerful infrastructure sitting idle most of the time
• Backup software licenses cost as much as the physical storage and compute
• Physical infrastructure cost is high due to lack of commodity hardware (proprietary hardware)

LACK OF CONSOLIDATION
• Siloed solutions, products, and vendors with limited interoperability for backup, archive, and cloud
• Many copies of data continuously read from primary storage over and over again


What Causes Mass Data Fragmentation?

1 FRAGMENTATION ACROSS SILOS — backup & recovery, file & object servers, archiving/LTR, test & development, and search/analytics each run on their own infrastructure.

2 FRAGMENTATION WITHIN SILOS — each silo fragments further into its own software, media/master servers, tape, appliances, NAS shares, indexing, storage, masking, copies, and search servers.

(Original slide: a diagram of five separate infrastructure stacks, one per silo.)
Fragmentation Worsens with More Locations

3 FRAGMENTATION ACROSS LOCATIONS — the same silos (backup, file & object, archiving/LTR, test & development, search/analytics) repeat in every data center, ROBO, and cloud.

4 FRAGMENTATION FROM REDUNDANT COPIES — each location accumulates its own redundant copies of the same data.

(Original slide: the per-silo stacks from the previous slide, duplicated across data centers/ROBOs and clouds.)
Traditional Data Protection Solutions
• Fragmented
• Silos
• Inefficient
• Individual scale-up
• No global dedupe
• Slow recoveries
• A waste of a backup
• Multiple UIs / APIs
• Dark data
• GDPR exposure
• Largely unencrypted

(Original slide: primary storage serving VMware, physical, and database workloads, surrounded by a separate master server, proxy servers, backup target storage, home/users/archive file shares, test & dev copies, analytics/GDPR tooling, and cloud gateways.)
Consolidation Begins
• Entirely non-disruptive
• Unlimited scale-out
• Non-disruptive upgrades
• No downtime
• Global dedupe

(Original slide: Cohesity replaces the backup target storage while the master server, proxy servers, file shares, test & dev, analytics, and cloud gateways remain in place.)
Consolidation Continues
• Backup consolidation
• Cloud consolidation
  • Any NFS or S3 target
  • Disaster recovery
  • Long retention
• File & object consolidation

(Original slide: the master server, proxy servers, and separate backup targets collapse into the Cohesity platform.)
Consolidation Done
• Backup consolidation
• Cloud consolidation
  • Any NFS or S3 target
  • Disaster recovery
  • Long retention
• File & object consolidation

(Original slide: the Cohesity platform now directly serves VMware, physical, and database workloads alongside primary storage.)
Incremental Scale

TRADITIONAL STORAGE
• Siloed storage systems
• Primarily scale-up architectures
• Forklift upgrades and data migrations when EOL
• Limited scale-out of systems (cluster boundaries)

INCREMENTAL SCALE WITH COHESITY
• Fully distributed file system spanning all nodes (SpanFS™)
• Nondisruptively expand, upgrade, and remove nodes
• Independently scale compute and storage
• No forklift upgrades, data migrations, or storage silos


Near Linear Scaling on Massive 256-Node Cluster

UCS C240-M5 – Rear

1. Intel X520-DA2 10Gbps 2-port NIC
2. (2) USB ports
3. (2) onboard 1GbE ports
4. Intel i350 quad-port 1GBase-T NIC
5. VGA port
6. CIMC (IPMI) OOB management port
7. Console port
8. Power supply unit


UCS C240-M5 Cabling (10GbE – SFP+)

(Original slide: each node’s two SFP+ ports split across a pair of 10GbE switches on the same VLAN; the CIMC/IPMI OOB management ports connect to a 1GbE RJ45 switch, with a laptop attached for initial setup only.)


Reference Architecture: Cisco Validated Design Document
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/ucsc240_cohesity_dp.html

Secondary domain: Cisco C240 M5L servers connected to, and managed by, Cisco Fabric Interconnects, alongside the primary domain.

Cisco and Cohesity consolidate all of the enterprise’s primary and secondary data on integrated, hyperconverged platforms for greater simplicity, visibility, agility, and access across clouds.
HyperFlex Data Protection
Fast and Flexible Native Snapshots

• Pointer-based snapshots
• Space-efficient, with no performance penalty vs. VMware redo-log snapshots
• Fast creation and deletion
• Controllable via the native HyperFlex RESTful API
• Fine-grained or coarse-grained: VM-level or VM folder-level
• VAAI-integrated
• Quiesced or crash-consistent
• Usable from the vCenter Snapshot Manager
• Policy-based schedules and retention
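Since the native snapshots are controllable via a RESTful API, a snapshot request reduces to an authenticated HTTP POST. The sketch below only builds such a request; the endpoint path, payload fields, and bearer-token header are illustrative assumptions, not the documented HyperFlex API — consult the HX REST API reference for the real resource names.

```python
import json
import urllib.request

def build_hx_snapshot_request(hx_host, token, vm_id, snapshot_name, quiesce=True):
    """Build (but do not send) an HTTP request for a native HX snapshot.

    NOTE: the endpoint path and payload fields here are hypothetical,
    for illustration only.
    """
    url = f"https://{hx_host}/snapshot/vm/{vm_id}"  # hypothetical endpoint
    payload = {
        "name": snapshot_name,
        "quiesce": quiesce,  # quiesced vs. crash-consistent, per the slide
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# A policy engine would issue one such request per schedule tick, then
# delete snapshots that fall outside the retention window.
req = build_hx_snapshot_request("hx.example.local", "TOKEN", "vm-42", "nightly")
print(req.full_url)
```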
Cohesity/Cisco Snapshot Process

The Cisco HyperFlex cluster is registered with the Cohesity cluster as a storage snapshot provider, alongside other sources.

1. Verify the VM is eligible for an HX native snapshot: it runs on HX and does not span multiple datastores.
2. Attempt to take an HX snapshot using the HX APIs, creating a sentinel and a VM snapshot.
3. If the HX snapshot request fails, fall back to a VMware snapshot.
4. Fetch Changed Block Tracking (CBT) info from VMware and back up the VM from the snapshot using NBD transport.
5. Delete the snapshot through the HX API, updating the sentinel snapshot.
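The five steps above amount to a try-native, fall-back-to-VMware control flow. A minimal sketch, with stand-in provider classes and hypothetical method names, since the slide does not name Cohesity's internal calls:

```python
class HXProvider:
    """Stand-in for the HyperFlex snapshot API (hypothetical methods)."""
    def __init__(self, healthy=True):
        self.healthy = healthy
    def take_snapshot(self, vm_id):
        if not self.healthy:
            raise RuntimeError("HX snapshot request failed")
        return f"hx-snap-{vm_id}"
    def delete_snapshot(self, snap):
        pass

class VMwareProvider:
    """Stand-in for vSphere snapshots plus Changed Block Tracking."""
    def take_snapshot(self, vm_id):
        return f"vmw-snap-{vm_id}"
    def fetch_cbt(self, vm_id, snap):
        return ["block-0", "block-7"]  # toy CBT result
    def delete_snapshot(self, snap):
        pass

def backup_vm(vm, hx, vmware):
    # 1. Eligibility: VM runs on HX and does not span datastores.
    eligible = vm["on_hx"] and len(vm["datastores"]) == 1
    snapshot, provider = None, None
    if eligible:
        try:
            # 2. Native HX snapshot (sentinel + VM snapshot).
            snapshot, provider = hx.take_snapshot(vm["id"]), "hx"
        except RuntimeError:
            snapshot = None  # 3. Fall back to VMware below.
    if snapshot is None:
        snapshot, provider = vmware.take_snapshot(vm["id"]), "vmware"
    # 4. CBT info from VMware; back up changed blocks over NBD transport.
    backup = {"vm": vm["id"],
              "blocks": vmware.fetch_cbt(vm["id"], snapshot),
              "transport": "nbd"}
    # 5. Delete the snapshot through whichever provider created it.
    (hx if provider == "hx" else vmware).delete_snapshot(snapshot)
    return provider, backup

vm = {"id": "vm-42", "on_hx": True, "datastores": ["ds1"]}
print(backup_vm(vm, HXProvider(), VMwareProvider()))               # native path
print(backup_vm(vm, HXProvider(healthy=False), VMwareProvider()))  # fallback path
```

The design point of the fallback is that a backup still succeeds when the storage-native path is unavailable, at the cost of a heavier VMware redo-log snapshot.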
Transform IT with Complete Hyperconvergence
From Edge to Core to Cloud

Copyright 2018 Cohesity Inc. All Rights Reserved


Are you building for the future today? …or just keeping the lights on?

Modernize on a Platform That Will Simplify the Future for Your Data & Apps
Cohesity Software NOW Available on Cisco’s GPL
• Cohesity Subscription, Perpetual (July release), and Add-on Software licenses available
• Licensing is subscription-based, per TB
  • DataPlatform is licensed on usable capacity
  • DataProtect and other licenses are add-ons
• Cohesity Ordering Guide includes CCW estimates for UCS configurations

Cohesity on dCloud
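Because DataPlatform is licensed per usable TB with DataProtect and other products as per-TB add-ons, a back-of-the-envelope cost model is straightforward. The per-TB prices below are placeholder parameters, not Cisco GPL figures:

```python
def annual_license_cost(usable_tb, dataplatform_per_tb, addons_per_tb=()):
    """Estimate yearly subscription cost.

    DataPlatform is licensed on usable capacity; add-ons (e.g. DataProtect)
    layer on top. All per-TB prices are caller-supplied placeholders.
    """
    per_tb = dataplatform_per_tb + sum(addons_per_tb)
    return usable_tb * per_tb

# e.g. 100 TB usable, hypothetical $50/TB platform + $20/TB add-on:
cost = annual_license_cost(100, 50, addons_per_tb=(20,))
print(cost)  # 100 × (50 + 20) = 7000
```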
