Reference Architecture
A Reference Architecture for the design, configuration and implementation of a
VMware Horizon environment.
Dell Cloud Client-Computing Engineering
August 2016
Revisions
Date          Description
May 2014
July 2015     Updated density numbers for ESXi 6.0 and added PowerEdge C4130 (v.6.7)
April 2016
August 2016   Updated list of Thin Clients, added NSX and M60 GPU (v.7.1)
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES.
THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.
Copyright 2016 Dell Inc. All rights reserved. Dell and the Dell logo are trademarks of Dell Inc. in the United States
and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective
companies.
Contents
1 Introduction
  1.1 Purpose
  1.2 Scope
  1.3 What's New
2.1 Introduction
2.2
2.3 Solution Layers
2.4 Local Tier 1
2.5
2.6
3.1 Network
    Dell Networking S3048 (1Gb ToR Switch)
    Dell Networking S4048 (10Gb ToR Switch)
    Brocade 6510 (FC ToR Switch)
    Brocade M5424 (FC Blade Interconnect)
    PowerEdge M I/O Aggregator (10Gb Blade Interconnect)
3.2 Servers
3.4 Storage
    EqualLogic Tier 1 PS6210XS (iSCSI)
    EqualLogic Tier 2 PS6210E
    EqualLogic Configuration
    Compellent Tier 1 SC8000
    Compellent Tier 2 SC8000 or SC4020
    Network Attached Storage (NAS)
3.5 GPUs
    NVIDIA GRID K1 and K2
    NVIDIA Tesla M60
3.6
4.3 Hypervisor Platforms
4.4
5.2
5.3 Virtual Networking
5.4 Scaling Guidance
5.5
6.2
Acknowledgements
About the Authors
Introduction
1.1
Purpose
This document addresses the architecture design, configuration and implementation considerations for
the key components of the architecture required to deliver virtual desktops via VMware Horizon on
VMware vSphere 6.
1.2
Scope
Relative to delivering the virtual desktop environment, the objectives of this document are to:
- Define relevant risks, issues, assumptions and concessions, referencing existing ones where possible.
- Provide a breakdown of the design into key elements such that the reader receives an incremental or modular explanation of the design.
1.3
What's New
2.1
Introduction
Dell Wyse Datacenter solutions provide a number of deployment options to meet your desktop
virtualization requirements. Our solution is able to provide a compelling desktop experience to a range of
employees within your organization from task workers to knowledge workers to power users. The
deployment options for Dell Wyse Datacenter include:
- Graphics Acceleration
- Unified Communications
2.2
In the Shared Tier 1 solution model, an additional high-performance shared storage array is added to handle the execution of the VDI sessions. All compute and management layer hosts in this model are diskless.
2.3
Solution Layers
The Dell Wyse Datacenter Solution leverages a core set of hardware and software components consisting
of five primary layers:
- Networking Layer
- Compute Layer
- Management Layer
- Storage Layer
These components have been integrated and tested to provide the optimal balance of high performance
and lowest cost per user. The Dell Wyse Datacenter stack is designed to be cost effective allowing IT
departments to implement high-performance fully virtualized desktop environments.
Networking
Only a single high-performance Dell Networking 48-port switch is required to get started in the network layer. This switch hosts all solution traffic consisting of 1Gb iSCSI and LAN sources for smaller stacks. Above 500 users, we recommend that 10Gb LAN be used, with iSCSI traffic separated into discrete switching Fabrics. Additional switches are added and stacked as required to provide High Availability for the Network layer.
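To make this guidance concrete, the short sketch below maps a target user count to the switching approach described in this section (combined 1Gb iSCSI/LAN on a single switch up to 500 users, 10Gb LAN with discrete iSCSI Fabrics above that). The helper is purely illustrative; the function name, return values and threshold handling are assumptions, not a published sizing tool.

```python
def network_layer_guidance(user_count: int) -> dict:
    """Illustrative sketch of the network-layer guidance above (hypothetical helper).

    <= 500 users: a single 48-port switch carries combined 1Gb iSCSI and LAN traffic.
    >  500 users: move LAN to 10Gb and separate iSCSI onto discrete switching Fabrics;
                  stack additional switches for High Availability.
    """
    if user_count <= 500:
        return {
            "lan": "combined on a single 48-port ToR switch",
            "iscsi": "combined 1Gb on the same switch, separated by VLAN",
            "ha": "add and stack a second switch as required",
        }
    return {
        "lan": "10Gb LAN on a dedicated switch stack",
        "iscsi": "discrete iSCSI switching Fabric",
        "ha": "stacked/redundant switches in each Fabric",
    }

if __name__ == "__main__":
    print(network_layer_guidance(350))
    print(network_layer_guidance(2000))
```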
Compute
The compute layer consists of the server resources responsible for hosting the Horizon user sessions via the VMware vSphere hypervisor, in either local or shared Tier 1 solution models (local Tier 1, all-flash, pictured below).
Management
VDI management components are dedicated to their own layer so as not to negatively impact the user
sessions running in the compute layer. This physical separation of resources provides clean, linear, and
predictable scaling without the need to reconfigure or move resources within the solution as you grow.
The management layer will host all the server VMs necessary to support the VDI infrastructure.
Storage
The storage layer consists of options provided by EqualLogic (EQL) for iSCSI, Compellent (CML) arrays for Fibre Channel, or software-defined storage to suit your Tier 1 (T1) and Tier 2 (T2) scaling and capacity needs (CML pictured below).
2.4
Local Tier 1
Base Offerings
2.4.1.1
2.4.1.2
2.4.2.1
In the Local Tier 1 architecture, a single Dell Networking switch is shared among all network connections for both management and compute, up to 500 users. Above 500 users, Dell Wyse Solutions Engineering recommends separating the network Fabrics to isolate iSCSI and LAN traffic as well as making each switch stack redundant. Only the management servers connect to iSCSI storage in this model. All Top of Rack (ToR) traffic is layer 2 (L2)/switched locally, with all layer 3 (L3)/routable VLANs trunked from a core or distribution switch. The following diagrams illustrate the logical data flow in relation to the core switch.
2.4.2.2
2.4.2.3
User Scale      ToR LAN   ToR iSCSI   EQL T2   EQL NAS
0-1000          S4048     S4048       6210E
0-1000 (HA)     S4048     S4048       6210E    FS7610
0-6000+         S4048     S4048       6210E    FS7610
2.4.3.1
2.4.3.2
2.4.3.3
2.5
User Scale      Blade LAN (A Fabric)   Blade iSCSI (B Fabric)   ToR 10Gb iSCSI   EQL T2
0-1000 (HA)     IOA                    IOA                      S4048            6210E
0-6000 (HA)     IOA                    IOA                      S4048            6210E
6000+ (HA)      IOA                    IOA                      S4048            6210E
2.5.1.1
2.5.1.2
2.5.1.3
User Scale: 0-500, 500-1000, 0-1000 (HA), 0-3000, 3000-6000, 6000+
ToR LAN: S4048
ToR 10Gb iSCSI: S4048
EQL T1: 6210XS
EQL T2: 6210E
EQL NAS: FS7610
2.5.2.1
2.5.2.2
2.5.2.3
User Scale       LAN Network   FC Network   CML T1   CML T2       CML NAS
0-1000           S4048         6510         SSD
0-6000+ (HA)     S4048         6510         SSD      SSD or 15K   FS8600
2.6
2.6.1.1
2.6.1.2
2.6.1.3
User Scale: 0-500, 500-1000, 0-1000 (HA), 0-6000+
Blade LAN (B Fabric): IOA
Blade iSCSI (A Fabric): IOA
ToR 10Gb iSCSI: S4048
EQL T1: 6210XS
EQL T2: 6210E
EQL NAS: FS7610
2.6.2.1
2.6.2.2
2.6.2.3
User Scale: 0-500, 500-1000, 0-6000+ (HA)
Blade LAN (A Fabric): IOA
Blade FC (B Fabric): 5424
ToR FC: 6510
CML T1: SSD
CML T2: SSD
CML NAS: FS8600
Hardware Components
3.1
Network
The following sections contain the core network components for the Dell Wyse Datacenter solutions.
As general uplink cabling guidance in all cases, TwinAx is very cost effective for short 10Gb runs; for longer runs, use fiber with SFPs.
Dell Networking S3048-ON
- Features: 48 x 1000BaseT, 4 x 10Gb SFP+, redundant hot-swap PSUs & fans; non-blocking, line-rate performance; 260Gbps full-duplex bandwidth
- Options: Open Network Install Environment (ONIE)
- Uses: 1Gb connectivity

Dell Networking S4048-ON
- Features: 48 x 10Gb SFP+, 6 x 40Gb QSFP+, redundant hot-swap PSUs & fans; non-blocking, line-rate performance; 1.44Tbps bandwidth
- Options: Open Network Install Environment (ONIE)
- Uses: 10Gb connectivity
For more information on the S3048, S4048 switches and Dell Networking, please visit: LINK
The Brocade 6510 delivers leading Storage Area Network (SAN) technology while providing pay-as-you-grow scalability to meet the needs of an evolving storage environment.
Brocade 6510
- Features: 48 x 2/4/8/16Gb Fibre Channel ports
- Options: Additional (optional) FlexIO module; up to 24 total ports (internal + external)
For more information on the Brocade 6510 switch, please visit: LINK
Brocade M5424
- Ports on demand, from 12 to 24 ports
For more information on the Brocade M5424 switch, please visit: LINK
PowerEdge M I/O Aggregator (IOA)
- Features: Up to 32 x 10Gb ports + 4 x external SFP+
- Options: 2 optional FlexIO modules
For more information on the Dell IOA switch, please visit: LINK
3.2
Servers
PowerEdge R730
The foundation of the Dell Wyse Datacenter solution portfolio is the best-in-class Dell PowerEdge R730.
This dual socket CPU platform runs the fastest Intel Xeon E5-2600 v4 family of processors, can host up to
24 DIMMs of DDR4 RAM, supports up to 16 x 2.5" SAS disks and can be outfitted with two double-wide GPU accelerators from AMD or NVIDIA. The Dell PowerEdge R730 offers uncompromising performance and scalability in a 2U form factor.
For specific information on VDI-optimized configurations from Dell, please refer to section 5 of this document. For more information on the R730, please visit: Link
PowerEdge M630
The blade server platform recommendation for the Dell Wyse Datacenter solution is the PowerEdge M630.
This half-height blade server is a feature-rich, dual-processor platform that offers a blend of density,
performance, efficiency and scalability. The M630 offers remarkable computational density, scaling up to 24 cores with 2-socket Intel Xeon (Broadwell) processors and 24 DIMMs (768GB) of DDR4 memory in an extremely compact half-height blade form factor.
3.3
Host configuration highlights: 4 x 800GB SSD, iDRAC8 Enterprise, 2 x 750W PSUs.
3.3.3.1
iSCSI
Local Tier 1 Compute Host: PowerEdge R730
3.3.3.2
Each host configuration includes iDRAC8 Enterprise and 2 x 750W PSUs.
FC
Fiber Channel is optionally leveraged as the block storage protocol for Compute and Management hosts
with Compellent Tier 1 and Tier 2 storage. Aside from the use of FC HBAs to replace the 10Gb NICs used
for iSCSI, the rest of the server configurations are the same.
Local Tier 1 Compute Host: PowerEdge R730
Each host configuration includes iDRAC8 Enterprise and 2 x 750W PSUs.
3.3.4.1
The Shared Tier 1 blade server is configured in line with its rack server equivalent. Two network interconnect Fabrics are configured for the blades: the A Fabric dedicated to 10Gb iSCSI traffic and the B Fabric dedicated to 1Gb LAN.
Local Tier 1 Compute Host: PowerEdge R730
3.3.4.2
FC
Fiber Channel is optionally leveraged as the block storage protocol for Compute and Management hosts
with Compellent Tier 1 and Tier 2 storage. Aside from the use of FC HBAs to replace the 10Gb NICs used
for iSCSI, the rest of the server configurations are the same. Please note that FC is only currently
supported using vSphere.
Local Tier 1 Compute Host: PowerEdge R730
3.4
Storage
EqualLogic Tier 1 PS6210XS (iSCSI)
Implementing both high-speed, low-latency solid-state disk (SSD) technology and high-capacity HDDs in a single chassis, the PS6210XS 10GbE iSCSI array is a Dell Fluid Data solution with a virtualized scale-out architecture that delivers enhanced storage performance and reliability that is easy to manage and scale for future needs. For more information please visit: LINK
EqualLogic Tier 2 PS6210E
EqualLogic Configuration
Each tier of EqualLogic storage is managed as a separate pool or group to isolate specific workloads.
Shared Tier 1 arrays used for hosting VDI sessions are managed as one group, while shared Tier 2 arrays used for hosting Management server role VMs and user data are managed in a separate group.
- Simultaneous iSCSI, Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE) front-end interconnects
- Controls SAS and FC drive enclosures, including those with write-intensive and read-intensive SSDs in SLC, MLC and TLC formats
- Scales up to 960 SAS drives in multiple enclosures per dual-controller system and scales out to many systems across numerous sites, monitored by a single console
- Seamlessly integrates with the FS8600 NAS appliance for fully interoperable block and file storage solutions
Compellent Tier 1 storage consists of a standard dual-controller configuration and scales upward by adding disks/shelves and additional discrete arrays. A single pair of SC8000 controllers will support Tier 1 and Tier 2 for up to 2000 knowledge worker users, as depicted below, utilizing all 15K SAS disks. If Tier 2 is separated, then an additional 30% of users can be added per Tier 1 array. Scaling above this number requires additional arrays to be implemented. Additional capacity and performance capability is achieved by adding larger disks or shelves, as appropriate, up to the controllers' performance limits. Each disk shelf requires 1 hot spare per disk type. RAID is virtualized across all disks in an array (RAID10 or RAID6). Please refer to the test methodology and results for specific workload characteristics in section 7. SSDs are added for use in scenarios where boot storms or provisioning speeds are an issue.
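As a rough, hypothetical illustration of the scaling statement above (2000 combined Tier 1/Tier 2 users per SC8000 pair, roughly 30% more when Tier 2 is separated), the sketch below estimates how many Tier 1 array pairs a deployment might need. The function, its parameters and its defaults are illustrative only, not part of the published sizing tables.

```python
import math

def estimate_tier1_array_pairs(total_users: int, tier2_separated: bool = False) -> int:
    """Hypothetical sizing helper based on the guidance above.

    Assumes ~2000 knowledge-worker users per SC8000 controller pair when the pair
    carries both Tier 1 and Tier 2, and ~30% more users per pair when Tier 2 is
    hosted on a separate array.
    """
    users_per_pair = 2000
    if tier2_separated:
        users_per_pair = int(users_per_pair * 1.3)  # +30% per Tier 1 array
    return math.ceil(total_users / users_per_pair)

if __name__ == "__main__":
    print(estimate_tier1_array_pairs(5000))                        # combined T1/T2
    print(estimate_tier1_array_pairs(5000, tier2_separated=True))  # T2 on its own array
```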
Controller: 2 x SC8000 (16GB)
Front-End IO: 2 x dual-port 8Gb FC cards (per controller)
Back-End IO: 2 x quad-port SAS cards (per controller)
Disks: 2.5" 300GB 15K SAS (~206 IOPS each) or 480GB SSD
SCOS (min): 6.3
Controller: 2 x SC8000 (16GB) or 2 x SC4020 (16GB)
Front-End IO: 2 x dual-port 8Gb FC cards (per controller)
Back-End IO: 2 x quad-port SAS cards (per controller)
Sample Tier 2 Scaling Guidance (based on 1 IOPS per user + 5GB minimum):

Users    Controllers   Disk Shelves   Disks   RAW Capacity
500      -             -              -       7TB
1000     -             -              14      14TB
5000     -             -              66      66TB
10000    -             -              132     132TB
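To make the "1 IOPS per user + 5GB minimum" rule of thumb concrete, the sketch below computes the implied Tier 2 IOPS and minimum capacity for a given user count. It reads the 5GB figure as a per-user minimum (before RAID and growth overhead), and the helper itself is illustrative rather than a sizing tool from this document.

```python
def tier2_minimums(users: int) -> dict:
    """Apply the sample Tier 2 guidance above: 1 IOPS per user + 5GB minimum per user."""
    return {
        "users": users,
        "min_iops": users * 1,          # 1 IOPS per user
        "min_capacity_gb": users * 5,   # 5GB minimum per user
    }

if __name__ == "__main__":
    # e.g. 1000 users -> 1000 IOPS and ~5TB minimum usable capacity
    print(tier2_minimums(1000))
```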
3.4.6.1
EqualLogic FS7610
- Features: Dual active-active controllers, 24GB cache per controller (cache mirroring), SMB & NFS support, AD integration
- Scaling: Up to 2 FS7610 systems in a NAS cluster (4 controllers)
3.4.6.2
Compellent FS8600
- Features: Dual active-active controllers, 24GB cache per controller (cache mirroring), SMB & NFS support, AD integration
- Scaling: Up to 4 FS8600 systems in a NAS cluster (8 controllers); FC only
3.5
GPUs
NVIDIA GRID K1 and K2
NVIDIA GRID technology offers the ability to offload graphics
processing from the CPU to the GPU in virtualized
environments, allowing the data center manager to deliver true
PC graphics-rich experiences to more users for the first time.
NVIDIA's Kepler-based GRID K1 and K2 boards are specifically
designed to enable rich graphics in virtualized environments.
GPU Virtualization
GRID boards allow hardware virtualization of the GPU. This
means multiple users can share a single GPU, improving user density while providing true PC performance
and compatibility.
Low-Latency Remote Display
NVIDIA's patented low-latency remote display technology greatly improves the user experience by
reducing the lag that users feel when interacting with their virtual machine. With this technology, the
virtual desktop screen is pushed directly to the remoting protocol.
Maximum User Density
NVIDIA GRID boards have an optimized multi-GPU design that helps to maximize user density. GRID K1
boards, which include four Kepler-based GPUs and 16GB of memory, are designed to host the maximum
number of concurrent users. GRID K2 boards, which include two higher end Kepler GPUs and 8GB of
memory, deliver maximum density for users of graphics-intensive applications.
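As an illustration of how board design drives density, the sketch below derives the maximum number of vGPU users per board from the per-GPU frame buffer and the number of GPUs on the board (four GPUs/16GB on GRID K1 and two GPUs/8GB on GRID K2, per the description above). The profile sizes used are examples and the helper itself is hypothetical.

```python
def max_users_per_board(gpus_per_board: int, fb_per_gpu_mb: int, profile_fb_mb: int) -> int:
    """Hypothetical density estimate: users per GPU are bounded by frame buffer,
    and the board total is users-per-GPU multiplied by the number of GPUs."""
    users_per_gpu = fb_per_gpu_mb // profile_fb_mb
    return users_per_gpu * gpus_per_board

if __name__ == "__main__":
    # GRID K1: 4 GPUs with 4GB (4096MB) frame buffer each; GRID K2: 2 GPUs with 4GB each
    print(max_users_per_board(4, 4096, 512))   # K1 with a 512MB profile -> 32 users
    print(max_users_per_board(2, 4096, 2048))  # K2 with a 2GB profile   -> 4 users
```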
Specs               GRID K1             GRID K2
Number of GPUs      4                   2
Core Clock          850 MHz             745 MHz
Memory              16 GB DDR3          8 GB GDDR5
Max power           130 W               225 W
Aux power           6-pin connector     8-pin connector
PCIe                x16 (Gen3)          x16 (Gen3)
Cooling solution    Passive             Passive/Active
Tesla M60
Number of GPUs: 2
Max power: 300W
Aux power: 8-pin connector
PCIe: x16 (Gen3)
Cooling solution: Passive/Active
3.6
(ThinOS) is Citrix HDX, Microsoft RemoteFX, and VMware Horizon View certified. It also supports
legacy peripherals via an optional USB adapter. For more information, please visit: Link
Dell Chromebook 13
The lightweight, easy-to-use Dell Chromebook 13 helps turn
education into exploration - without the worries of safety or security.
Priced to make 1:1 computing affordable today, Chromebook 13 is
backed by Dell support services to make the most of your budget for
years to come. The Chrome OS and Chrome browser get students
online in an instant and load web pages in seconds. A high-density
battery supported by a 5th Gen Intel CPU provides up to 12 hours
of power. Encourage creativity with the Chromebook 13 and its
multimedia features that include a 13.3" screen, stereo sound and
webcam. For more information, please visit: Link
3.6.11
Software Components
4.1
4.2
VMware Horizon
The solution is based on VMware Horizon which provides a complete end-to-end solution delivering
Microsoft Windows virtual desktops to users on a wide variety of endpoint devices. Virtual desktops are
dynamically assembled on demand, providing users with pristine, yet personalized, desktops each time
they log on.
VMware Horizon provides a complete virtual desktop delivery system by integrating several distributed
components with advanced configuration tools that simplify the creation and real-time management of
the virtual desktop infrastructure. For the complete set of details, please see the Horizon resources page at
http://www.vmware.com/products/horizon-view/resources.html
The core Horizon components include:
View Connection Server (VCS) - Installed on servers in the data center, the VCS brokers client connections, authenticates users, entitles users by mapping them to desktops and/or pools, establishes secure connections from clients to desktops, supports single sign-on, sets and applies policies, acts as a DMZ security server for connections originating outside the corporate firewall, and more.
View Client - Installed on endpoints. This is the software used to create connections to View desktops; it can be run from tablets, Windows, Linux or Mac PCs and laptops, thin clients and other devices.
View Portal - A web portal that provides links for downloading full View clients. With the HTML Access feature enabled, a View desktop can also be run inside a supported browser.
View Agent - Installed on all VMs, physical machines and Terminal Services servers that are used as a source for View desktops. On VMs the agent communicates with the View client to provide services such as USB redirection, printer support and more.
View Administrator - A web portal that provides administrative functions such as deploying and managing View desktops and pools, setting and controlling user authentication, and more.
View Composer - This software service can be installed standalone or on the vCenter server and enables the deployment and creation of linked clone desktop pools (also called non-persistent desktops).
vCenter Server - Provides centralized management and configuration of the entire virtual desktop and host infrastructure. It facilitates configuration, provisioning and management services and is installed on a Windows Server 2008 host (which can be a VM).
View Transfer Server - Manages data transfers between the data center and View desktops that are checked out for use on end users' local systems in offline mode. This server is required to support desktops that run the View client with Local Mode options. Replication and synchronization of offline images are the functions it performs.
4.3
Hypervisor Platforms
VMware vSphere 6
The vSphere hypervisor, also known as ESXi, is a bare-metal
hypervisor that installs directly on top of your physical
server and partitions it into multiple virtual machines. Each
virtual machine shares the same physical resources as the
other virtual machines and they can all run at the same
time. Unlike other hypervisors, all management
functionality of vSphere is done through remote
management tools. There is no underlying operating
system, reducing the install footprint to less than 150MB.
VMware vSphere 6 includes three major layers:
Virtualization, Management and Interface. The Virtualization layer includes infrastructure and application
services. The Management layer is central for configuring, provisioning and managing virtualized
environments. The Interface layer includes the vSphere web client.
Throughout the Dell Wyse Datacenter solution, all VMware and Microsoft best practices and prerequisites
for core services are adhered to (NTP, DNS, Active Directory, etc.). The vCenter 6 VM used in the solution
is a single Windows Server 2012 R2 VM (Check for current Windows Server OS compatibility at
http://www.vmware.com/resources/compatibility) or vCenter 6 virtual appliance, residing on a host in the
management Tier. SQL Server is a core component of the Windows version of vCenter and is hosted on
another VM also residing in the management Tier. It is recommended that View Composer is installed on a
standalone Windows Server 2012 R2 VM when using the vCenter Server Appliance.
For more information on VMware vSphere, visit http://www.vmware.com/products/vsphere
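Because all management functionality of vSphere is exposed through remote management tools and APIs, administrators can also script against vCenter. The short sketch below simply connects to vCenter and lists the ESXi hosts it manages; the hostname and credentials are placeholders, and pyVmomi (the VMware vSphere Python SDK) is an assumption here rather than a component of the solution stack described in this document.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details; replace with your vCenter FQDN and credentials.
ctx = ssl._create_unverified_context()  # lab-only: skips certificate validation
si = SmartConnect(host="vcsa.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Walk the inventory and print every ESXi host managed by this vCenter.
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
finally:
    Disconnect(si)
```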
4.3.1.1
4.4
vGPU Profiles
Virtual Graphics Processing Unit, or GRID vGPU, is technology developed by NVIDIA that enables
hardware sharing of graphics processing for virtual desktops. This solution provides a hybrid shared mode
allowing the GPU to be virtualized while the virtual machines run the native NVIDIA video drivers for better
performance. Thanks to OpenGL support, VMs have access to more graphics applications. When utilizing
vGPU, the graphics commands from virtual machines are passed directly to the GPU without any
hypervisor translation. All this is done without sacrificing server performance and so is truly cutting edge.
The combination of Dell servers, NVIDIA GRID vGPU technology and NVIDIA GRID cards enables high-end graphics users to experience high-fidelity graphics quality and performance for their favorite applications at a reasonable cost.
NOTE: GRID vGPU is supported on NVIDIA K2 cards and is available as a licensed feature on NVIDIA
Tesla M60 cards.
For more information about NVIDIA GRID vGPU, please visit: LINK
NVIDIA Tesla M60 GRID vGPU Profiles:

Card        vGPU Profile   Graphics Memory   Virtual Display   Maximum      Max vGPUs   Max vGPUs   GRID License
                           (Frame Buffer)    Heads             Resolution   Per GPU     Per Card    Required
Tesla M60   M60-8Q         8GB               4                 4096x2160    1           2           GRID Virtual Workstation
            M60-4Q         4GB               4                 4096x2160    2           4           GRID Virtual Workstation
            M60-2Q         2GB               4                 4096x2160    4           8           GRID Virtual Workstation
            M60-1Q         1GB               2                 4096x2160    8           16          GRID Virtual Workstation
            M60-0Q         512MB             2                 2560x1600    16          32          GRID Virtual Workstation
            M60-1B         1GB               4                 2560x1600    8           16          GRID Virtual PC
            M60-0B         512MB             2                 2560x1600    16          32          GRID Virtual PC
            M60-8A         8GB               1                 1280x1024    1           2           GRID Virtual Application
            M60-4A         4GB               1                 1280x1024    2           4           GRID Virtual Application
            M60-2A         2GB               1                 1280x1024    4           8           GRID Virtual Application
            M60-1A         1GB               1                 1280x1024    8           16          GRID Virtual Application

NOTE: Windows guest operating systems are supported with all profiles. The supported 64-bit Linux operating systems are listed in the table below and are only supported by vGPU 2.0 on the M60 cards.
Windows
Linux
Windows 7 (32/64-bit)
Windows 10 (32/64-bit)
Card      Physical GPUs   vGPU Profile   Graphics Memory   Virtual Display   Maximum      Max vGPUs   Max vGPUs   Intended
                                                           Heads             Resolution   Per GPU     Per Card    User(s)
GRID K2   2               K280Q          4GB               4                 2560x1600    1           2           Designer
                          K260Q          2GB               4                 2560x1600    2           4           Designer
                          K240Q          1GB               2                 2560x1600    4           8           Power User
                          K220Q          512MB             2                 2560x1600    8           16          Power User
GRID K1   4               K180Q          4GB               4                 2560x1600    1           4           Entry Designer
                          K160Q          2GB               4                 2560x1600    2           8           Power User
                          K140Q          1GB               2                 2560x1600    4           16          Power User
                          K120Q          512MB             2                 2560x1600    8           32          Power User
For more information about NVIDIA GRID vGPU, please visit: LINK
4.4.1.1
NVIDIA GRID Virtual Applications: Up to 2 displays supporting virtualized Windows applications.
NVIDIA GRID Virtual PC: For users who need a virtual desktop, but also need a great user experience leveraging PC applications, browsers, and high-definition video.
NVIDIA GRID Virtual Workstation: For users who need to use professional graphics applications with full performance on any device, anywhere.
5.1

Role              RAM (GB)   NIC   OS + Data vDisk (GB)   Tier 2 Volume (GB)
VMware vCenter                     40 + 5                 100 (VMDK)
                                   40 + 5
SQL Server                         40 + 5                 210 (VMDK)
File Server                        40 + 5                 2048 (RDM)
Total             28               180                    2358
SQL Databases
The VMware databases will be hosted by a single dedicated SQL 2012 SP1 Server VM (check DB
compatibility at: http://partnerweb.vmware.com/comp_guide2/sim/interop_matrix.php?) in the
Management layer. Use caution during database setup to ensure that SQL data, logs and TempDB are
properly separated onto their respective volumes. Create all Databases that will be required for:
Initial placement of all databases into a single SQL instance is fine unless performance becomes an issue, in which case the databases need to be separated into separate named instances. Enable auto-growth for each DB.
Best practices defined by VMware are to be adhered to, to ensure optimal database performance.
The EqualLogic PS series arrays utilize a default RAID stripe size of 64K. To provide optimal performance,
configure disk partitions to begin from a sector boundary divisible by 64K.
Align all disks to be used by SQL Server with a 1024K offset and then format them with a 64K file allocation unit size (data, logs and TempDB).
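As a quick illustration of the alignment rule above, the snippet below checks that a partition offset and allocation unit size line up with the array's 64K stripe; the values are examples, not measurements from a specific deployment.

```python
KB = 1024
STRIPE = 64 * KB               # EqualLogic PS series default RAID stripe size
PARTITION_OFFSET = 1024 * KB   # 1024K partition offset recommended above
ALLOCATION_UNIT = 64 * KB      # 64K NTFS file allocation unit for data/logs/TempDB

# A partition is aligned when its starting offset is an exact multiple of the stripe size.
print("offset aligned to 64K stripe:", PARTITION_OFFSET % STRIPE == 0)
print("allocation unit matches stripe:", ALLOCATION_UNIT == STRIPE)
```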
DNS
DNS plays a crucial role in the environment not only as the basis for Active Directory but will be used to
control access to the various VMware software components. All hosts, VMs and consumable software
components need to have a presence in DNS, preferably via a dynamic and AD-integrated namespace.
Microsoft best practices and organizational requirements are to be adhered to.
During the initial deployment, give consideration to eventual scaling and to access to components that may live on one or more servers (SQL databases, VMware services). Use CNAMEs and the round robin DNS mechanism to provide a front-end mask to the back-end server actually hosting the service or data source.
5.1.2.1
5.2
Volumes   Size (GB)   Storage Array   Purpose   File System
VDI-1     500         Tier 1                    VMFS or NTFS
VDI-2     500         Tier 1                    VMFS or NTFS
VDI-3     500         Tier 1                    VMFS or NTFS
VDI-4     500         Tier 1                    VMFS or NTFS
Volumes          Size (GB)   Storage Array   Purpose          File System
Management       350         Tier 2                           VMFS
User Data        2048        Tier 2                           RDM/NTFS
User Profiles    20          Tier 2          User profiles    VMFS
SQL DATA         100         Tier 2          SQL              VMFS
SQL LOGS         100         Tier 2          SQL              VMFS
TempDB Data                  Tier 2          SQL              VMFS
TempDB Logs                  Tier 2          SQL              VMFS
SQL Witness                  Tier 2          SQL (optional)   VMFS
Templates/ ISO   200         Tier 2                           VMFS
Specify NICs
5.2.5.1
The queue depth is set to 255 because the ESXi VMkernel driver module and DSNRO can more conveniently control the queue depth.
FC Zoning
Zone at least one port from each server HBA to communicate with a single Compellent fault domain. The
result of this is two distinct FC Fabrics and four redundant paths per server as shown in the diagram below.
Round Robin or Fixed Paths are supported. You can leverage Compellent Virtual Ports to minimize port
consumption as well as simplify deployment. Zone each controller's front-end virtual ports, within a fault
domain, with at least one ESXi initiator per server.
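To illustrate the zoning pattern described above (one port from each server HBA zoned to a single Compellent fault domain, giving two distinct fabrics and four redundant paths per server), the sketch below enumerates the resulting zones for one host. The port names and counts are hypothetical examples, not values from a specific deployment.

```python
# Hypothetical host HBA ports and Compellent fault domains (one fault domain per FC fabric)
hba_ports = {"fabric-A": "host1_hba1_p1", "fabric-B": "host1_hba2_p1"}
fault_domains = {
    "fabric-A": ["cml_fd1_vport1", "cml_fd1_vport2"],
    "fabric-B": ["cml_fd2_vport1", "cml_fd2_vport2"],
}

zones = []
for fabric, hba in hba_ports.items():
    for target in fault_domains[fabric]:
        zones.append((fabric, hba, target))

# Two fabrics x two controller virtual ports per fault domain = four redundant paths
for zone in zones:
    print(zone)
print("paths per server:", len(zones))
```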
5.3
Virtual Networking
Local Tier 1 Rack - iSCSI
The network configuration in this model will vary between the Compute and Management hosts. The
Compute hosts do not need access to iSCSI storage since they are hosting VDI VMs on local disk. Since
the Management VMs are hosted on shared storage, they can take advantage of HA including Live
Migration. The following outlines the VLAN requirements for the Compute and Management hosts in this
solution model:
Following best practices, LAN and block storage traffic is separated in solutions with more than 500 users. This traffic is combined within a single switch in smaller stacks to minimize the initial investment; however, VLANs are required for each traffic type to enable traffic separation. Each Local Tier 1 Compute host has a quad-port NDC which includes both 10Gb and 1Gb interfaces. Configure the LAN traffic from the server to the ToR switch as a LAG.
vDSwitches should be used as desired for VM traffic, especially in larger deployments, to ease the management burden across numerous hosts. In the Local Tier 1 rack model the MGMT hosts connect to shared storage and require additional VMkernel (VMK) ports. Network share values should be configured equally among the VMkernel port groups that share a physical set of network adapters.
The benefit of using a VMware Distributed Switch (vDS) is that it brings a consistent configuration across all
hosts. The vDS is configured at the vCenter level and provides central management and monitoring to all
hosts configured on the vDS.
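The sketch below captures the per-host layout this section describes as a simple declarative structure: LAN and iSCSI kept on separate port groups, LAN uplinks configured as a LAG to the ToR switch, and equal network share values across VMkernel port groups that share adapters. The VLAN IDs, NIC names and port group names are purely illustrative assumptions.

```python
# Hypothetical per-host virtual networking layout for the Local Tier 1 rack model.
host_network = {
    "vSwitch0": {
        "uplinks": ["vmnic0", "vmnic1"],      # LAN uplinks, configured as a LAG to the ToR switch
        "port_groups": {
            "Management": {"vlan": 10, "vmkernel": True},
            "VDI": {"vlan": 20, "vmkernel": False},
        },
    },
    "vSwitch1": {                              # Management hosts only: access to shared Tier 2 iSCSI
        "uplinks": ["vmnic2", "vmnic3"],
        "port_groups": {
            "iSCSI-1": {"vlan": 30, "vmkernel": True},
            "iSCSI-2": {"vlan": 30, "vmkernel": True},
        },
    },
    "vmkernel_share_policy": "equal shares across port groups sharing physical adapters",
}

if __name__ == "__main__":
    import json
    print(json.dumps(host_network, indent=2))
```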
The Compute hosts are configured in the same basic manner, minus the shared storage, with the desktop
VMs connecting to the primary port group on the external vSwitch.
5.3.2.1
vSphere Non HA
Each Local Tier 1 Compute and Management blade host has a 10Gb dual-port LOM in the A Fabric that
will flow through 2 x IOA blade interconnects. The B and C Fabrics are left open for optional expansion.
Connections should pass through the blade mezzanines and interconnects per the diagram below.
Configure the LAN traffic from the chassis interconnects to the ToR switch as a LAG, if possible.
5.3.2.2
vSphere HA
Following best practices, iSCSI and LAN traffic is physically separated into discrete Fabrics for the HA
design. If mgmt. HA is required, the B Fabric is populated with a 10Gb DP NIC connecting through an
additional pair of IOA blade interconnects. Network share values should be configured equally among the
VMKernel port groups that share a physical set of network adapters.
o Management VLAN: Configured for hypervisor Management traffic - L3 routed via core switch
o Live Migration VLAN: Configured for Live Migration traffic - L2 switched only, trunked from Core
o iSCSI VLAN: Configured for iSCSI traffic - L2 switched only via ToR switch
o VDI Management VLAN: Configured for VDI infrastructure traffic - L3 routed via core switch
A VLAN for iDRAC is configured for all hardware management traffic - L3 routed via core switch
Following best practices, iSCSI and LAN traffic is physically separated into discrete Fabrics. Each Shared
Tier 1 Compute and Management host has a quad port NDC with 4 x 10Gb SFP+ ports each. iSCSI is
isolated onto its own vSwitch with redundant ports to guarantee storage bandwidth. Configure the LAN
traffic from the server to the ToR switch as a LAG.
The same basic configuration applies to rack or blade servers although the physical NIC and switching
components differ. Network share values should be configured equally among the VMkernel port groups
that share a physical set of network adapters.
Shared Tier 1 FC
Using Fiber Channel based storage eliminates the need to build iSCSI into the network stack but requires
additional storage fabrics to be built out. The network configuration in this model is identical between the
Compute and Management hosts. The benefits of shared storage are available to all hosts such as Live
Migration and HA. The following outlines the VLAN requirements for the Compute and Management hosts
in this solution model:
FC and LAN traffic are physically separated into discrete switching Fabrics. Each Shared Tier 1 Compute
and Management host has a quad port NDC (4 x 10Gb) as well as 2 x 8Gb dual port FC HBAs. LAN traffic
from the server to the ToR switch is configured as a LAG.
The same basic configuration applies to rack or blade servers although the physical NIC and switching
components differ. Network share values should be configured equally among the VMkernel port groups
that share a physical set of network adapters.
VMware NSX
Dell and VMware's Software Defined Datacenter (SDDC) architecture goes beyond simply virtualizing servers and storage; it also extends into the network. VMware NSX is a network virtualization platform
deployable on any IP network that is integrated with vSphere Virtual Distributed Switching and provides
the same features and benefits to networking as the ESXi hypervisor does to virtual machines. NSX
provides a complete set of logical networking elements and services, including logical switching, routing, firewalling, load balancing, VPN, quality of service (QoS), and monitoring. These services are provisioned in virtual networks through any cloud management platform leveraging the NSX APIs. Through Dell's open
networking, companies are best able to take advantage of this disaggregation of a virtual network overlay
and an open physical underlay. Building a zero-trust security model is easy with NSX as each virtualized
workload can be protected with a stateful firewall engine providing extreme policy granularity. Any VM in
the datacenter can be rigorously secured or isolated if compromised, especially useful for virtual desktops
to prevent malicious code from attacking and spreading through the network.
VMware NSX is implemented via a layered architecture consisting of data, control and management
planes. The NSX vSwitch exists within and requires the vSphere Distributed Switch to abstract the physical
network while proving access-level switching in the hypervisor. NSX enables the use of virtual load
balancers, firewalls, logical switches and routers that can be implemented and scaled seamlessly to suit
any deployed architecture. VMware NSX complements Dell Networking components deployed ToR,
leaf/spine or at the core.
Accelerated Innovation

Open Networking Platform: All Dell Networking data center switches support the Open Network Install Environment (ONIE), allowing customers to choose between multiple operating systems and meet their unique needs.
Hardware VTEP Gateway: Layer 2 gateway through VXLAN Tunnel End Points (VTEP) bridges virtual and physical infrastructures.
Virtual Switching: VXLAN-based network overlays enable logical layer 2 overlay extensions across a routed (L3) fabric within and across data center boundaries.
Virtual Routing:
Distributed Firewalling:
Load Balancing: L4-L7 load balancer with SSL offload and pass-through, server health checks, and App Rules for programmability and traffic manipulation.
For more information on VMware NSX and integrated offers from Dell Networking please see the Dell
Networking Solution Brief and the Reference Architecture.
5.4
Scaling Guidance
- The components are scaled either horizontally (by adding additional physical and virtual servers to the server pools) or vertically (by adding virtual resources to the infrastructure).
- Eliminate bandwidth and performance bottlenecks as much as possible.
- Allow future horizontal and vertical scaling with the objective of reducing the future cost of ownership of the infrastructure.
Component: Virtual Desktop Host/Compute Servers

Component: View Composer
Scalability: Additional physical servers added to the Management cluster to deal with additional management VMs.

Component: View Connection Servers
Scalability: Additional physical servers added to the Management cluster to deal with additional management VMs; additional VCS Management VMs.

Component: VMware vCenter
Scalability: Deploy additional servers and use linked mode to optimize management; additional vCenter Management VMs.

Component: Database Services
Metric: Concurrent connections, responsiveness of reads/writes
Scalability: Migrate databases to a dedicated SQL server and increase the number of management nodes.

Component: File Services
Metric: Concurrent connections, responsiveness of reads/writes

5.5
The HA options provide redundancy for all critical components in the stack while improving the
performance and efficiency of the solution as a whole.
5.5.1
- Additional switches added to the existing stack, thereby equally spreading each host's network connections across multiple switches.
- Additional ESXi hosts added in the compute or mgmt layers to provide N+1 protection.
A number of enhancements occur at the Management tier, the first of which is the addition of another host. The Management hosts are then configured in an HA cluster. All applicable Horizon server roles can then be duplicated on the new host, where connections to each will be load balanced via the addition of an F5 load balancer. SQL will also receive greater protection through the addition and configuration of a SQL mirror with a witness.
5.5.2
Manage 10,000 VMs
A single HA cluster will be sufficient to support the Management layer up to 10K users. An additional host
can be used as a hot standby or to thin the load across all hosts in the cluster.
5.5.3
5.5.4
5.5.5
5.5.6
5.6
6.1
Minimum Requirement: Data Networking
Notes: Data networking traffic should be isolated on dedicated NICs and HBAs in each applicable host.

Minimum Requirement: 1Gb/10Gb iSCSI or FC storage, on NL SAS disks minimally
Notes: 6-30 IOPS per user may be required depending on workload. T1 storage should be capable of providing the user IOPS requirement concurrently to all hosted users. File share usage and size of deployment may shift this requirement.
6.2
Minimum Requirement      Notes
10Gbps Ports             Uplink to Core
10Gbps Ports
1Gbps Ports
1Gbps Ports
VLAN Support
Switching Capacity
Stacking Capability      Yes
Workload        Operating System   Quantity
Standard        Windows 8.1        330
Enhanced        Windows 8.1        238
Professional    Windows 8.1        182
Standard        Windows 10         250
Enhanced        Windows 10         170
Professional    Windows 10         140
For detailed validation results and analysis of these reference designs and more, please visit: LINK
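As a simple, hypothetical illustration of how these per-host densities translate into compute host counts, the sketch below divides a target user population by the density for a chosen workload/OS combination and adds one host for N+1 protection (see the HA discussion in section 5.5). The helper and its defaults are illustrative only.

```python
import math

# Per-host densities from the table above (users per compute host)
DENSITY = {
    ("Standard", "Windows 8.1"): 330,
    ("Enhanced", "Windows 8.1"): 238,
    ("Professional", "Windows 8.1"): 182,
    ("Standard", "Windows 10"): 250,
    ("Enhanced", "Windows 10"): 170,
    ("Professional", "Windows 10"): 140,
}

def compute_hosts_required(users: int, workload: str, os_name: str, n_plus_one: bool = True) -> int:
    """Hypothetical helper: hosts needed for a user count at the published density,
    optionally adding one host for N+1 protection."""
    hosts = math.ceil(users / DENSITY[(workload, os_name)])
    return hosts + 1 if n_plus_one else hosts

if __name__ == "__main__":
    print(compute_hosts_required(1000, "Standard", "Windows 10"))   # 1000 users -> 5 hosts with N+1
    print(compute_hosts_required(5000, "Enhanced", "Windows 8.1"))  # 5000 users -> 23 hosts with N+1
```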
Acknowledgements
Thanks to David Hulama of the Wyse Technical Marketing team for his support and assistance with
VMware data center EUC programs at Dell. David is a Senior Technical Marketing Advisor for VMware VDI
solutions at Dell. David has a broad technical background in a variety of technical areas and expertise in
enterprise-class virtualization solutions.
Thanks to Yinglong Jiang and Cormac Woods for their help in the validation of the solution in this RA.
Yinglong is a Senior Solution Engineer on the Dell Wyse Datacenter engineering team, primarily responsible for testing VDI solutions based on Dell platforms, with a particular focus on software-defined storage and hyper-converged data center infrastructure.
Cormac Woods is a Sr. Systems Engineer in the Desktop Virtualization Solutions Group at Dell. Cormac has over 20 years of experience in IT and telecoms product engineering. In addition, Cormac has a deep background in IT systems support in financial and manufacturing environments. In his current role, he has developed many of the solutions provided by the Dell Wyse Datacenter team using VMware Horizon as well as other VDI software environments on multiple Dell server platforms.