Reference Architecture
TECHNICAL WHITE PAPER
Table of Contents
Executive Summary
Workload Test Result Highlights
VMware Reference Architectures
Horizon 6 Solution
Hardware Components
Server
Network
Storage
Software Components
VMware vSphere
VMware Horizon 6 with View
Horizon 6 Reference Architecture
Horizon Pod
Management Block
Desktop Blocks
Networking
Storage
Storage Sizing for Server Workloads
Storage Sizing for Desktop Workloads
Virtual Desktop Storage Workload
Virtual Desktop Storage Capacity
ESXi Hosts
CPU Sizing
Virtual Desktop Memory Sizing
RDSH Memory Sizing
VMware vCenter Server
VMware vSphere Clusters
Virtual Networking
VMware vRealize Operations for Horizon
Architecture
Components
Configuration
Architecture
Components
Configuration
Architecture
Components
Configuration
Virtual Desktop Machine Image Build
Remote Desktop Services Host Configuration
Architecture
Configuration
User Experience
Blast Features
PCoIP Settings
Persona and User Data
Desktop Persistence
Integration
Active Directory
VMware SQL Server
Windows File Services
Availability
Test Results
Functional Testing
Run Phases
Quality of Service
References
Executive Summary
This reference architecture provides guidance for implementing a VMware Horizon 6 deployment that supports
2,000–10,000 users with an existing server and storage infrastructure. Although hardware is specified for 2,000
users, you can scale the deployment up to 10,000 users using the pod and block architecture approach.
This reference architecture combines the technologies of standard rack mount server hardware running on
EMC VNX storage leveraging View Storage Accelerator (to accelerate existing SAN) with VMware ESXi 5.5
and Horizon 6 software to produce a highly efficient, robust, and scalable next-generation virtual workspace
deployment. This document includes information on View, VMware Mirage 5.0, and VMware Workspace
Portal 2.1 running on top of VMware vSphere 5.5.
[Table: workload test result highlights — consolidation ratios for the desktop and RDSH workloads, workload and functional tests passed, access from Workspace Portal passed, and an elapsed time of 14 minutes]
This document describes how to size and configure a solution that encompasses View, Mirage, and
Workspace Portal, as well as the VMware vCenter and vSphere core technologies. You can provision, manage,
and access hosted applications and virtual desktops from a single place quickly and efficiently. The example
solution supports 1,000 hosted application users, 800 stateless virtual desktop users, and 200 persistent virtual
desktop users. As part of the architecture validation, VMware performed functional, operational, and workload
tests to highlight how the entire software stack integrates to provide a complete virtual workspace solution.
The ESXi 5.5 host averaged 78 percent memory usage, with a peak of 79 percent throughout View Planner
testing. The four Windows 2012 RDSH servers averaged 40 percent memory usage, with a peak of 59 percent.
Horizon 6 Solution
The Horizon 6 virtual workspace solution combines the best-of-breed data center and desktop virtualization
technologies.
The high-level infrastructure consists of:
- ESXi hosts with 2.1 GHz Intel E5-2658 or 2.9 GHz E5-2690 processors
- 128 GB RAM per ESXi host
- EMC VNX5500-based NFS storage (20 TB)
- 10 Gigabit Ethernet (GbE) networking
- Windows 7 virtual machines with one vCPU and 1 GB vRAM
- Microsoft Remote Desktop Session Host (RDSH) virtual machines with four vCPUs and 24 GB RAM
[Figure: Horizon 6 solution overview — Horizon Clients (thin client, Mac OS, PC, kiosk, iOS/Android) connect over HTTPS/PCoIP, through the DMZ for external access, to the management, desktop, and RDSH clusters; the management cluster hosts Mirage management, Mirage servers, MS SQL, and Active Directory, with local SSDs in the RDSH cluster and NFS shared storage carrying ESX, vCenter, View, Mirage, and AD traffic]
Hardware Components
This section provides an overview of the hardware components of the architecture.
[Figure 5: Hardware Components — VDI and RDSH virtual machines and management hosts on 1x Supermicro 2027TR chassis with 3x Supermicro X9DRT-HF system boards (16 cores and 128 GB RAM each), running the Horizon 6 server workload VMs]
Server
Supermicro SuperServer provides four hot-pluggable nodes in a 2U form factor. The system is ideal for running
virtualized and cloud computing environments in a highly dense form factor.
The Supermicro SuperServer system includes the following components:
Intel Xeon E5-2600 and E5-2600 v2 processor families
128 GB DDR3 ECC registered memory
Two 300 GB SSDs
Intel 82599EB 10 GB SFI/SFP+ dual-port interconnection for connectivity
Network
The Extreme Summit x670 series switches are versatile, purpose-built, top-of-rack switches that support the
emerging 10GbE-enabled servers in enterprise and cloud data centers.
Benefits include:
- High-density 10GbE switching in a small 1U form factor
- Scalable, with up to 48 ports in a single system and up to 352 ports in a stacked system
- Enterprise-ready: the high-availability ExtremeXOS operating system provides simplicity and ease of operation by using a single OS throughout the network
Storage
All virtual desktops, virtual RDSH servers, management server virtual machines, user profiles, user data, and
Mirage storage use the EMC VNX5500 model for NFS storage. VNX5500 can hold 250 drives, scalable up to
480 TB. It has up to 12 GB system memory at the block level, with support for Fibre Channel (FC), iSCSI, and FC
over Ethernet (FCoE) connectivity. VNX5500 is suitable for those who want to take advantage of enterprise-level storage at a lower TCO.
Note: This reference architecture assumes that the existing server platform, whether it is blade or rack-mount
server, cannot accommodate the VMware Virtual SAN hardware requirements, and therefore will use VNX
as the storage solution. Virtual SAN is a viable solution for Horizon 6. For more information, see the VMware
Horizon with View and Virtual SAN Reference Architecture.
Software Components
This section provides an overview of the software components of the architecture.
VMware vSphere
VMware vSphere is the industry-leading virtualization platform for building cloud infrastructures. It enables
users to run business-critical applications with confidence and respond quickly to business needs.
VMware vSphere accelerates the shift to cloud computing for existing data centers and underpins compatible
public cloud offerings, forming the foundation for the industry's best hybrid cloud model.
VMware Horizon 6 with View
Horizon 6 delivers hosted virtual desktops and applications to end users through a single platform. These
desktop and application services, including RDSH applications, packaged applications with VMware ThinApp,
software-as-a-service (SaaS) applications, and even virtualized applications from Citrix, can all be accessed
from one unified workspace across devices, locations, media, and connections. Leveraging closed-loop
management and optimized for the software-defined data center, Horizon helps IT control, manage, and
protect the Windows resources that end users want, at the speed they expect and with the efficiency that
business demands.
Horizon 6 also provides the ability to manage both virtual and physical desktop images using VMware Mirage.
Mirage allows you to manage persistent, full-clone desktops.
Horizon 6 allows users to access desktops and applications via VMware Workspace Portal. Workspace Portal
also provides IT a central place to entitle and deliver Windows applications, desktops, SaaS applications,
ThinApp packaged applications, and XenApp applications to users.
[Figure: Horizon 6 with View software components — View Connection Server, View Security Server, View Composer, vCenter Server, vRealize Operations Manager, VMware Mirage servers, Workspace Portal VA with ThinApp repository, and Active Directory core infrastructure, delivering Windows 7 full-clone, linked-clone, and 3D desktops, RDSH-hosted desktops and applications, and Mirage-managed physical/containerized desktops]
[Figure: Horizon 6 pod — a desktop block supporting ~1,000 desktops and ~1,000 RD sessions through View and RDSH desktop pools on shared storage, plus a management block server cluster containing the Mirage server, Mirage management server, vRealize Operations Manager, View Security Server, and Workspace Portal]
Horizon Pod
A Horizon pod is a logical administrative entity that can support up to 10,000 users or sessions. You can
increase that limit to 20,000 users or sessions by using 2–4 pods. A pod contains a management block and one or
more desktop blocks. In this reference architecture, the pod supports 2,000 users or sessions.
Management Block
The management block contains all the Horizon server virtual machines.
In customer production deployments, VMware vCenter Server is typically deployed for every 2,000 virtual
desktops. VMware supports up to 10,000 desktop virtual machines in a single vCenter instance, but keeping to
2,000 desktops improves power and provisioning operation times.
VMware supports a maximum of 2,000 concurrent sessions per View Connection Server. An additional View
Connection Server is deployed for redundancy (n+1). Two additional View Connection Servers are paired with
View security servers to provide secure, redundant external access to View desktops. Each security server can
handle up to 2,000 connections.
A single Workspace Portal virtual appliance can scale to extremely high numbers (30,000 users); therefore
we recommend deploying a single instance. You can add virtual appliances for each component to provide
redundancy.
A single Mirage server can handle up to 1,500 managed desktops. You can use multiple Mirage servers to
provide redundancy. A Mirage Management server is also required to manage the Mirage servers and desktop
operations.
A single vRealize Operations Manager virtual appliance can handle up to 10,000 virtual desktops.
You can easily scale out each management component to support 10,000 users within a Horizon pod.
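As a back-of-the-envelope check, the per-component limits above can be turned into instance counts. This is an illustrative sketch, not a VMware sizing tool; the limits dictionary simply restates the figures from this section, and the n+1 spare follows the same redundancy convention used for the Connection Servers.

```python
import math

# Per-instance limits restated from this section (sessions or desktops).
LIMITS = {
    "view_connection_server": 2000,  # concurrent sessions per Connection Server
    "view_security_server": 2000,    # external connections per security server
    "mirage_server": 1500,           # managed desktops per Mirage server
}

def instances_needed(total, per_instance, spares=1):
    """Minimum instances to cover the load, plus n+spares for redundancy."""
    return math.ceil(total / per_instance) + spares

# For the 2,000-user pod described here:
print(instances_needed(2000, LIMITS["view_connection_server"]))  # 2 internal Connection Servers
print(instances_needed(1000, LIMITS["mirage_server"]))           # 2 Mirage servers
```

Applying the same helper at full pod scale (10,000 sessions) yields six Connection Servers, consistent with scaling out each management component.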
The management block has a single vSphere cluster that supports the Horizon server virtual machines shown in
Figure 8.
[Figure 8: Management block vSphere cluster — vCenter Server, View Composer, 2x internal and 2x external View Connection Servers, 2x View Security Servers, 2x Workspace Portal VAs, 2x Mirage Servers, a Mirage Management Server, vRealize Operations Manager UI and Analytics VAs, SQL Server, and Active Directory, hosted on 2x 2 TB LUNs (EMC VNX5500 NFS)]
Desktop Blocks
In a standard View reference architecture design, a desktop block, delineated by a dedicated vCenter instance,
supports 2,000 concurrent sessions. You can architect multiple desktop blocks within a pod to support up to
10,000 concurrent sessions.
In this reference architecture, the desktop block supports 2,000 sessions: 1,000 virtual desktops and 1,000 RD
sessions running on virtual RDSH servers.
The desktop block contains two vSphere clusters to isolate the differentiated workloads of hosted virtual
desktop instances from the RDSH server instances. One cluster supports 800 linked-clone and 200 full-clone
Windows 7 virtual desktops across 11 ESXi hosts. The other cluster supports 32 RDSH virtual machines on 9 ESXi
hosts, sized to support approximately 1,000 hosted application sessions running between four and six applications.
Linked-clone desktop workloads and RDSH virtual machines are stored on the VNX5500 presented as an NFS
datastore. Linked-clone desktops and RDSH servers are part of a pool of resources. If a host fails, users can be
quickly connected to an alternative desktop or server on another host. Shared storage also allows linked-clone
desktops and RDSH servers to be quickly recovered and run on another host in the cluster.
Full-clone desktops are also deployed on the VNX5500 NFS-based datastore. Using shared storage reduces the
impact of potential host failures for dedicated persistent desktop users.
[Figure: Desktop block — the management block vCenter manages an ESXi desktop cluster hosting a Windows 7 full-clone pool (200 desktops) and a Windows 7 linked-clone pool (800 desktops) on 6x 2 TB LUNs, and an RDSH cluster hosting 32x Remote Desktop Services Hosts on 4x 1 TB LUNs, all presented as EMC VNX5500 NFS]
[Figure: Horizon 6 on VMware vSphere — Horizon features (vSGA/vDGA, CBRC, SE sparse disk, linked clones, VAAI, Virtual SAN) built on vSphere infrastructure services: availability (vMotion, Storage vMotion, HA, Fault Tolerance, Data Recovery), security (vShield Zones, VMsafe), vStorage (VMFS, thin provisioning, Storage I/O Control), and scalability (DRS, Hot Add, vNetwork Distributed Switch, Network I/O Control)]
Horizon benefits from proven vSphere features, such as a distributed resource scheduler, high availability,
VMware VMsafe, distributed vSwitch, thin provisioning, transparent page sharing, and memory compression.
Horizon also takes advantage of and integrates with several unique features within vSphere 5.5, including:
- View Storage Accelerator: host-based memory cache of the most commonly read disk blocks, which helps reduce read I/O storms during boot or login events
- Linked clones: single-image management and storage optimization to reduce the desktop storage requirement
- Space-efficient (SE) sparse disks: reclamation of unused disk blocks in linked clones, providing the ability to manage the growth of linked clones over time
- GPU virtualization: support for a wide range of 3D-based use cases, using both shared (vSGA) and dedicated (vDGA) GPU virtualization
- vSphere Storage APIs - Array Integration (VAAI): the ability to offload virtual machine provisioning operations to a storage array
- Virtual SAN: storage-layer abstraction and virtualization that pools local storage resources into a virtual shared storage array
In addition, Horizon can be managed and monitored using vCenter Server and vRealize Operations for Horizon.
Networking
The physical networking infrastructure is standardized on 10GbE. Each host includes a dual port 10GbE card
and a dual port 1GbE card. Each host is connected to a 10GbE Extreme Summit x670 Ethernet switch in its
associated rack. Each Extreme x670 switch is connected to a core 10GbE switch, providing connectivity across
racks. See the Virtual Networking section for more information on virtual machine networking. Configuring a
third-party firewall and load balancing is outside the scope of this reference architecture.
Storage
This reference architecture leverages an existing EMC VNX5500 storage system to host all linked-clone and full-clone desktops, RDSH servers, server workloads, user profiles, user data, and Mirage storage. Local solid-state
drives (SSDs) were not used, but could be, for example, to host RDSH server workloads.
In any virtual desktop deployment, it is critical to use storage acceleration technologies for desktop
performance. Storage acceleration technologies include read/write cache, inline deduplication, I/O optimization,
I/O compression, and storage tiering. Storage acceleration can occur as part of the hypervisor or as part of the
storage solution. To reduce the read I/O requirements on the VNX, View Storage Accelerator caches read I/O
locally on the ESXi host. To reduce the capacity requirement for linked clones, the SE sparse disk format is used
to reclaim unused disk blocks.
Software-defined storage solutions, such as VMware Virtual SAN, can also reduce the impact on or need for
legacy SAN devices by performing acceleration at the ESXi host. Virtual SAN is a viable storage platform for
Horizon and many of the workloads described in this reference architecture. However, this architecture did not
use Virtual SAN to demonstrate how to use existing server platforms that might not support the Virtual SAN
hardware requirements. For more information on Horizon with View running on Virtual SAN, see the VMware
Horizon with View and Virtual SAN Reference Architecture.
[Figure: Storage layout — the EMC VNX5500 presents NFS datastores over 10GbE: 4x 1 TB to the RD Session Hosts, 2x 2 TB to all servers, and 6x 2 TB for full clones, linked clones, and linked-clone replicas, plus 1x 490 GB ISO and 1x 1 TB temp datastores; each host also has 2x 300 GB local SSDs]
Based on the Intel Xeon 5600 processor family, the EMC VNX5500 implements a modular
architecture that integrates hardware components for object-based storage with concurrent support for native
network-attached storage, iSCSI, FC, and FCoE protocols. The series delivers file functionality via 2–8 X-Blade
data movers and block storage via dual storage processors leveraging a full 6 Gb/s SAS disk drive topology.
The EMC VNX5500 has 20 TB of usable disk available. ISO (490 GB) and temp (1 TB) datastores are presented
to ESXi hosts across both clusters. In this reference architecture, VNX is configured to present two 2 TB
datastores via NFS to all hosts in the management cluster. It is also configured to present six 2 TB datastores via
NFS to all hosts in the desktop cluster and four 1 TB datastores to all hosts in the RDSH cluster.
Both the 2 TB and 1 TB datastores provide about 3,000 IOPS, based on the number of disks provided per
datastore. VNX caching features increase the number of IOPS that each datastore can deliver. Work with
your storage vendor to understand each datastore's configuration, sizing, and IOPS capability. Keep in mind
the following sizing and performance calculations, and size for peak average IOPS. When
consulting the storage vendor, ensure that both the front-end IOPS requirement and the RAID-level impact on the
back-end IOPS are understood.
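To illustrate the front-end versus back-end distinction, the sketch below applies a RAID write penalty to a front-end IOPS figure. The user mix, write ratio, and penalty values here are illustrative assumptions, not measurements from this architecture.

```python
def frontend_iops(vms: int, iops_per_vm: float) -> float:
    """IOPS generated by the virtual machines, as seen by the hypervisor."""
    return vms * iops_per_vm

def backend_iops(frontend: float, write_ratio: float, write_penalty: int) -> float:
    """IOPS the array's disks must service: reads pass through unchanged,
    while each front-end write costs `write_penalty` back-end I/Os."""
    reads = frontend * (1 - write_ratio)
    writes = frontend * write_ratio
    return reads + writes * write_penalty

# Example: 168 desktops averaging 12 IOPS each, 80% writes, RAID 5-style penalty of 4.
fe = frontend_iops(168, 12)
print(fe, backend_iops(fe, write_ratio=0.8, write_penalty=4))
```

The same front-end load lands far more lightly on a RAID 10 group (penalty of 2), which is why the RAID level must be part of the vendor conversation.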
Storage Sizing for Server Workloads
All server workloads running in the management block are hosted on the EMC VNX5500 array. The solution
uses 22 server virtual machines for the vSphere components and infrastructure services.
The server workloads require about 2 TB of disk for virtual machine disk format (VMDK) files. Each server
workload also requires a swap file equal in size to the amount of memory allocated to the virtual machine.
Virtual machine swap files total 272 GB because no memory reservation is used. With an additional
20 percent overhead, the total disk requirement is 2.83 TB. The VNX presents two 2 TB NFS datastores to each
host in the management cluster, with room to add server workloads as necessary.
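The arithmetic behind the 2.83 TB figure can be sketched as follows. The "about 2 TB" VMDK total is rounded in the text, so the 2,150 GB input here is an assumption chosen only to reproduce the stated result.

```python
def required_storage_gb(vmdk_gb: float, swap_gb: float, overhead: float = 0.20) -> float:
    """Capacity for VMDK files plus swap files, with proportional headroom."""
    return (vmdk_gb + swap_gb) * (1 + overhead)

# ~2 TB of VMDKs (assumed 2,150 GB here) plus 272 GB of swap, with 20% overhead:
total_gb = required_storage_gb(2150, 272)
print(round(total_gb / 1024, 2), "TB")
```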
Storage Sizing for Desktop Workloads
Storage plays an important role in desktop performance and the user experience. The following tables provide
sample calculations for working out the capacity and performance requirements for datastores hosting desktop
workloads. The tables do not take specific storage optimization or acceleration technologies into consideration.
Consult your storage vendor to validate desktop storage sizing.
In many implementations, the number of virtual machines per datastore is limited more by the I/O
requirements of the virtual machines and the spindle types than by capacity. When considering the
number of virtual machines to place on a single datastore, weigh the following factors in conjunction with
any recommended virtual-machines-per-datastore ratio:
- Types of disks used (SATA, SAS, SSD)
- Typical virtual machine size (including configuration files, logs, swap files, and snapshots)
- Virtual machine workload and profile (specifically, the IOPS)
The following table shows the IOPS for two different types of disks, which affects the overall number of disks
required per datastore.
DISK TYPE      SIZE      IOPS
15K RPM SAS    600 GB    ~150
SSD            300 GB    ~1,500+
[Table: datastore IOPS calculation — 168 virtual machines per datastore, with total throughput based on a mix of 80% normal users and 20% heavy users (approximately 10 and 20 IOPS per desktop, respectively)]
Note: Based on the read/write I/O split, the worst case during steady state (not boot or login storms) is 6,293
IOPS per datastore. The best case is 2,639 IOPS per datastore.
You can use the total IOPS to calculate the number of disks required to back the datastore. For example, based
on the IOPS capability of the disks, between 18 and 43 SAS hard-disk drives (HDDs) would be required, compared
to just one or two SSDs. This number does not take storage caching or acceleration into account.
You can calculate RDSH workloads in a similar manner. The IOPS per RDSH user session can be between 3 and 10
for steady state.
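The disk-count arithmetic above reduces to a ceiling division, sketched below. The text's upper figure of 43 drives appears to include a small rounding allowance beyond the raw result.

```python
import math

def disks_required(datastore_iops: float, iops_per_disk: float) -> int:
    """Spindles needed to back a datastore, ignoring caching and acceleration."""
    return math.ceil(datastore_iops / iops_per_disk)

# Best- and worst-case steady-state IOPS per datastore on ~150 IOPS 15K SAS drives:
print(disks_required(2639, 150), disks_required(6293, 150))
```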
[Table: datastore sizing for full-clone desktops (continued) — total number of datastores based on capacity, at one datastore per 34 virtual machines]
The following table lists the datastore sizing calculations for linked clones.
[Table: datastore sizing calculations for linked clones — maximum virtual machines per datastore of 64–128 for VMFS, 140 with VAAI, or 250+ for NFS, and a capacity figure of at least 376 GB]
[Table: datastore sizing calculations for linked clones (continued) — total number of datastores based on capacity, with 11 hosts per datastore]
The following table lists the datastore sizing calculations for RD Session Hosts.
[Table: datastore sizing calculations for RD Session Hosts — number of OS disks per datastore and total number of datastores based on capacity: one datastore per 8 virtual machines (four 1 TB datastores for the 32 RDSH virtual machines)]
ESXi Hosts
This architecture uses standard rack mount servers with dual socket, 8-core, 2.1 GHz or 2.9 GHz CPUs, and
128 GB RAM running ESXi version 5.5. The desktop and RDSH workloads use the 2.9 GHz hosts, and the
management workloads use the 2.1 GHz hosts.
The hosts are split into three clusters. The management cluster uses 3 hosts, the virtual desktop cluster uses 11
hosts, and the RDSH workload cluster uses 9.
VMware has conducted a number of performance and system tests to validate the scalability of View in terms
of desktop workloads. The results were used to size the hosts for this reference architecture. To refine the
sizing calculations, assess your own user workloads and their CPU, memory, and disk I/O
requirements.
In this reference architecture, virtual desktop users are considered normal office workers, and RDSH users are
considered light office workers (five common applications).
CPU Sizing
Based on VMware testing, experience from field deployments, and industry analysis of RDSH sizing, this
reference architecture uses the recommended specification of four vCPU virtual RD Session Hosts with no CPU
overcommit.
This specification means that a dual-socket host with 8 cores per socket (16 physical cores) can support up to
four 4-vCPU RD Session Hosts on a single ESXi server. Our testing indicates that we can expect approximately
30 light-office-worker sessions per RDSH.
16 physical cores / 4 cores per 4-vCPU RD Session Host = 4 RDSH per
ESXi host, with 30 sessions per RDSH (120 sessions per ESXi host)
VMware testing and field experience show that customers can expect anywhere from 5–10 single-vCPU virtual
desktops per physical core. For a normal office worker workload, we are using eight 1-vCPU virtual desktops per
core.
8 x 1-vCPU virtual desktops per CPU core * 16 cores * 80% (max. CPU utilization) ≈ 100
virtual desktops per host
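Both rules of thumb reduce to simple arithmetic. The sketch below restates the section's figures; the 80% cap yields slightly over 100 desktops, which the architecture rounds down to 100.

```python
CORES_PER_HOST = 16  # dual socket, 8 cores per socket

# RDSH rule: one 4-vCPU RD Session Host per 4 physical cores, no CPU overcommit,
# with ~30 light-office-worker sessions per RDSH.
rdsh_vms = CORES_PER_HOST // 4
rdsh_sessions = rdsh_vms * 30

# Desktop rule: 8 single-vCPU desktops per core, capped at 80% CPU utilization.
desktops = int(8 * CORES_PER_HOST * 0.80)

print(rdsh_vms, rdsh_sessions, desktops)
```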
[Table: desktop CPU sizing — 350 MHz per desktop with 10% vCPU overhead; host CPUs at 2.9 GHz yield 23.2 GHz per 8-core socket and 46.4 GHz per 16-core host, of which 80% utilization gives 37.12 GHz of usable capacity]
[Table: virtual desktop memory sizing — 1,024 MB vRAM per desktop with a 41 MB overhead based on display resolution, for roughly 104 GB of desktop memory per 128 GB host; transparent page sharing provides an estimated 10%–20% saving, against an 80% utilization target]
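A sketch of the per-host memory arithmetic, using values recovered from the table above (1,024 MB vRAM and a 41 MB overhead per desktop, 100 desktops per host); treat this as an illustration of the method rather than a validated budget.

```python
VRAM_MB, OVERHEAD_MB, DESKTOPS = 1024, 41, 100

total_gb = DESKTOPS * (VRAM_MB + OVERHEAD_MB) / 1024   # raw demand per host
usable_gb = 128 * 0.80                                  # 80% target of 128 GB host RAM
after_sharing_gb = total_gb * (1 - 0.10)                # ~10% page-sharing saving

print(round(total_gb, 1), round(usable_gb, 1), round(after_sharing_gb, 1))
```

The raw demand slightly exceeds the 80% target, which appears to be why the sizing assumes a 10%–20% saving from memory sharing.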
[Table: RDSH memory sizing — 512 MB per user session, 30 sessions per RDSH server, 24 GB vRAM per RDSH virtual machine, and approximately 97 GB of memory in use per 128 GB host against an 80% utilization target]
RD Session Host virtual machine specification:

ATTRIBUTE   SPECIFICATION
OS          Windows Server 2012
vCPU        4 vCPUs
vRAM        24 GB
Storage     100 GB

Cluster host counts:

CLUSTER      NUMBER OF HOSTS
Management   3
Desktop      11
RDSH         9
Virtual Networking
In typical customer deployments, a vSphere implementation uses three types of network connections: virtual
machine, management network, and VMkernel. Each type connects to a virtual switch that has one or more
physical adapters (at least two adapters are required for resilience) to provide connectivity to the physical
networks.
[Figure: Virtual networking — dvSwitch1, backed by dual-port 10GbE NICs, carries port groups for management, vMotion (vmk), virtual desktop, RDSH, and server workloads (VMNet-172, VMNet-10), external traffic, and EMC VNX NFS storage]
The Horizon environment has a distributed vSwitch (dvSwitch) to handle ESXi management, Horizon
workloads, NFS, and VMware vSphere vMotion traffic. The dvSwitch uses dual-port NICs connected to redundant
switches, providing resiliency across network adapters.
The port groups and VLANs created on each ESXi host are shown in the following figure.
[Figure: vRealize Operations for Horizon architecture — V4H desktop agents in the desktop VMs send desktop metrics (PCoIP, CPU, memory, disk, session info) to the View adapters in the vRealize Operations Manager 5.7 vApp; View topology and events flow from the View Events Database on the database server; vCenter Server supplies vSphere metrics (ESXi, VM, datastore, data center); results are presented in the View dashboards]
Components
VMware vRealize Operations for Horizon consists of two SUSE Linux Enterprise 11 (64-bit) virtual appliances
that support 1,000 virtual desktops. The analytics appliance collects data from vCenter Server, VMware vCenter
Configuration Manager, and third-party sources, such as metrics, topology, and change events. Raw data is
stored in a scalable file system database (FSDB). The Web UI appliance allows you to access the results of the
analytics and the Administration Portal to perform management tasks.
ATTRIBUTE   WEB UI APPLIANCE            ANALYTICS APPLIANCE
OS          SUSE Linux Enterprise 11    SUSE Linux Enterprise 11
vCPU        4 vCPUs                     4 vCPUs
vRAM        11 GB                       14 GB
Storage     100 GB                      800 GB
Configuration
VMware vRealize Operations for Horizon is configured as described in the installation guide with no additional
modifications. After deploying the virtual appliance, the configuration steps are:
1. On the Admin Web console Update tab, deploy the vCenter Operations Manager for Horizon PAK file to
add the custom dashboards.
2. Log in to vCenter Operations Manager for Horizon and create the adapter instance.
3. Select the full metric set and set pairing credentials for the broker agent.
4. Install the broker agent on a View Connection Server.
For most environments, it is necessary to dedicate a View Connection Server for the broker agent.
Figure 16: VMware vRealize Operations for Horizon Broker Agent Configuration
[Figure: Horizon Clients (thin client, Mac OS, PC, kiosk, iOS/Android) connect over HTTPS (through the DMZ for external users) to https://myworkspace.company.com; two load-balanced Workspace Portal virtual appliances in the management cluster integrate with the ThinApp repository, an Oracle/vPostgres database, Active Directory, View Connection Servers, View RDSH apps and desktops, and View virtual desktops]
Access to Workspace Portal is via HTTPS, from anywhere, including from within a View or RDSH desktop.
Workspace Portal supports both internal and external access. Users connect to Workspace Portal to
access their applications and desktops. All Workspace Portal components sit within the internal network.
When launching a View desktop, RDSH desktop, or hosted application, Workspace Portal launches Horizon
Client if it is available. Alternatively, View desktops can be accessed from an HTML5 browser if Horizon
Client is not installed.
You can use a third-party load balancer to provide highly available access to multiple Workspace Portal virtual
appliances. Do not deploy the Workspace Portal virtual appliance in the DMZ.
Components
Workspace Portal 2.1 is composed of a single virtual appliance that can be duplicated for scaling purposes.
[Figure: each Workspace Portal virtual appliance runs on SLES and contains an API layer, core services and connector services running on tcserver, and a vPostgres database; multiple identical appliances can be deployed side by side]
The Workspace Portal virtual appliance presents a single, user-facing domain for access to Workspace Portal
for both users and administrators. The Workspace Portal virtual appliance is the single point of entry for all
purposes. It contains all the components for integrating with Horizon with View or third-party solutions.
ATTRIBUTE    VIRTUAL APPLIANCE SIZING
vCPU
RAM          8 GB
HDD          72 GB
Table 12: Sizing for a Single Workspace Portal VA for 30,000 Users
Configuration
Workspace Portal virtual appliances get their time from the ESXi hosts on which they run. Before
installing Workspace Portal virtual appliances, make sure that the time settings across all ESXi hosts are
accurate and have no skew, because skew can affect the Security Assertion Markup Language (SAML)
configuration.
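Clock skew can be sanity-checked before installation. The helper below is a hypothetical sketch (not a VMware tool): it takes clock readings collected from each ESXi host at roughly the same instant and flags skew beyond an assumed 5-second tolerance.

```python
from datetime import datetime, timedelta

# Illustrative tolerance only: SAML assertions are timestamped, so even a
# few seconds of skew between hosts can cause validation failures. The
# 5-second figure is an assumption, not a VMware-documented limit.
MAX_ALLOWED_SKEW = timedelta(seconds=5)

def max_pairwise_skew(host_clocks: dict) -> timedelta:
    """Return the largest clock difference among the supplied hosts.

    host_clocks maps a host name to the datetime it reported when polled.
    """
    times = sorted(host_clocks.values())
    return times[-1] - times[0]

def check_skew(host_clocks: dict) -> bool:
    """True if every host is within the allowed skew of every other host."""
    return max_pairwise_skew(host_clocks) <= MAX_ALLOWED_SKEW

# Example readings gathered at (approximately) the same instant:
clocks = {
    "esxi-01": datetime(2015, 6, 25, 12, 0, 0),
    "esxi-02": datetime(2015, 6, 25, 12, 0, 2),
    "esxi-03": datetime(2015, 6, 25, 12, 0, 1),
}
```

In practice the readings would come from each host's NTP-synchronized clock; pointing all hosts at the same NTP source is the usual way to keep skew near zero.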
SAML 2.0 authentication is configured across the participating View Connection Servers. After SAML 2.0
authentication is configured, the View Connection Servers are added to the connector virtual appliance used for
synchronization operations.
The View Client Access URL is configured in the Workspace Admin Console interface (Network Ranges) to point
to the load balancer in front of the participating View Connection Servers so that all traffic is load balanced.
The virtual appliance used for single sign-on (SSO) via Kerberos is joined to the domain, and Windows
authentication is enabled on the administrative interface, providing users a seamless experience without
prompts when accessing resources.
[Figure: View logical architecture — external clients reach View security servers in the DMZ, while internal clients connect directly over PCoIP (UDP 4172) to RDSH servers and virtual desktops; the management block also contains Active Directory, file/print/ThinApp services, View Connection Servers, View Composer, SQL, vRealize Operations for Horizon, vCenter, and ESXi hosts, administered through the View Administrator console and the vSphere Client]
View Connection Server handles authentication to Active Directory and then brokers a connection to a virtual
desktop, RDSH desktop, or hosted application using PCoIP, or HTML5 when connecting from a Web browser.
For external users, PCoIP traffic is forwarded by the View security server to the desktop session. For internal
users, the client is connected directly to the desktop session.
If a desktop is not available, View Connection Server can provision additional desktops automatically via
vCenter Server. Entitling users or a group to preconfigured pools of desktops in View Administrator enables
automatic provisioning. View Composer minimizes storage requirements by using linked clones for virtual
desktops.
View easily scales by adding more View Connection Servers or security servers. Each View Connection
Server can handle up to 2,000 concurrent connections. Additional View Connection Servers also provide high
availability.
View Connection Servers and security servers are installed in the management block. A Horizon pod can
support up to seven View Connection Servers, not to exceed 10,000 concurrent sessions. Up to four View
security servers per View Connection Server are permitted.
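These pod limits lend themselves to a quick sizing sanity check. The sketch below is illustrative only: the per-server, per-pod, and server-count figures come from the limits above, while the N+1 spare is a common availability convention rather than a product requirement.

```python
import math

SESSIONS_PER_CONNECTION_SERVER = 2000   # per the pod limits above
MAX_CONNECTION_SERVERS_PER_POD = 7
MAX_SESSIONS_PER_POD = 10000

def connection_servers_needed(concurrent_sessions: int, spare: int = 1) -> int:
    """Connection Servers for a given load, plus N+1 spares for availability.

    Raises ValueError when the load cannot fit in a single Horizon pod.
    """
    if concurrent_sessions > MAX_SESSIONS_PER_POD:
        raise ValueError("Load exceeds a single pod; deploy additional pods.")
    servers = math.ceil(concurrent_sessions / SESSIONS_PER_CONNECTION_SERVER) + spare
    if servers > MAX_CONNECTION_SERVERS_PER_POD:
        raise ValueError("Server count exceeds the 7-server pod limit.")
    return servers
```

For the 2,000-user environment in this paper, the formula yields two Connection Servers: one to carry the load plus one spare, matching the redundancy guidance in the Availability section.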
It is recommended to deploy one vCenter Server per desktop block, along with a single instance of View
Composer. View Composer can be installed on vCenter Server or be standalone.
Components
View consists of the following components:
Horizon Client: Horizon Clients are available for Windows, Mac, Ubuntu Linux, iOS, and Android to provide
the connection to remote desktops from your device of choice. By installing Horizon Client on each endpoint
device, end users can access their virtual desktops from smartphones, zero clients, thin clients, Windows PCs,
Macs, and iOS and Android mobile devices. Unity Touch for Horizon Clients makes it easier to run Windows
apps on iPhone, iPad, and Android devices.
View Connection Server: View Connection Server streamlines the management, provisioning, and deployment
of virtual desktops. Administrators can centrally manage thousands of virtual desktops from a single console.
End users connect through View Connection Server to securely and easily access their personalized virtual
desktops. View Connection Server acts as a broker for client connections by authenticating and directing
incoming user desktop requests.
View security server: A View security server is an instance of View Connection Server that adds an additional
layer of security between the Internet and your internal network. Outside the corporate firewall, in the DMZ,
you can install and configure View Connection Server as a View security server. Security servers in the DMZ
communicate with View Connection Servers inside the corporate firewall. Security servers ensure that the only
remote desktop traffic that can enter the corporate data center is traffic on behalf of a strongly authenticated
user. Users can only access the desktop resources for which they are authorized.
View Composer: View Composer is an optional service that enables you to manage pools of like desktops,
called linked-clone desktops, by creating master images that share a common virtual disk. Linked-clone
desktop images are one or more copies of a parent virtual machine that share the virtual disks of the parent,
but which operate as individual virtual machines. Linked-clone desktop images can optimize your use of
storage space and facilitate updates. You can make changes to a single master image through VMware vSphere
Client. These changes trigger View Composer to apply the updates to all cloned user desktops that are linked
to that master image, without affecting users' settings or persona data.
View Agent (including Remote Experience Pack): The View Agent service communicates between virtual
machines and Horizon Client. You must install View Agent on all virtual machines managed by vCenter Server
so that View Connection Server can communicate with them. View Agent provides features such as connection
monitoring, virtual printing, persona management, and access to locally connected USB devices. View Agent is
installed in the guest OS.
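The storage saving from View Composer linked clones can be approximated with simple arithmetic. In the sketch below, the 40 GB image size matches the Windows 7 desktop specification used later in this paper, while the 5 GB average delta-disk size is purely an illustrative assumption (actual delta growth depends on workload and refresh cadence).

```python
# Storage comparison for the two pool types used in this paper.
# 40 GB image size: from the desktop specification in this document.
# 5 GB average delta disk: illustrative assumption only.

def full_clone_storage(desktops: int, image_gb: int) -> int:
    """Every full clone carries a complete, independent copy of the image."""
    return desktops * image_gb

def linked_clone_storage(desktops: int, image_gb: int, delta_gb: int) -> int:
    """Linked clones share one replica disk and add only a per-VM delta disk."""
    return image_gb + desktops * delta_gb

full = full_clone_storage(800, 40)         # 800 independent 40 GB disks
linked = linked_clone_storage(800, 40, 5)  # one 40 GB replica + 800 deltas
savings = full - linked
```

Even with a generous delta allowance, the 800-desktop linked-clone pool needs roughly an eighth of the capacity of equivalent full clones, which is why View Composer is used for the floating pool in this design.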
View requires Active Directory for authentication and vCenter Server for virtual desktop provisioning and
management. SQL Server is required by vCenter Server, View Composer, and View Connection Server for
database purposes.
COMPONENT            QUANTITY                     vCPU    vRAM    HDD
                     4 (2 internal, 2 external)           16      50 GB
                                                          16      40 GB
View Composer                                             16      30 GB
                                                          10      140 GB
SQL Server                                                16      140 GB
RDSH server          32                                   24      40 GB
Windows 7 desktops   1,000                                        40 GB
Configuration
You use the Web-based View Administrator console to configure and manage View. You can also configure
View Connection Servers and security servers from the console. This reference architecture uses the following
settings:
Global Policies
View is configured to allow USB and PCoIP hardware acceleration, but to deny multimedia redirection (MMR).
MMR is out of the scope of this reference architecture.
View Configuration Settings
All View Connection Servers and security servers are added to the View instance to create the View pod. Each
externally facing View Connection Server is paired with two security servers.
A ThinApp repository was not configured. Instead, Workspace Portal is used to access ThinApp packaged
applications.
An event database is configured and implemented on a standalone SQL Server.
View Connection Server Settings
Workspace Portal is the delegated authentication mechanism for View. The SAML authenticator is set to the
externally facing fully qualified domain name that resolves to the load-balanced IP address fronting the
Workspace Portal Gateway virtual appliances.
vCenter Settings
vCenter is configured to reclaim virtual machine disk space (for SE sparse disks). View Storage Accelerator is
enabled with a 1 GB host cache. Concurrent operation limits were set to 32, 50, 32, and 32. These limits can be
increased based on the storage device capabilities.
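These limits act as admission throttles: once the configured number of operations is in flight, further requests queue. The sketch below is a generic illustration of that pattern in Python; it is not vCenter or View code, and the limit of 32 simply mirrors the settings above.

```python
import threading

MAX_CONCURRENT_OPS = 32  # mirrors the concurrent operation limits above

slots = threading.BoundedSemaphore(MAX_CONCURRENT_OPS)
lock = threading.Lock()
in_flight = 0   # operations currently running
peak = 0        # highest observed concurrency

def provision_desktop(name: str) -> None:
    """Simulated provisioning task gated by the concurrency throttle."""
    global in_flight, peak
    with slots:                 # blocks once 32 operations are in flight
        with lock:
            in_flight += 1
            peak = max(peak, in_flight)
        # ... real work (clone, customize, power on) would happen here ...
        with lock:
            in_flight -= 1

# 100 requested operations; the semaphore admits at most 32 at a time.
threads = [threading.Thread(target=provision_desktop, args=(f"vm-{i}",))
           for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Raising the limit trades faster bulk provisioning against heavier simultaneous load on the storage device, which is why the text ties these settings to storage capabilities.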
Resources
The created application farm, AppFarm01, consists of 32 RDSH servers. The farm is used for all RDSH desktop
and application sessions.
An RDSH desktop pool allows users to access a Windows 2012 RDSH desktop via PCoIP. An application
pool was created for each application tested and assigned to AppFarm01, again using PCoIP as the protocol.
An automated floating desktop pool with 800 Windows 7 linked-clone desktops is provisioned with View
Composer to enable load testing. No persistent or disposable disks are used. Replica and OS disks are
stored on the same NFS datastore. The default settings are used for the advanced storage options.
Another automated dedicated desktop pool with 200 Windows 7 full-clone desktops is provisioned using
vCenter Server. The full clones are deployed across the six 2 TB NFS datastores.
View Storage Accelerator is enabled to regenerate the manifest every 7 days.
ATTRIBUTE          SPECIFICATION
Desktop OS
Hardware
CPU
Memory             1024 MB
Memory reserved    256 MB
Video RAM          Up to 128 MB
3D graphics        Off
NICs               VMXNet3 Adapter
                   40 GB
ATTRIBUTE          SPECIFICATION
Desktop OS
Hardware
CPU
Memory             24 GB
Memory reserved    0 MB
Video RAM          128 MB
NICs               VMXNet3 Adapter
                   40 GB C:
                   174 GB View Planner workload data (not required outside of testing)
Architecture
Mirage consists of a Mirage Management server, Mirage server, and Windows file server, which are used to
manage and store data from Mirage clients (endpoints). Endpoints can be physical or virtual desktops (full
clones only).
[Figure: Mirage architecture — internal physical endpoints, external physical endpoints (connecting through the Mirage Edge Gateway), and full-clone virtual desktops connect to Mirage servers running on ESXi; the Mirage Management server, Mirage Console, Windows file server, SQL, and Active Directory complete the management infrastructure]
To manage View desktops with Mirage, you need the following desktop virtual machines:
Reference Windows desktop virtual machine for base layer capturing
Windows desktop virtual machine for app layer capturing to add updates or new applications
Template Windows desktop virtual machine to create a persistent full-clone pool
COMPONENT          QUANTITY    vCPU    vRAM    HDD
                                       16      190 GB
Mirage Server                          16      190 GB
SQL Server
Table 16: Recommended Sizing for Mirage for a 200 Full-Clone Desktop Deployment
Configuration
For this reference architecture, Mirage Management server is installed on one of the two Windows 2012 R2
virtual machines that also function as Mirage servers in the environment. The Mirage database is hosted on a
Windows 2012 R2 virtual machine that is running SQL Server 2008 R2 Standard Edition. The SQL Server also
hosts databases for View Composer and View events within the environment.
Each Mirage server is configured with a separate 150 GB virtual disk to host the server local cache. This location
is specified during server installation.
The Mirage Console is a plug-in that is installed on and run from Mirage Management server. It is the single pane
of glass for all Mirage management tasks across the environment; creating and deploying reference CVDs, base
layers, and application layers are performed in this management tool. Built-in wizards to perform many of these
tasks streamline management operations.
This Mirage installation manages images for 200 full-clone, persistent desktops in View. A Mirage client is installed
on a Windows desktop virtual machine (the reference desktop). A reference CVD is created, and a base layer
containing 138 Microsoft updates and an application is captured with the Capture Base Layer wizard. Do not
optimize the CVD policy for Horizon at this stage.
The following two services must be enabled for Mirage when optimizing the virtual machine template:
Volume Shadow Copy
Microsoft Software Shadow Copy Provider
The script attached to the VMware Horizon with View Optimization Guide for Windows 7 and Windows 8
disables these services. Either edit the script to enable these services or re-enable them on the template before
deploying your pool of full-clone desktops.
A Mirage client is installed on a Windows desktop machine, which is used to manage application updates. An
administrator can capture an application layer by recording the state of the virtual desktop before and after an
application install or update.
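Conceptually, layer capture is a before-and-after comparison of machine state. The toy sketch below is not Mirage code; it diffs two snapshots of an in-memory file set (a real scan would walk the disk and registry) to isolate what an install added or changed.

```python
import hashlib

def snapshot(files: dict) -> dict:
    """Map each path to a content hash, standing in for a machine scan.

    `files` maps path -> bytes content; a real capture would also cover
    registry state, services, and so on.
    """
    return {path: hashlib.sha256(data).hexdigest()
            for path, data in files.items()}

def capture_layer(before: dict, after: dict) -> dict:
    """Return paths that are new or modified between the two snapshots."""
    return {
        "added":    sorted(set(after) - set(before)),
        "modified": sorted(p for p in before
                           if p in after and before[p] != after[p]),
    }

# Hypothetical machine state before and after an application install:
pre = snapshot({"C:/Windows/system.ini": b"v1",
                "C:/app/old.dll": b"a"})
post = snapshot({"C:/Windows/system.ini": b"v2",
                 "C:/app/old.dll": b"a",
                 "C:/Program Files/NewApp/new.exe": b"binary"})
layer = capture_layer(pre, post)
```

The resulting "layer" — everything added or modified by the install — is what Mirage can later apply to other desktops, independent of their user data.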
The Mirage client is then installed on a template virtual machine to be used for full-clone desktops. A full-clone
dedicated desktop pool can now be created using the template virtual machine and View. Each new virtual
desktop in the pool appears as a pending device in the Mirage Console.
Now the full-clone desktops can be centralized using the CVD upload policy. Ensure that the CVD policy
includes the option Optimize for VMware Horizon View. This option disables uploading of user data, which
would otherwise place considerable network, storage, and CPU load on each desktop; as a consequence, you
cannot revert to a snapshot or restore user files to previous versions. However, user data and applications are
not lost on base layer or application layer updates.
[Figure: an app layer is captured from the VM for app layer capturing, and a base layer from the reference CVD VM; the Mirage Management server and Mirage server deliver the base layer, app layers, and an optional user-defined layer to the template VM and to each VM (VM-1 through VM-n) in the full-clone pool]
Figure 22: Creating a Full-Clone Desktop Pool
An administrator can use the Mirage Management server to apply the base layer or application layers to the
full-clone desktops. Before applying new layers, it is recommended to run a layer conflict report to ensure that
the changes do not interfere with user-installed applications. After the layers are applied, the user can continue
without loss of user data or applications.
User Experience
All client devices use either Workspace Portal or Horizon Client to connect to desktops and applications.
Horizon Client is publicly available for download and can be installed on many different devices. This reference
architecture uses the following Horizon Clients to access desktops and applications:
Apple iPhone 5
Apple iPad 2
Apple MacBook
Android tablet
Windows 7 PC, single monitor
The Horizon Client is required to access View-hosted (RDSH) applications and View RDSH desktops. To access
View virtual desktops, either Horizon Client or a supported HTML5 browser is used.
Blast Features
With Horizon, IT can deliver desktops and applications to end users through a unified workspace using the Blast
features to enable consistently great experiences across devices, locations, media, and connections.
Blast includes the following features:
Adaptive UX: Optimized access across the WAN and LAN through an HTML browser or the purpose-built
desktop protocol PCoIP
Multimedia: High-performance multimedia streaming for a rich user experience
3D: Rich virtualized graphics delivering workstation-class performance
Live communications: Fully optimized unified communications and real-time audio and video support
(Horizon 6 includes support for Microsoft Lync with Windows 8)
Unity Touch: Intuitive and contextual user experience across devices, making it easy to run Windows on
mobile
Local access: Access to local devices, USB, and device peripherals
Not all of the Blast features were tested for this reference architecture; testing covered the adaptive UX,
Unity Touch, and local access features.
PCoIP Settings
PCoIP is the default protocol for View desktops and applications. It can be configured using a Group Policy
Administrative Template. An Active Directory organizational unit (OU) is established for RDSH services
and another for virtual desktops. A single PCoIP policy is set across both OUs.
Integration
This section details the integration considerations for this reference architecture.
Active Directory
The design uses OUs created specifically for View desktops and RD Session Hosts. An OU is an Active Directory
subdivision that contains users, groups, computers, or other OUs.
Dedicated OUs allow View policies to be applied via Group Policy Objects (GPOs) to all machines that View
creates dynamically, without knowing the workstation account names. RD Session Hosts can also be added
manually to an OU to apply RDSH-specific policies.
View has administrative templates for managing View virtual desktops and RD Session Hosts. Administrators
can import these templates and apply them via GPO to the respective OUs. This method provides a
straightforward and consistent way to manage policies specific to View virtual desktops and users.
For this reference architecture:
The created OUs allow management of users, virtual desktops, and RDSH.
Virtual desktops are added automatically to the VirtualDesktops OU when provisioned by vCenter or View
Composer.
RD Session Hosts are added manually to the RDSH Services OU when provisioned using vCenter.
Group policies are applied to RD Session Hosts and virtual desktops for folder redirection, profile
management, and PCoIP.
RD Session Hosts and virtual desktops need Allow Log On Locally and Allow Log on Through Remote
Desktop Services to be set for the appropriate user groups.
Group policy loopback processing is enabled to ensure that policies are applied to users accessing computers
within the RDSH Services or Virtual Desktop OUs.
ATTRIBUTE      SPECIFICATION
Version
OS
vCPU
vMemory        16 GB
vNICs          VMXNet3 Adapter
               40 GB Windows OS
               100 GB mssql01: SQL Server master and msdb databases, VCDB, View Composer database, View events database, and Mirage database (.mdf, .ldf)
ATTRIBUTE              SPECIFICATION
Authentication method  SQL authentication
Recovery method        Simple
Database autogrowth    Enabled in 1 MB increments
Database size          5 GB
ATTRIBUTE              SPECIFICATION
Authentication method  SQL authentication
Recovery method        Simple
Database autogrowth    Enabled in 1 MB increments
Database size          <1 GB
ATTRIBUTE      SPECIFICATION
VM hardware
OS
vCPU
vMemory        10 GB
vNICs          VMXNet3 Adapter
               40 GB Windows OS
               100 GB data disk: user home drives (\HomeDrives), View Persona (\Persona), RDS profiles (\RDSProfiles), Mirage Single Instance Store (\Mirage), ThinApps (\ThinApp)
Availability
The system is resilient in the event of a component failure. The design does not cover a disaster
recovery scenario in which the entire site is lost, but it does cover limited component failure.
ATTRIBUTE        SPECIFICATION
Workspace Portal Gateway
Workspace Portal Service virtual appliance
Workspace Portal Connector virtual appliance
View security server: At least two load-balanced View security servers are required for redundancy. If a server fails, users are disconnected from their session. User data is not lost, and a user can reconnect quickly. A third-party load balancer is required.
View Connection Server: At least two load-balanced View Connection Servers are required for redundancy. If a server fails, users are not disconnected from their session. A third-party load balancer is required.
View desktop: If a desktop fails, the user might lose data. A new desktop can be provisioned if the current desktop cannot be fixed. Alternatively, a pool of preprovisioned desktops allows users to quickly connect to another desktop.
RD Session Host: Users are disconnected from their session. View supports RDSH farms in which multiple RD Session Hosts are pooled for desktop or application access. Users can reconnect to a different RD Session Host, but might have lost data.
vCenter Server: If vCenter Server fails, View is not affected. Virtual desktops can still be connected, but new desktops cannot be provisioned. Workloads are not balanced across clustered hosts. Desktops cannot be powered on or off.
ESXi host: If a virtual desktop host fails, the user loses the connection to the desktop. The desktop can be migrated to another host in the cluster and started (if using shared storage), and the user can reconnect within minutes. Users might lose data. If all hosts in a View desktop cluster lose connectivity or fail, users assigned to the desktop pools hosted on the affected cluster cannot access a virtual desktop until the cluster is restored.
Management cluster failure: The service is unavailable if the management cluster fails. Users directly accessing virtual desktops and RDSH sessions are not disconnected, but might lose services, such as printing, Active Directory, and user profile data.
Test Results
Testing consisted of manual functional tests to highlight usability and manageability, operational tests to verify
provisioning and administration tasks, and workload testing to validate performance and the user experience.
Functional Testing
Functional testing was performed manually across a number of client devices and included common
administrative tasks.
After Horizon is installed and configured, it takes 14 minutes to set up and provide access to RDSH desktops
and applications. It takes an additional 18 minutes to provision an initial pool of 100 desktops. Users can connect
to desktops or applications in 10 seconds after being authenticated.
FUNCTIONAL TEST    TIME TO COMPLETE                       VALIDATION RESULT
                   7 minutes                              PASSED
                   3 minutes                              PASSED
                   23 minutes                             PASSED
                   6 minutes                              PASSED
                   3 minutes                              PASSED
                   5 minutes                              PASSED
                   2 minutes                              PASSED
                   2 minutes                              PASSED
                   2 minutes                              PASSED
                   10 seconds (6-8 second reconnect)      PASSED
                   8-10 seconds (6-8 second reconnect)    PASSED
                   8-10 seconds (6-8 second reconnect)    PASSED
                   63 minutes                             PASSED
                   89 minutes                             PASSED
                   2 minutes                              PASSED
                   5 minutes                              PASSED
                   10 seconds (6-8 second reconnect)      PASSED
                   10 seconds (6-8 second reconnect)      PASSED
                   10 seconds (6-8 second reconnect)      PASSED
                   8-10 seconds (8-10 second reconnect)   PASSED
                   2 minutes                              PASSED
                   35 minutes                             PASSED
To satisfy View Planner test requirements, Group A operation latencies had to be less than 1 second, and Group
B application latencies less than 6 seconds. The workload passed comfortably: Group A scored 0.513369
seconds, and Group B scored 4.023978 seconds.
TEST GROUP    OPERATION TYPE    RESULT
Group A
Group B
The View Planner workload test performed five test run iterations. During this time, ESXi CPU usage averaged
71 percent, with a peak of 96 percent. The four RDSH servers averaged 70 percent CPU usage, with a peak of 96
percent.
The ESXi 5.5 host averaged 78 percent memory usage, with a peak of 79 percent. The four Windows 2012 RDSH
servers averaged 40 percent memory usage, with a peak of 59 percent.
RDSH server commands per second peaked at 218 during the View Planner workload. Peak reads reached
100 per second, and peak writes 167 per second.
The Mirage server had low resource usage throughout testing, with peak CPU usage of 26 percent and peak
memory usage of 13 percent.
The Mirage server had a peak network transmit of 85 MBps and receive of over 15 MBps.
Average full-clone CPU usage peaked at 57 percent. Average full-clone memory usage peaked at 97 percent
during the base layer assignment operation.
Average full-clone network transmits peaked at 64 KBps, while average network receives peaked at 4485 KBps
during the base layer assignment operation.
The base layer assignment operation goes through an intensive read and write phase, with full-clone virtual
desktop average reads per second peaking at 209 per desktop, and average writes per second peaking at 304
per desktop.
[Figure: management hosts — one Supermicro 2027TR chassis with three Supermicro X9DRT-HF system boards (16 cores and 128 GB RAM each) hosting the Horizon 6 server workload VMs, alongside the VDI and RDSH VMs]
Figure 35: Hardware Components
AREA       COMPONENT    QUANTITY
Server                  20
Storage
Network
AREA       COMPONENT         QUANTITY
Software                     2,000 users
           vCenter Server    Included in Horizon Enterprise Edition
           ESXi              Included in Horizon Enterprise Edition
                             Included in Horizon Enterprise Edition
                             1,000 users
                             1,000 users
[Figure: View Planner test architecture — a Web interface drives the View Planner harness appliance, which orchestrates client management and desktop management of virtual desktops running on physical servers with shared storage, connected over the remote display protocol and managed by VMware vCenter]
GROUP B
GROUP C
AdobeReader: Browse
AdobeReader: Open
7zip: Compress
AdobeReader: Close
Excel_Sort: Open
PowerPoint: SaveAs
AdobeReader: Maximize
Excel_Sort: Save
GROUP A
GROUP B
AdobeReader: Minimize
Firefox: Open
Excel_Sort: Close
IE_ApacheDoc: Open
Excel_Sort: Compute
IE_WebAlbum: Open
Excel_Sort: Entry
PowerPoint: Open
Excel_Sort: Maximize
Word: Open
Excel_Sort: Minimize
Word: Save
GROUP C
Firefox: Close
IE_ApacheDoc: Browse
IE_ApacheDoc: Close
IE_WebAlbum: Browse
IE_WebAlbum: Close
PowerPoint: AppendSlides
PowerPoint: Close
PowerPoint: Maximize
PowerPoint: Minimize
PowerPoint: ModifySlides
PowerPoint: RunSlideShow
Word: Close
Word: Maximize
Word: Minimize
Word: Modify
Table 25: View Planner Operations
Run Phases
For this test, View Planner performed a total of five iterations:
Ramp up (first iteration)
Steady state (second, third, fourth iterations)
Ramp down (fifth iteration)
During each iteration, View Planner reported the latency for each operation performed within each virtual
machine.
Quality of Service
QoS, determined separately for Group A user operations and Group B user operations, is the 95th percentile
latency of all the operations in a group. The default thresholds are 1.0 seconds for Group A, and 6.0 seconds for
Group B.
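The QoS computation itself is straightforward. The sketch below uses the nearest-rank method for the 95th percentile; View Planner's exact percentile rule is not documented here, so treat the helper as an approximation.

```python
import math

GROUP_A_THRESHOLD = 1.0  # seconds, default threshold per the text above
GROUP_B_THRESHOLD = 6.0  # seconds

def percentile_95(latencies: list) -> float:
    """Nearest-rank 95th percentile of a list of operation latencies."""
    ordered = sorted(latencies)
    rank = math.ceil(0.95 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

def qos_passes(latencies: list, threshold: float) -> bool:
    """True if the group's 95th-percentile latency is within its threshold."""
    return percentile_95(latencies) <= threshold

# Synthetic Group A sample: 95 fast operations and 5 slow outliers.
# The outliers fall outside the 95th percentile, so the group still passes.
group_a = [0.5] * 95 + [1.4] * 5
```

Because the metric is a 95th percentile rather than a mean, up to 5 percent of operations can exceed the threshold without failing the run — exactly the behavior the synthetic sample above demonstrates.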
References
VMware Product Page
View Documentation
View Technical Resources
VMware End-User Computing Solutions
VMware Desktop Virtualization Services
Horizon with View Optimization Guide for Windows 7 and Windows 8
Antivirus Best Practices for Horizon View 5.x
View Planner 3 Resources
View Storage Accelerator
VMware vCenter Database Performance Improvements and Best Practices for Large-Scale Environments
VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright 2015 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at
http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be
trademarks of their respective companies. Item No: VMW-TWP-HORIZ6REFERENCEARCH-USLET-20150625-WEB