Table of Contents

About Desktop Validated Designs
Introduction
  Building a HITECH Healthcare Infrastructure
  The Challenge of Achieving Meaningful Use
  Securing Protected Health Information
  Ensuring Continuous Availability for Non-Stop Care
  Requirements for High Availability (HA)
Solution
  Summary of Main Findings
  Audience
  Scope
  Solution Purpose
  Business Challenge
  Technology Solution
  About VMware View 4.6
  VMware View 4.6 Architecture
  About Imprivata OneSign Authentication Management
  Application SSO Configuration
  Vblock Infrastructure Platform
  Management Solution
  Key Features
  Virtualization Operating System
  Compute and Network Solution and Components
  Network Infrastructure and Design
  Storage Solution and Components
  Storage Infrastructure and Design
  Application Delivery Control (ADC) and Network Load Balancing (NLB)
  Cisco Application Control Engine (ACE)
  HAProxy
  AlwaysOn Desktop Design Approach
Architecture and Design of VMware View on VCE Vblock Platforms
  Compose/Recompose Best Practices
  Client Access Devices
Solution Validation
  VCE Vblock Configuration Details
  Additional Components Configuration Details
  Unified Computing System Configuration
  LAN Configuration
  SAN Configuration (VCE)
  Storage Array (EMC Celerra NS960) Configuration
  CLARiiON Pools, RAID Groups and LUNs
  Celerra File Systems and NFS Exports
  Microsoft Distributed File System
  VMware Datastores
  Blade Provisioning and OS Installation (VCE)
VMware Virtual Infrastructure
  VMware vSphere ESXi Servers
  VMware vSphere Advanced Parameters
  Datastores
VMware View 4.6
  Virtual Desktop Pools
  Storage Synchronization Configuration
Imprivata OneSign
Test Setup and Configurations
  Test Harness
  AlwaysOn Desktop Configuration
  Stateless Desktop Configuration
  Active/Active Configuration
  Test Harness #2 Using a Proximity Card (Manual)
Validation Results
  Test Harness #1 Validation
  Test Harness #2 Validation
  Additional Considerations
Conclusion
Acknowledgements
References
About VCE
Introduction
The healthcare industry is undergoing a major technological transformation. Electronic medical record (EMR) systems, mobile devices, and other innovations hold the promise of improving the safety and quality of healthcare delivery. As the Department of Health and Human Services states, EMR technology can provide clinicians and patients with better access to more complete and accurate information, which empowers patients to take a more active role in their health1. Many studies also show that EMR systems have the potential to reduce long-term operating costs2 and lower the occurrence of malpractice claims3. As with other clinical applications, electronic medical records must be delivered to the actual point of care, which refers to the ability or requirement to physically bring a solution to the patient's bedside or an exam room. Examples of electronic point-of-care solutions include wall-mounted displays and mobile devices in exam rooms that provide clinicians with access to patient records and computerized physician order-entry systems. These solutions play a central role in enabling healthcare organizations to accelerate their journey from paper-based to electronic healthcare information systems.
1 Source: U.S. Department of Health and Human Services, Electronic Health Records and Meaningful Use. http://healthit.hhs.gov/portal/server.pt?open=512&objID=2996&mode=2
2 Source: Health Data Management, "Study: EHR cuts long-term operating costs." http://www.healthdatamanagement.com/news/ehr-cuts-long-term-operating-costs-41218-1.html
3 Source: Computerworld, "Study: Electronic medical records reduce malpractice claims." http://www.computerworld.com/s/article/9122063/Study_Electronic_medical_records_reduce_malpractice_claims
4 Source: Centers for Medicare & Medicaid Services, Overview of EHR incentive programs. http://www.cms.gov/ehrincentiveprograms/
For example, nurses at many hospitals use a variety of endpoints, logging in to these endpoints at least 50 times during a single shift. Every time a doctor or nurse logs in on a new endpoint, it can take upwards of three minutes to bring up the user's desktop environment, launch the correct application, and find the necessary patient information. At 50 logins per shift, three minutes per login adds up to as much as 150 minutes (2.5 hours) taken away from patient care over the course of a single shift. Even relatively simple tasks, such as quickly analyzing a medical image or dictation, can take up to five times as long because the clinician has to travel to one of a handful of dedicated workstations across the hospital. A physician's time is expensive and valuable to a hospital; when a physician is unproductive because of technology issues, patient care and billing are both affected. This problem is amplified by the fact that attracting and retaining the best and brightest clinicians is a constant and expensive struggle for healthcare organizations. More and more, clinicians entering the workforce are demanding a consumer-like user experience in the workplace, and hospital IT departments are being asked (or required) to support consumer devices such as Apple iPad tablets. Competition for talent among local hospital systems is fierce and expensive, and many younger doctors not only expect this technology, but will also actively seek out organizations with thought leadership in this area.
5 Source: CNN, "VA will pay $20 million to settle lawsuit over stolen laptops' data." http://articles.cnn.com/2009-01-27/politics/va.data.theft_1_laptop-personal-data-single-veteran
reliability and availability to ensure patient safety. If a caregiver has to make a fast medical decision but can't access the patient's records because of a service outage or computer problem, the situation can escalate into a Severity-1 event, and the consequences can be quite serious. In short, EMR systems must be accessible as a non-stop service that is available to clinicians wherever and whenever they need patient information. Unfortunately, the old device-centric approach to endpoint management makes it extremely difficult, if not impossible, to protect every desktop, laptop, hospital computer cart, and mobile device in use. And even if the systems are up and running, patient information is not always immediately available: clinicians still suffer from long login times and password management issues, or waste precious time traveling across the hospital to a machine where they can access data and perform specific tasks. Taken together, the challenges of achieving meaningful use, protecting patient information, and ensuring continuous access to point-of-care solutions have created a dilemma that can't be solved with traditional approaches to desktop and application management. To overcome these and other challenges, healthcare providers need a new approach to point-of-care delivery: one that will enable them to modernize their IT infrastructures so they can improve patient outcomes and get the most from the millions of dollars they are investing in EMR technology. This paper, a collaboration of the VCE company, Imprivata, and Vital Images, details a new reference design for delivering clinical desktops and patient care applications as non-stop services. This reference design for a non-stop point-of-care solution provides all of the benefits, efficiencies of scale, and 24x7 uptime demanded of a public cloud service, delivered from a private cloud environment.
The High Availability/Disaster Recovery (HA/DR) concerns are:
- Uptime: Corresponds closely to Recovery Time Objectives (RTOs). DR solutions should offer quick restores with minimal or no manual steps after the recovery.
- Reliability: Corresponds closely to Recovery Point Objectives (RPOs). Addressing database transactional consistency, avoiding corrupted file systems, and ensuring systems boot when restored are key to addressing this concern.
- Cost: The solution needs to be affordable. The cost of many different software solutions or of replicating storage arrays can prevent DR solutions from getting off the ground.
- Complexity: How many different systems are involved in the strategy? A DR plan is typically thick and complicated in its procedures, so reducing complexity is a goal in itself.
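To make the relationship between these concerns and the formal objectives concrete, here is a small sketch (the numbers are hypothetical illustrations, not figures from this validation) that checks a replication-based design against its targets:

```python
from datetime import timedelta

def meets_objectives(replication_interval, restore_time, rpo_target, rto_target):
    """A replication-based DR design can lose at most one replication
    interval of data in the worst case, so the interval bounds the
    achievable RPO; the end-to-end restore time bounds the achievable RTO."""
    worst_case_data_loss = replication_interval
    return worst_case_data_loss <= rpo_target and restore_time <= rto_target

# Hypothetical design: replicate every 15 minutes, full restore in 30 minutes.
ok = meets_objectives(
    replication_interval=timedelta(minutes=15),
    restore_time=timedelta(minutes=30),
    rpo_target=timedelta(minutes=15),  # tolerate at most 15 min of lost data
    rto_target=timedelta(hours=1),     # service must be back within 1 hour
)
print(ok)  # True: both objectives are met
```

Tightening either objective (for example, an RPO of 5 minutes) would immediately show the design as non-compliant, which is the kind of trade-off the cost and complexity concerns above describe.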
Solution
This document describes the Reference Architecture (RA) for highly available VMware View 4.6 virtual desktops, delivered as the AlwaysOn Point of Care solution, on the Vblock Infrastructure Platform.
end-user attempts to reconnect, their desktop session is failed over (redirected by the Cisco ACE appliances) to the surviving View infrastructure. A failback event within this reference architecture occurs when the failed site is re-enabled (the View environment becomes accessible again). The end-user is not automatically connected to their original (primary) desktop until they disconnect from their failed-over (secondary) desktop and reconnect to the View environment. Folder redirection is accomplished using Microsoft AD GPOs: a GPO maps the end-user's My Documents folder to a DFS global namespace.

Imprivata OneSign automatically and securely connects users to applications that require authentication, and consists of the following parts:
- The OneSign Server hosts the OneSign management system, stores data, provides network services, and more. The server manages OneSign hardware, network, and security settings as well as all appliance functions (e.g., backup/restore), and each appliance is managed independently. OneSign settings are controlled through the intuitive OneSign Administrator. The OneSign Server can be deployed as a pair of physical or virtual appliances; each appliance is connected to the network, and the two are connected to each other by an isolated failover connection. The appliance that handles the daily OneSign traffic is the primary appliance; the backup appliance is called the failover appliance.
- The OneSign Agents reside on client-side workstations to manage user access and upload user activity data to the appliance pair. The Agent handles authentication of users locally through passwords, biometrics, or ID tokens, with or without robust password policies. Once a user authenticates to the OneSign system, the user is automatically signed on to deployed applications as they are launched. The OneSign Agent handles the local transaction of proxying users' credentials to applications and domains. The Agent downloads credential and application information from the OneSign Server at login and queries the server for changes at an interval determined on the OneSign Administrator Properties page.
- The OneSign Administrator is a web-based interface for managing the OneSign Server or the appliance pair.
Audience
This document is intended for sales engineers, field consultants, advanced services specialists, and customers who will configure and deploy a highly available virtual desktop solution with Single Sign-On (SSO) capabilities, delivering desktops as a managed service.
Scope
This document provides an overview of a highly available VMware View 4.6 solution leveraging multiple (in this case, two) Vblock Infrastructure Platforms. Enterprises can now realize desktop scalability and high availability by deploying the AlwaysOn Point of Care solution across multiple datacenters. A typical disaster recovery plan usually ensures only that business-critical applications and environments are protected and recoverable. AlwaysOn Point of Care leverages an Active-Active design model, which ensures an end-user has one or more standby desktops available at all times. Should a site go down, end-users can quickly access their standby desktops by re-launching the View client on their endpoint compute node (laptop, thin terminal, desktop, etc.). This RA illustrates a highly available virtual desktop solution for healthcare professionals, but it can be leveraged in other end-user environments as desired. The following aspects are addressed within this RA:
- An architectural overview.
- Failover validation results.
- Descriptions of the hardware and software components used in the configurations of the compute, storage, network, and virtualization components of the solution.
- Information for configuring a Vblock platform for deploying VMware View 4.6.
Solution Purpose
The VMware AlwaysOn Point of Care Solution on the Vblock platforms allows:
- The consolidation of a desktop environment into one or more infrastructures behind the firewall, making it easy to update the operating system, patch applications, ensure compliance, perform application migrations, and provide support from central locations. The solution delivers a consistent user experience for professionals whether they are within a hospital or at a remote location. Using this solution, less time is spent reacting to regulatory compliance and security issues, and more time can be spent adding value to the healthcare institution/facility.
- The leveraging of site-aware distribution mechanisms and the deployment of multiple desktop infrastructures, so end-users always have access to their desktops.
- A simplified desktop environment with pre-integrated, validated units of infrastructure providing virtualized compute, network, and storage resources. With validated configurations, one can significantly reduce the time spent on testing and development. Therefore, time to production is accelerated.

VCE builds integrated, validated infrastructure called Vblock platforms, built from best-in-class components for compute, network, storage, and virtualization, from Cisco, EMC and VMware (respectively). These platforms allow for massive consolidation and rapid provisioning of compute, network, and storage resources on an on-demand basis.
Business Challenge
The challenges related to traditional desktop deployment and day-to-day administration include lost laptops containing patient data, security breaches related to viruses or hackers, or simply ensuring IT resources can maintain the required service level agreements (SLAs). In addition to the challenges of operational management, IT must also consider implications of broader system-wide issues such as compliance, corporate governance, and business continuity strategies.
Technology Solution
Enterprises are turning to virtual desktop technologies to address the operational and strategic issues related to traditional desktop environments and disaster recovery/business continuance (DR/BC). VMware View provides a virtual desktop environment that is secure, cost-effective, and easy to deploy. VMware View also has the capability to meet the demanding needs of different types of user profiles, whether on the LAN or on the WAN/MAN. Combining VMware View, Cisco ACE, and Imprivata OneSign with the Vblock platform ensures high levels of user experience and desktop availability, which in turn drives acceptance of the virtual desktop deployment within organizations.
If an appliance fails, the other appliances in the site take the load.
Appliances in multiple sites can provide fault tolerance by serving as backups to one another over a WAN. User enrollments, policies, and SSO data are constantly synchronized among sites. If all appliances in a site are inaccessible, OneSign Agents can communicate with appliances in other sites, and the switchover occurs automatically. If an entire site is down, appliances at another site can serve its agents.

- Primary and secondary failover sites: For each site in your OneSign enterprise, you can designate a primary and a secondary failover site. Go to the Sites tab under Properties and drill down to a specific site to set an assignment. You do not need to specify failover rules at the appliance level. OneSign Agents automatically fail over to appliances within the same site first, and only then fail over to an appliance within the specified failover sites. Users are always challenged when failing over to an appliance in another site, because a new OneSign session must be established.

- Agent determination of a home site: Each Agent determines its home site based on the workstation's IP configuration. In the OneSign enterprise topology, each active site has a list of IP address ranges for the subnets belonging to that site. The Agent first tries to match the workstation IP address against the ranges of every site; if a matching range is found, the site owning that range is considered the Agent's home site. If this direct IP matching fails, the Agent analyzes the routing table on the workstation, trying to find a route that covers any IP range of any site. Route lookup helps determine the location of a VPN client outside the corporate network when direct IP address matching does not work. IP ranges are not meant for restricting access; instead, they help determine the preferred site to use. With this in mind, most corporate environments have a non-default route to the corporate network, so for several sites with restrictive IP ranges within the corporate subnet, the first one will be chosen through the route rules.

- Agent failover: Once all servers in the home site become unavailable, Agents switch to using a failover site (if one is specified). After a failover completes, the OneSign session keeps its connection to the appliance in the failover site for the duration of the session's lifetime. Once appliances in the home site become available again, new sessions authenticated on computers belonging to that site start connecting back to the home site; active sessions, however, do not automatically switch back. To force an Agent to fail back, users must lock and unlock their OneSign session, or log out and log back in.
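The home-site selection just described — direct IP matching first, route-table lookup as a fallback — can be sketched as follows. This is an illustrative model, not Imprivata's actual implementation; the site names and subnet ranges are hypothetical:

```python
import ipaddress

# Hypothetical site topology: each site lists the subnets that belong to it.
SITES = {
    "SiteA": ["10.1.80.0/24", "10.1.81.0/24", "10.1.82.0/24"],
    "SiteB": ["10.1.83.0/24", "10.1.84.0/24", "10.1.85.0/24"],
}

def home_site(workstation_ip, routes=()):
    """First try direct matching of the workstation IP against each site's
    IP ranges; if that fails (e.g., a VPN client outside the corporate
    network), fall back to checking which site's ranges are covered by a
    route in the workstation's routing table."""
    addr = ipaddress.ip_address(workstation_ip)
    # Direct IP matching: the site owning a matching range is the home site.
    for site, ranges in SITES.items():
        if any(addr in ipaddress.ip_network(r) for r in ranges):
            return site
    # Route lookup: find a route that covers any IP range of any site.
    for route in routes:  # routes given as CIDR strings, e.g. "10.1.0.0/16"
        net = ipaddress.ip_network(route)
        for site, ranges in SITES.items():
            if any(ipaddress.ip_network(r).subnet_of(net) for r in ranges):
                return site
    return None

print(home_site("10.1.81.20"))                             # direct match: SiteA
print(home_site("192.168.7.5", routes=["10.1.83.0/24"]))   # route lookup: SiteB
```

Note how the route-based fallback returns the first site whose range is covered, mirroring the behavior described above for sites with restrictive IP ranges inside a broader corporate route.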
The workload simulation exercises a knowledge-worker desktop profile with 400 concurrent users accessing the systems. The configuration is set up as hot-standby DR, and no action is required of the end-user: after a failover event, users can instantly retrieve a new stateless desktop and continue their desktop and application workload.
The building blocks of a Vblock Infrastructure Platform comprise core technologies that together provide template-based virtualization. Using template-based virtualization to allocate and provision resources, an enterprise can:
- Reduce performance bottlenecks and configuration errors through automation of resource configuration tasks.
- Enable the rapid deployment of resources using a template, thereby reducing operational expenses and costs.
Management Solution
EMC Ionix Unified Infrastructure Manager/Provision Center (UIM/PC) provides simplified management for Vblock Infrastructure Platforms by combining provisioning as well as configuration, change, and compliance management.
Key Features
- Manage Vblock Infrastructure Platforms as a single entity.
- Integrate with enterprise management platforms.
- Consolidate views into all Vblock Infrastructure Platform components, including compute, network, and storage.
- Achieve system-wide compliance through policy-based management.
- Easily deploy hardware and software, ESXi and infrastructure provisioning, and disaster recovery infrastructure.

With EMC Ionix UIM/PC, you can combine management of the individual components in Vblock Infrastructure Platforms into a single entity, reducing operational costs and easing the transition from physical to virtual to private cloud infrastructure. Centralizing provisioning, change control, and compliance management across Vblock Infrastructure Platforms reduces operating costs, ensures consistency, improves operational efficiency, and speeds deployment of new services.

Compared to building and integrating pieces individually, the advantages of UIM's integrated management solution become obvious. Although some tools integrate basic health and performance data from the compute, network, and storage domains, the operationally critical areas of configuration, change, and compliance management remain largely separate. This type of disjointed, distributed management can result in:
- Higher ongoing operational costs and reduced ongoing operational efficiency.
- Slower service deployments.
- Inconsistent management across Vblock Infrastructure Platforms.
- Inability to automatically verify configurations for accuracy and compliance.
- Inability to simultaneously and easily restore multiple elements to a compliant state.
- Less overall flexibility in supporting the IT needs of the business.
The Cisco Application Control Engine (Cisco ACE) provides a highly available and scalable datacenter solution from which the VMware View environment can benefit. The Cisco ACE is available as an appliance or as an integrated services module in the Cisco Catalyst 6500 platform. Using IP address policies (or other identifiers), a single View Connection FQDN can be configured to intelligently distribute requests for virtual desktops to the multiple VMware View environments and, if desired, to offload SSL encryption to ensure better utilization of View Connection Server resources. Cisco ACE features and benefits include the following:
- Device partitioning (up to 250 virtual Cisco ACE contexts).
- Load-balancing services (up to 16 Gbps of throughput capacity and 325,000 Layer-4 connections per second).
- Centralized, role-based management through the Application Networking Manager (ANM) GUI or CLI.
- SSL offload (up to 15,000 SSL sessions per second through licensing).
- Support for redundant configurations (intra-chassis, inter-chassis, and inter-context).

Cisco Application Networking Manager (ANM) software is part of the Cisco ACE product family. It is a critical component of any datacenter or cloud computing architecture that requires centralized configuration, operation, and monitoring of Cisco datacenter networking equipment and services. Cisco ANM provides this management capability for Cisco ACE devices. Cisco ANM 4.1 integrates with VMware vCenter, allowing access to Cisco ANM to add, delete, activate, and suspend traffic and to change load-balancing weights for servers benefiting from Cisco ACE load-balancing services. Additionally, users can access ANM's real-server monitoring graphs, greatly enhancing their knowledge of the true operations of their applications in real time.
To speed implementation, server administrators can now use Cisco ANM discovery tools to automate the importation and mapping of virtual machines to existing Cisco ACE real servers.
Cisco ACE optimizes overall application availability, security, and performance by delivering application switching and load balancing. Below is the configuration used for this reference architecture:
crypto csr-params ACE
  country US
  state GA
  common-name desktops.rtp.vce.com
access-list VDI line 8 extended permit tcp any any eq www
access-list VDI line 16 extended permit icmp any any
access-list VDI line 24 extended permit tcp any any eq https

probe icmp PING
  interval 3
  faildetect 1
  passdetect interval 5
  passdetect count 1

rserver host ProxyA-1
  ip address 10.1.56.49
  inservice
rserver host ProxyA-2
  ip address 10.1.56.54
  inservice
rserver host ProxyB-1
  ip address 10.1.68.49
  inservice
rserver host ProxyB-2
  ip address 10.1.68.54
  inservice
rserver redirect REDIRECT-TO-HTTPS
  webhost-redirection https://%h%p 301
  inservice

serverfarm host HAproxyFarm-A
  probe PING
  rserver ProxyA-1 80
    inservice
  rserver ProxyA-2 80
    inservice
serverfarm host HAproxyFarm-B
  probe PING
  rserver ProxyB-1 80
    inservice
  rserver ProxyB-2 80
    inservice
serverfarm redirect REDIRECT-HAproxyFARM
  rserver REDIRECT-TO-HTTPS
    inservice

parameter-map type ssl vDesktop_SSL_Parameter_Map
  authentication-failure ignore

sticky ip-netmask 255.255.255.255 address source HAproxyFARM-A-STICKY
  timeout 5
  replicate sticky
  serverfarm HAproxyFarm-A backup HAproxyFarm-B
sticky ip-netmask 255.255.255.255 address source HAproxyFARM-B-STICKY
  timeout 5
  replicate sticky
  serverfarm HAproxyFarm-B backup HAproxyFarm-A

ssl-proxy service Desktops-SSL
  key desktops.rtp.vce.com
  cert newdesktops.cer
ssl-proxy service SSL_SERVICE
ssl-proxy service proxy-1
  key key.pem
  cert cert.pem
ssl-proxy service vDesktop_SSL_Proxy
  key desktops.rtp.vce.com
  cert newdesktops.cer
  ssl advanced-options vDesktop_SSL_Parameter_Map

class-map match-all HTTP-VIP
  2 match virtual-address 10.1.54.16 tcp eq www
class-map match-all HTTPS-VIP
  2 match virtual-address 10.1.54.16 tcp eq https
class-map type http loadbalance match-any SiteA-Subnet
  2 match source-address 10.1.80.0 255.255.255.0
  3 match source-address 10.1.81.0 255.255.255.0
  4 match source-address 10.1.82.0 255.255.255.0
  5 match source-address 10.0.1.0 255.255.255.0
class-map type http loadbalance match-any SiteB-Subnet
  2 match source-address 10.1.83.0 255.255.255.0
  3 match source-address 10.1.84.0 255.255.255.0
  4 match source-address 10.1.85.0 255.255.255.0
  5 match source-address 10.1.55.0 255.255.255.0
  6 match source-address 10.223.252.128 255.255.255.128

policy-map type loadbalance first-match HAproxy-VIP-LB-POLICY
  class SiteA-Subnet
    sticky-serverfarm HAproxyFARM-A-STICKY
  class SiteB-Subnet
    sticky-serverfarm HAproxyFARM-B-STICKY
  class class-default
    sticky-serverfarm HAproxyFARM-A-STICKY
policy-map type loadbalance first-match HTTP-VIP-l7slb
  class class-default
    serverfarm REDIRECT-HAproxyFARM
policy-map type loadbalance first-match HTTPS-VIP-l7slb
  class SiteA-Subnet
    sticky-serverfarm HAproxyFARM-A-STICKY
  class SiteB-Subnet
    sticky-serverfarm HAproxyFARM-B-STICKY
  class class-default
    sticky-serverfarm HAproxyFARM-A-STICKY
policy-map type loadbalance first-match REDIRECT-POLICY
  class class-default
    serverfarm REDIRECT-HAproxyFARM
policy-map type loadbalance first-match VIP-VDI-l7slb
  class SiteA-Subnet
    sticky-serverfarm HAproxyFARM-A-STICKY
  class SiteB-Subnet
    sticky-serverfarm HAproxyFARM-B-STICKY
  class class-default
    sticky-serverfarm HAproxyFARM-A-STICKY

interface vlan 314
  ip address 10.1.54.14 255.255.255.0
  peer ip address 10.1.54.13 255.255.255.0
  access-group input VDI
  nat-pool 1 10.1.54.15 10.1.54.15 netmask 255.255.255.255 pat
  service-policy input VDI-LB
  no shutdown

ip route 0.0.0.0 0.0.0.0 10.1.54.1

snmp-server contact ACE
snmp-server location RTP
snmp-server community public group Network-Monitor
snmp-server host 10.0.1.45 traps version 2c public
snmp-server enable traps slb vserver
snmp-server enable traps slb real
snmp-server trap link ietf
HAProxy
HAProxy is a free, fast, and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited to web sites operating under very high loads while needing persistence or Layer 7 processing. Supporting tens of thousands of connections is clearly realistic with today's hardware. Its mode of operation makes its integration into existing architectures easy and low-risk, while still making it possible to avoid exposing fragile web servers to the Internet, as shown below:
HAProxy implements an event-driven, single-process model that supports a very high number of simultaneous connections at very high speeds. Multi-process or multi-threaded models rarely cope with thousands of connections because of memory limits, system scheduler limits, and pervasive lock contention. Event-driven models avoid these problems because implementing all tasks in user space allows finer resource and time management. The downside is that such programs generally don't scale well on multiprocessor systems, which is why they must be optimized to extract the most work from every CPU cycle. HAProxy can be downloaded from http://haproxy.1wt.eu/ and is known to run reliably on the following operating systems and platforms:
Linux 2.4 on x86, x86_64, Alpha, SPARC, MIPS, PARISC
Linux 2.6 on x86, x86_64, ARM (ixp425), PPC64
Solaris 8/9 on UltraSPARC 2 and 3
Solaris 10 on Opteron and UltraSPARC
FreeBSD 4.10 - 6.2 on x86
OpenBSD 3.1 to -current on i386, amd64, macppc, alpha, sparc64 and VAX (check the ports)
Once the Linux VM was implemented and HAProxy installed, the /etc/haproxy/haproxy.cfg file was modified to support basic HTTP (port 80) load balancing across the four (4) View Connection Servers in each site.
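The event-driven model described above can be illustrated with a short sketch (Python here, purely for illustration; this is a conceptual model, not HAProxy's actual code). A single process registers every connection with one selector and services whichever sockets are ready, with no thread or process per client:

```python
import selectors
import socket

# Minimal sketch of an event-driven, single-process server: one selector
# multiplexes all connections instead of one thread per client.
sel = selectors.DefaultSelector()

def serve_ready(timeout=1.0):
    """Echo back data on whichever registered sockets are readable."""
    for key, _ in sel.select(timeout=timeout):
        conn = key.fileobj
        data = conn.recv(4096)
        if data:
            conn.sendall(data)  # echo the payload back to this client
        else:
            sel.unregister(conn)
            conn.close()

# Simulate two concurrent clients with in-process socket pairs.
clients = []
for _ in range(2):
    client_end, server_end = socket.socketpair()
    sel.register(server_end, selectors.EVENT_READ)
    clients.append(client_end)

for i, c in enumerate(clients):
    c.sendall(b"request-%d" % i)

serve_ready()  # one pass through the selector serves both pending clients
replies = [c.recv(4096) for c in clients]
print(replies)  # [b'request-0', b'request-1']
```

The same loop scales to thousands of registered sockets in a single process, which is the property the paragraph above attributes to HAProxy.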
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    user haproxy
    group haproxy
    maxconn 4096
    daemon

defaults    # applications HTTP
    log global
    mode http
    balance roundrobin
    option dontlognull
    option redispatch
    contimeout 10000
    clitimeout 300000
    srvtimeout 300000
    maxconn 60000
    retries 3

listen http 10.1.68.49:80
    cookie SERVERID insert nocache indirect
    server vgangabvmvcs01 vgangabvmvcs01.rtp.vce.com:80 cookie sa1 check
    server vgangabvmvcs2 vgangabvmvcs2.rtp.vce.com:80 cookie sa2 check
    server vgangabvmvcs3 vgangabvmvcs3.rtp.vce.com:80 cookie sa3 check
    server vgangabvmvcs4 vgangabvmvcs4.rtp.vce.com:80 cookie sa4 check

listen stats
    bind 10.1.68.49:8888
    stats uri /
The last component of the configuration above (listen stats) enables a web-based GUI that illustrates the current status of the load balanced hosts as shown below:
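The same data shown in the stats GUI is also available in machine-readable form by appending ";csv" to the stats URI (e.g. http://10.1.68.49:8888/;csv), which is useful for scripted health checks. The sketch below parses an illustrative sample of that output; the server names match the haproxy.cfg above, but the counter values and the reduced column set are made up for the example:

```python
import csv
import io

# Illustrative (not real) sample of HAProxy stats CSV output, reduced to a
# few columns; a real export has many more counters per server.
sample = """# pxname,svname,scur,smax,status
http,vgangabvmvcs01,12,40,UP
http,vgangabvmvcs2,11,38,UP
http,vgangabvmvcs3,0,35,DOWN
http,vgangabvmvcs4,13,41,UP
"""

# Strip the leading "# " so the first line parses as a normal CSV header.
reader = csv.DictReader(io.StringIO(sample.lstrip("# ")))
rows = list(reader)
down = [r["svname"] for r in rows if r["status"] != "UP"]
print(down)  # ['vgangabvmvcs3']
```

A monitoring script built this way can alert when any View Connection Server drops out of the farm, rather than relying on someone watching the GUI.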
In both sites, pools with identical configuration options should be created using the same master image. Generally these should be floating pools of desktops that are refreshed or rolled back to their original state after each user logs off. This prevents the unnecessary buildup of temporary files and personal information on each desktop. When sizing the pools, take into account the maximum size of the pool during failover. The pool should have the capacity to handle (or expand to handle) 100% of the users in the event of an emergency. Provisioning extra desktops up front will allow for faster logon in an emergency. The unused desktops can be left powered off to conserve resources, but each step (including a power-on operation) that needs to be performed at failover adds time to the user's logon experience. To maintain an identical appearance, it is advisable to build and prep the master image, then allow (or force) it to replicate from the source to the non-source location before composing either location. Once the master image is in place at both sites, a typical compose or recompose operation can be performed. Note: This is not a fully automated process. The administrator should perform the same task on the pool at both sites and set the options as identically as possible. End-users could notice any differences in naming or configuration. If desktop availability is more critical than having the latest version of the image, administrators can simply change the Default Image for New Desktops on the pool and set the recompose to occur on user logoff. This will gradually replace the older images with the newer updated version as desktops become available for maintenance. If having a specific version of the desktop image is a higher priority and a downtime window is established, the entire recompose of a pool can be completed by forcing users to log off.
This will take less time to complete and will keep the pools in a more consistent state, but will prevent use of the pools during the operation. For environments with more than one pool or more than one master image, the process is the same on a pool-by-pool basis:
Designate a Source site for the master image, and do not modify that image on any other site.
Make sure that the master image virtual machine is being replicated effectively from the Source to the Non-Source site.
Any action that is performed on the pool at the Source location should also be performed at the Non-Source site. This includes pool creation, user entitlement, recompose operations, application entitlement (where used), and other general modification of pool settings.
Not all pools need to be protected. If you have pools that do not perform critical functions, choose a site for that pool and do not perform the replication or pool creation steps on the other site. If that site becomes unavailable, so will the desktops associated with it.
Note: If a pool is only going to exist in one site, users of that pool will need to be directed to that site by the top-level load balancers. Choosing some pools for protection and leaving other non-critical pools out of the process could substantially reduce overall hardware costs.
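Because the pool settings must be applied identically at both sites by hand, a periodic drift check can catch mismatches before end-users notice them. The sketch below compares two dictionaries of pool settings; the pool names, setting names, and values are hypothetical examples, not output from any View API:

```python
# Hypothetical per-site pool settings, keyed by pool name. In practice these
# would be gathered from each site's View configuration by an administrator.
site_a_pools = {
    "Clinical-Desktops": {"type": "floating", "refresh_on_logoff": True, "max_desktops": 200},
    "Radiology":         {"type": "floating", "refresh_on_logoff": True, "max_desktops": 75},
}
site_b_pools = {
    "Clinical-Desktops": {"type": "floating", "refresh_on_logoff": True, "max_desktops": 200},
    "Radiology":         {"type": "floating", "refresh_on_logoff": False, "max_desktops": 75},
}

def find_drift(a, b):
    """Return (pool, setting) pairs that differ between the two sites."""
    drift = []
    for pool in sorted(set(a) | set(b)):
        pa, pb = a.get(pool, {}), b.get(pool, {})
        for key in sorted(set(pa) | set(pb)):
            if pa.get(key) != pb.get(key):
                drift.append((pool, key))
    return drift

print(find_drift(site_a_pools, site_b_pools))  # [('Radiology', 'refresh_on_logoff')]
```

Running a comparison like this after every pool change turns "set the options identically" from a memory exercise into a verifiable step.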
Thin Client
Operating systems can be Windows Embedded Standard, Windows XPe, CE, Linux, or a proprietary distribution
Multi-monitor support
Support for VMware View
Secure lockdown, but endpoint security protection is required
In addition, the VMware View Client also runs on the Apple iPad tablet and traditional notebook computers for desktop mobility access. For full access to the VMware View HCL, visit: http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vdm
Solution Validation
VCE Vblock Platform Configuration Details
This section provides the Vblock Platform configuration details:
Hardware
Cisco Nexus 5010 and 5020 Switches (Site A used 5010s, and Site B used 5020s)
Unified Computing System with (per site):
Two (2) B200 M2 Series Blades with 3.33 GHz Intel Xeon 6-core CPU, 96GB RAM (using 12, 8GB 1067MHz DIMMs)
Two (2) B250 M2 Series Blades with 3.33 GHz Intel Xeon 6-core CPU, 192GB RAM (using 48, 4GB 1067MHz DIMMs)
One (1) B440 M1 Series Blade with 2.266 GHz Intel Xeon 8-core CPU, 128GB RAM (using 32, 4GB 1067MHz DIMMs)
EMC
One (1) Celerra NS960 Storage array (per site)
Software
Cisco
NX-OS 5.0(2)N2(1) on 5010
NX-OS 4.2(1)N2(1) on 5020
UCS Manager 1.3(1p)
EMC
Celerra DART 6.0.40-5
CLARiiON FLARE 4.30.00.5.512
PowerPath/VE 5.4 SP2 (build 298)
Ionix Unified Infrastructure Manager (UIM) 2.1.0.0.543
Unisphere Management Console 1.0.0.14
Virtual Storage Integrator (VSI) for VMware vSphere 4.0.1.67
VMware
vSphere ESXi 4.1 Patch 1 (320092)
vCenter Server 4.1 Update 1 (345043)
vCenter Update Manager 4.1
View 4.6 (366101)
View Agent (4.6.0-366101) with VMware SVGA 3D Driver (7.14.1.49)
Other
In addition to the above components, the solution requires an environment with Active Directory, CA, DFS with replication enabled, DNS, DHCP, and Microsoft Exchange 2010.
Software
Cisco
Application Networking Manager (ANM) 4.2(0)
ACE OS A4(2.1)
ACE/ANM vCenter Plug-in 1.0.1
IOS 12.2(55) SE1 on 3750s
IOS 12.2(33) SXI5 on 6506
NX-OS 4.2(5) on 9506
NX-OS 5.1(2) on 7010s
Other
HAProxy 1.4.10
Imprivata OneSign SSO 4.5-27 (virtual appliance)
Vital Images Vitrea Core 6.0 Update 02
VMware Reference Architecture Workload Simulator (RAWC) 1.2.0.0
Windows XPe SP3 on Wyse Terminals
Assumptions: UIM has been pre-configured on the Vblock platform according to the installation guide.
LAN Configuration
VLANs The following figure shows the list of VLANs configured in each Vblock Platform and usable by UIM. The dVLAN## VLANs are used for the View desktops themselves.
Figure 14: Site B VLANs
Network Diagram
The pool was used for storing the View linked clones and replicas of the user desktops. The details of the storage pool are shown below.
RAID Groups named RAID Group 1 (RG1) and RAID Group 2 (RG2) were created. RG1 uses four (4) FC 15K RPM 450GB drives in a 3+1 RAID 5 configuration, and RG2 uses eight (8) SATA 7.2K RPM 2TB drives in a 6+2 RAID 6 configuration.
LUNs from RG1 had FAST Cache enabled and were used to store the 15GB boot LUNs for the ESXi hosts and several 250GB infrastructure LUNs for general use by the environment. LUNs from RG2 did not have FAST Cache enabled and were used to store the virtual desktop VM swap files.
VMware Datastores
Below is a picture outlining the details from a vCenter perspective for Site A. Site B was configured in the same manner.
Once the service offering is created, it is activated and placed in the UIM Service Manager for use in provisioning the resources. From the Service Manager, service offerings are provisioned (resources allocated and locked down) and activated (OS installed) as illustrated below.
Figure 33: Site B Resources
Datastores
Eight (8) 499GB datastores for storing the View Linked Clones and Replicas, labeled Desktop_LUN_XX.
One (1) 99GB datastore labeled SiteA_Gold, used specifically to store golden images of virtual desktops, which are replicated asynchronously to Site B. A similar datastore is configured in Site B (labeled SiteB_Gold) and is replicated asynchronously to Site A.
Three (3) 1TB datastores for storing the VM swap files for each virtual desktop.
Three (3) 249GB datastores for storing the required infrastructure VMs.
The following figure shows the View Connection Servers and related configuration information.
Each View Connection Server configuration had to be modified to support the use of Cisco ACE SSL encryption offloading as shown below.
In addition, the event database was configured to log all the events occurring. The following figure shows the configuration details.
Scripted and/or manual procedures can be used to re-instantiate replicated golden desktop images, should the need arise.
Imprivata OneSign
OneSign appliances can be implemented as a physical 1U server or virtual appliance. For this RA, the OneSign appliances were deployed as virtual appliances using an OVF provided by Imprivata. To ensure local (per site) and remote (across site) availability, two (2) OneSign appliances were implemented in each site.
After the OVFs were deployed, a wizard guided us through the implementation, which included pairing the appliances into local and remote clusters, as well as configuring a replication process to keep all appliances in sync with one another. Once the configuration tasks were completed, we connected to the web-based GUI to license the product (per user), configure Proximity Card settings, integrate with Active Directory, and create policies that handle the One-Touch login behavior. Below is an example of a Computer Policy that automatically launches the View Client and connects it to a View Connection Server at https://10.1.54.16. (This is actually a virtual IP (VIP) on the Cisco ACE appliance; either an FQDN or an IP address will work. We used both in our testing.)
Additionally, User Policies can be configured specific to authentication, password self-service, offline authentication, and RADIUS integration. Below is the User Policy we used for this RA, which enables password and proximity card authentication:
The primary objective of the test harnesses was to validate whether an end-user would successfully obtain a desktop after a complete site outage. The results of these tests are considered subjective in nature, as they were witnessed rather than instrumented. The first harness required a mechanism to generate load on two Vblock platforms simultaneously; the VMware Reference Architecture Workload Simulator (RAWC) was chosen as that tool. The second harness required the use of a proximity card and manual/human intervention. A proximity card (or prox card) is a generic name for a contactless integrated circuit device used for security access or payment systems.
Test Harness #1: Using RAWC to generate load during site failure
The RAWC workload runs on a Windows 7 or XP guest operating system and is executed on each desktop virtual machine on one or more ESXi hosts. The RAWC workload has a set of functions that perform operations on common desktop applications, including Microsoft Office, Adobe Reader, Windows Media Player, Java, and 7-Zip. The applications are called randomly and perform operations that mimic those of a typical desktop user, including opening, saving, closing, minimizing and maximizing windows, viewing an HTML page, inserting text, inserting random words and numbers, conducting a slideshow, viewing a video, sending and receiving email, and compressing files.
The RAWC workload uses a configuration file that is created via the RAWC GUI and writes application open/close times and any errors to log files in a shared network folder. Various test variables can be configured via the RAWC GUI, including a start delay for creating boot storms, density (delay between application operations), application speed, number of emails created and sent, and typing speed. For more information on RAWC, see Workload Considerations for Virtual Desktop Reference Architectures by VMware. Below is a screenshot of the RAWC workload configuration used for this RA. This workload randomly loaded MS Word, Excel, Internet Explorer, PowerPoint, and Adobe Acrobat for three (3) iterations.
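The randomized application loop described above can be sketched as follows. The application and operation names are illustrative placeholders; this is a conceptual model of the workload, not RAWC's actual implementation, which is driven by its GUI-generated configuration file:

```python
import random

# Illustrative application and operation sets, standing in for the desktop
# applications RAWC exercises (Word, Excel, PowerPoint, IE, Acrobat).
APPS = ["word", "excel", "powerpoint", "internet_explorer", "adobe_reader"]
OPERATIONS = ["open", "insert_text", "save", "minimize", "maximize", "close"]

def run_iteration(rng):
    """Pick a random application and a short random sequence of operations."""
    app = rng.choice(APPS)
    return [(app, rng.choice(OPERATIONS)) for _ in range(rng.randint(3, 6))]

rng = random.Random(42)  # fixed seed so the example is repeatable
log = [step for _ in range(3) for step in run_iteration(rng)]  # three iterations
print(len(log))
```

In the real tool, each step would drive the actual application and the open/close timings would be written to the shared log folder for later analysis.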
This harness employed two (2) VMware View desktop pools per site. One pool was for active desktops and the other was for stand-by desktops. All of the linked clones were created from the same parent virtual machine. This configuration resulted in seventy-five (75) virtual desktops per datastore, well within VMware's best practice recommendation of 128 vDesktops per datastore. Common infrastructure components, such as Active Directory, DFS, DNS, DHCP, and VMware View Connection servers, as well as the Imprivata SSO appliances, did not share the same compute or storage resources as the virtual desktops. A vSphere cluster (outside of the Vblock platforms) consisting of two (2) ESXi hosts was used to host the RAWC workload generation tool, the Exchange 2010 server, and the Vital Images servers. Each desktop infrastructure service was implemented as a virtual machine running Windows 2008 R2.
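The density figure quoted above can be checked with quick arithmetic. The total desktop count below is an assumption implied by the 75-per-datastore figure and the eight Desktop_LUN datastores described earlier; it is not stated explicitly in the text:

```python
import math

DATASTORES = 8            # the Desktop_LUN_XX datastores from the earlier section
TOTAL_DESKTOPS = 600      # assumed: implied by 75 desktops per datastore
BEST_PRACTICE_MAX = 128   # VMware guideline cited above

density = math.ceil(TOTAL_DESKTOPS / DATASTORES)
print(density, density <= BEST_PRACTICE_MAX)  # 75 True
```

Headroom like this matters during failover, when stand-by pools power on and per-datastore density temporarily rises.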
Internet Explorer 8.0
VMware Desktop RAWC Workload Simulator 1.2.0
VMware Tools 8.3.2.2658
VMware View Agent 4.6.0.366101
* http://www.vmware.com/resources/techresources/10157
Active/Active Configuration
Multiple Automated/Floating (AF) virtual desktop pools were created in Site A for Site A users as their primary desktop and in Site B for Site B users as their primary desktop, thereby creating an Active/Active configuration. Additionally, multiple standby AF virtual desktop pools were created in each site to deliver AlwaysOn desktops.
Validation Results
The most critical metric for this virtual desktop validation is the amount of time it took to obtain a new desktop after a simulated outage occurred. In this envelope testing, the system was optimized such that obtaining a new desktop after site failure occurred within 30 seconds. The majority of this delay (~20 seconds) was spent waiting for the View Client to give up trying to connect to the previous View Connection server. Outside the scope of this effort is an extremely important metric for virtual desktop validation: the end-user application response time. Careful design considerations should be given to ensure the end-user response time for any application activity is less than three (3) seconds. Response time metrics were collected during the RAWC harness testing to illustrate load on the environment during failover. These results are displayed below.
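The timing profile described above, most of the delay being the client waiting out its connection attempt to the dead site before trying elsewhere, can be sketched as a retry-with-timeout loop. The timings are scaled down from seconds to tens of milliseconds so the example runs quickly, and connect_fn is a hypothetical stand-in for a real connection attempt:

```python
import time

def connect_with_failover(targets, connect_fn, timeout=0.02):
    """Try each target in order; return the first that answers within timeout."""
    for target in targets:
        start = time.monotonic()
        while time.monotonic() - start < timeout:
            if connect_fn(target):   # stand-in for a real connection attempt
                return target
            time.sleep(0.005)        # brief pause between retries
    return None

dead, alive = "site-b-vip", "site-a-vip"
# The dead site consumes the full timeout window before the client moves on,
# mirroring the ~20 s the View Client spent giving up on the failed site.
chosen = connect_with_failover([dead, alive], lambda t: t == alive)
print(chosen)  # site-a-vip
```

Shortening that give-up window (or having the load balancer fail the VIP over transparently) is the main lever for reducing the 30-second recovery time.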
The RAWC workload generation has started and the Cisco ACE is processing the requests for desktops by distributing the load across the two (2) HA Proxies within each site, based on the source IP of the RAWC launcher.
Figure 50: Cisco ACE Real Time Statistics for server farm
Mid-way through the test, the screenshots from within View Manager are captured to illustrate the number of remote/connected sessions.
Although application response time metrics were not critical to the success of this validation, the results were captured to illustrate load on the system.
The northbound Ethernet uplinks were disabled on Site B to simulate an outage. Almost immediately, the RAWC session launchers lost connection to their remote desktop sessions.
Because the remote desktop sessions for Site B had disconnected, we used RAWC to restart them. Cisco ACE accepted the View Server connection requests, determined that Site B was down, and automatically redirected the connections to Site A. The desktop sessions were then restarted.
All 200 remote desktop sessions originally connected to Site B were then reestablished on Site A.
Once again, application response time metrics are captured to illustrate load on the system, but this time for the Site B workload running on Site A resources.
Once the user is authenticated (in this case, via a Microsoft AD account), Imprivata SSO policies start the VMware View client and pass the credentials along, enabling a seamless login experience to the user's virtual desktop in Site A.
The next series of screenshots illustrates access to critical applications and files. First is Vitrea Core's VIS and a three-dimensional knee scan, accessed via a web browser and a manufacturer plug-in. The Vitrea back-end application was housed at a separate site and was not subjected to our simulated outage.
Next, we accessed email via the MS Outlook client. The Exchange 2010 instance serving the email is located at a separate site, along with Vitrea Core's VIS.
Files within a DFS-based share were the last items to be accessed. The file shares were located within each site, and DFSR (replication) was configured to ensure copies of the files were distributed between the sites. GPO redirection was used to map each user's My Documents (or Documents) folder to the DFS share.
The northbound Ethernet uplinks were disabled on Site A to simulate an outage. Almost immediately (3-10 seconds), the View client disconnected and the Imprivata login window appeared (as shown above). The user then re-authenticated, either manually or with a proximity card, and Cisco ACE policies directed them to a standby desktop in Site B.
The next series of screenshots illustrates access to the applications tested initially. This time, however, they were accessed from the user's desktop in Site B.
Additional Considerations
In any virtual desktop deployment, datacenter services such as backup, recovery, security, and business continuity need to be considered. These considerations may impose additional restrictions on scalability and performance. VCE provides in-depth discussions on solutions that address these use cases.
Conclusion
As computing devices replace paper charts and physician prescription pads, these endpoints (mobile and fixed) become safety-critical IT systems that must deliver the highest possible levels of reliability and availability to ensure patient safety. If a caregiver has to make a fast medical decision but can't access the patient's records because of a service outage or computer problem, the situation can escalate into a Severity-1 event, and the consequences can be quite serious. Unfortunately, the old device-centric approach to endpoint management makes it extremely difficult, if not impossible, to protect every desktop, laptop, hospital computer cart, and mobile device in use. To overcome this challenge, healthcare providers need a new approach to point-of-care delivery: one that will enable them to modernize their IT infrastructures so they can improve patient outcomes and get the most from the millions of dollars they are investing in technology.
This reference architecture for AlwaysOn Point of Care, a collaboration of the VCE company, Imprivata, and Vital Images, detailed a new reference design for delivering clinical desktops and patient care applications as non-stop services. In a failover situation, this new reference design provides the business continuity required for mission-critical desktop and application access within seconds.
AlwaysOn Point of Care addresses:
Conversion to EHR, which is causing a rapid increase in the distributed locations where point-of-care desktops MUST be available.
Tier-1 critical desktops, providing fast recovery and application continuity during disasters.
Point-of-care access that is more fluid than the traditional PC experience.
Session mobility, a required feature tied to patient care and clinical productivity.
The ideal opportunity to rapidly roll out a fully managed desktop platform.
An effective way to implement a managed printing service.
The end-user experiences:
Desktops that are always on and that enable fast logon.
A desktop that follows them in the event of failover.
Access from any endpoint device, from anywhere.
A familiar interface to sustain the same application workflow.
In summary, AlwaysOn Point of Care offers a solution that is accessible as a non-stop service and available to clinicians wherever and whenever they need patient information.
Acknowledgements
Cisco, Imprivata, VitalImages, Wyse, EMC RTP Labs
References
VMware View Reference Architecture: http://www.vmware.com/resources/techresources/1084
VMware Workload Considerations for Virtual Desktop Reference Architectures: http://www.vmware.com/files/pdf/VMware-WP-WorkloadConsiderations-WP-EN.pdf
VMware View: http://www.vmware.com/products/view/
VMware vSphere 4: http://www.vmware.com/products/vsphere/
Cisco UCS: http://www.cisco.com/go/unifiedcomputing
Cisco Data Center Solutions: http://www.cisco.com/go/datacenter
Cisco Validated Designs: http://www.cisco.com/go/designzone
EMC Celerra Family: http://www.emc.com/products/family/celerra-family.htm
EMC PowerPath/VE: http://www.emc.com/products/detail/software/powerpath-ve.htm
HAProxy: http://haproxy.1wt.eu/
Imprivata OneSign: http://www.imprivata.com/onesign_platform
Wyse Z90: http://www.wyse.com/solutions/vmware/index.asp
About VCE
VCE, the Virtual Computing Environment Company formed by Cisco and EMC with investments from VMware and Intel, accelerates the adoption of converged infrastructure and cloud-based computing models that dramatically reduce the cost of IT while improving time to market for our customers. VCE, through the Vblock platform, delivers the industry's first completely integrated IT offering with end-to-end vendor accountability. VCE prepackaged solutions are available through an extensive partner network, and cover horizontal applications, vertical industry offerings, and application development environments, allowing customers to focus on business innovation instead of integrating, validating, and managing IT infrastructure. For more information, go to http://www.vce.com.
VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright 2011 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. Item No: VMW-RAG-REFARCHPARTNER-USLET-WEB