
Adobe LiveCycle ES2 Technical Guide
Jayan Kandathil, Adobe Technical Marketing

Adobe LiveCycle ES in VMware Environments


Considerations and benefits of hardware virtualization with LiveCycle

Introduction
Companies looking to use enterprise applications have many decisions to make when considering the deployment and ongoing support of those applications. One consideration is whether to deploy into a hardware-virtualized environment. Hardware virtualization involves hiding the specifics of a particular piece of server hardware from an operating system. It also involves the isolation of multiple instances of hardware-virtualized operating systems from one another.
Table of contents
- Introduction
- Virtualization Technology
- VMware
- LiveCycle ES on VI3/vSphere 4
- Deployment Architecture for Tests
- Tools Used for Testing
- Provisioning a VM for Adobe LiveCycle ES
- Performance Best Practices When Building VMs
- Conclusions
- References

The major benefits of hardware virtualization are:
- The ability to run multiple instances of disparate operating systems on the same host. Old legacy applications that only run on Windows 3.1 or Windows 95 can still be maintained alongside instances of Windows 7.
- Reduced server sprawl and the resulting energy and cost savings.
- Faster provisioning of server environments, from weeks of effort to a small number of hours.
- Quicker failover and disaster recovery.
- Higher server utilization.

Software and application testing groups were once the primary users of virtualized environments for enterprise applications, but this is no longer the case. Virtualization technology has become mature and reliable, and enterprise IT groups now have enough confidence in it to deploy applications into virtualized environments for production use.

This technical paper discusses the various aspects of deploying Adobe LiveCycle ES into a VMware ESX virtualized environment. It focuses on VMware ESX 3.5 (part of VMware Virtual Infrastructure 3) and ESX 4.0 (part of vSphere). Other leading virtualization vendors are Microsoft, IBM (IBM POWER architecture), and Sun Microsystems (Sun/Fujitsu SPARC architecture). Tests conducted by Adobe at VMware's ISV Validation Lab show that Adobe LiveCycle ES can successfully leverage VI3 and vSphere 4.0 technologies such as VMotion, DRS, and Fault Tolerance. Long-lived orchestrations that involve user tasks were not tested because of time constraints; developing load test scripts for long-lived orchestrations is a major effort. LiveCycle ES2 was not tested because it was released later. Determining the performance differences between virtualized and physical machines was not a goal, and was not tested.


Virtualization Technology
Isolation (containment) of software execution environments has been a feature on mainframes for decades. VMware brought the technology to the x86 world circa 1999 and popularized it to such an extent that every major player in the industry now has a virtualization offering. In the x86 world, VMware continues to dominate the market.

Background
IT departments all over the world are focused today on server consolidation. These efforts are mainly driven by CIOs concerned about server sprawl and the resulting expenses for electric power and data center space. Server sprawl wastes power because of the high heat dissipation from server components, especially CPUs, and the high-capacity air conditioners required to cool data centers. Server sprawl occurred because system administrators needed to physically contain applications so that one did not adversely affect another; the easiest way to accomplish this was to deploy each application on its own server. Also, since they could not confidently and reliably predict future growth, administrators tended to over-provision their server hardware, resulting in significant under-utilization [1]. Early attempts at controlling this server sprawl resulted in the widespread use of hardware server blades, which are essentially very thin servers housed inside a bigger chassis and mounted on a server rack. Later, containment was implemented in software, a technology we today call hardware virtualization.

Terminology
Although Sun Microsystems continues to use the term containers for isolated software execution environments, the popular term today is virtual machines, or VMs, especially in the x86 world. IBM uses the term logical partitions (LPARs) for its AIX operating system. Implementations of hardware virtualization can be grouped into two buckets: those that provide operating system (OS) isolation (VMs) and those that provide only application isolation (virtual environments, or VEs).

Virtual Machine (VM)
A virtual machine is a representation in software of a physical machine. A virtual machine presents to the operating system and the applications residing in it an abstraction of the CPU, memory, storage, and other resources that would normally be present in a physical machine. Virtual machines require the installation of a complete operating system in each of the VMs. In the case of VMware, you can run different operating systems on the same host.

Figure 1- Virtual Machine Model

x86¹
In the x86 world, VMware's ESX Server and Microsoft's Hyper-V are examples of this virtual machine model. VMware's implementation allows it to host VMs running Windows, Linux, Novell NetWare, and Solaris x86. Microsoft's implementation allows it to host Windows as well as Novell SUSE Linux Enterprise 10 VMs.

Virtual Environment (VE)
Also known as OS virtualization, virtual environments do not require the installation of separate operating systems on each of the VEs. As a result, only one kind of OS can be run on a given host. You cannot mix and match different operating systems on the same host as you can, for example, in a VMware ESX environment. However, Sun has technology that lets you run Solaris 8 and 9 containers on Solaris 10.

Figure 2- Virtual Environment Model

x86
Parallels Virtuozzo is an example. RingCube's MojoPac is a VE implementation for Windows desktop operating systems such as Windows XP and Windows Vista.

Paravirtualization
This is a technology invented by Xen that allows certain operating systems, such as Red Hat Enterprise Linux 5.x, to become aware that they have been virtualized. Using special paravirtualized device drivers [2], this allows near-native performance of VMs. None of the Windows operating systems support this technology. Typically, paravirtualization implies modifying the operating system source code. VMware supports paravirtualization through its work on the paravirt_ops and Virtual Machine Interface (VMI) APIs. VMware also recommends that users install a set of tools in the Windows and Linux operating systems that enhance the performance of these systems in virtual machines running on ESX. These include enhancements to the networking stack and the graphics device drivers.

¹ The CPU architecture designed and implemented by Intel. AMD also implements the x86 architecture. In contrast to RISC, this architecture is CISC (Complex Instruction Set Computing).

Third-Party Certification of Isolation


One measure of the amount of isolation provided by a particular virtualization technology is its Common Criteria Evaluation Assurance Level (EAL) certification. EAL rankings range from 1 to 7; the higher the EAL rank, the higher the confidence level in the isolation among containers [3]. According to VMware, ESX 3 is EAL4+ certified, and ESX 3.5 is currently undergoing EAL4+ certification. IBM's System z LPAR is certified at EAL5.

CPU Vendor Support for Virtualization


Both Intel and AMD now provide processor-level support for virtualization. Intel calls their technology Intel-VT (Virtualization Technology). AMD calls theirs AMD-V (internal project name Pacifica). Newer Intel Xeon and AMD Opteron processors have this support.

Virtualization and Cloud Computing


Hardware virtualization is a key enabling technology for cloud computing. Amazon's Elastic Compute Cloud (EC2) is built on the open-source Xen Hypervisor. In almost all cases, vendors that offer cloud computing services host the applications on a virtualized infrastructure.

VMware
Virtualization technologies tested and supported by Adobe for LiveCycle ES include VMware ESX, IBM LPAR, and Solaris Zones. VMware created the virtualization market in the x86 world. Their product offerings can be grouped under two buckets: hosted and bare-metal. Bare-metal virtualization offers better performance than hosted virtualization.

Hosted Virtualization
Hosted virtualization requires a host OS which hosts other guest OSes that are contained in virtual machines.

Figure 3- Hosted Virtualization Model


Workstation
This is VMware's first desktop offering. It is highly popular with quality assurance and testing groups, who face the daunting task of creating and maintaining tens or hundreds of test environments simultaneously. Some of the more advanced virtualization features, such as the ability to record and replay any actions happening in a virtual machine, appear first in the VMware Workstation product.

VMware Server
This is VMware's original server virtualization offering. It requires a full host operating system. Today, VMware Server is offered as a free download and is an ideal starting point for users to experience the benefits of virtualization. VMware's expectation is that customers who would like to implement large-scale virtualization solutions can easily migrate from VMware Server to VMware Infrastructure.

Bare-metal Virtualization
A bare-metal virtualization platform is a barebones operating system-like kernel called the hypervisor, usually with a management console that runs on top of the kernel. There is no host OS. In VMware's case, the hypervisor is composed of a virtual machine monitor component and a kernel component, where the latter provides all of the hardware abstraction. These components are not based on traditional operating system technology, but were designed for efficiently managing virtual machines.

Figure 4- Bare-metal Virtualization Model

VMware ESX
VMware ESX is part of VMware Infrastructure 3 (VI3). The latest version is part of vSphere (v4.0). It does not require a host operating system and is itself a barebones operating system called a bare-metal hypervisor. By getting rid of the extra host OS layer, overhead is reduced; performance is significantly better than VMware Server and more suitable for large-scale virtualization deployments.

VMware ESXi
VMware ESXi has all the functionality of VMware ESX, but it removes the management console operating system (COS) that is available in VMware ESX, for improved security and manageability. Due mainly to this, its memory footprint is only 32 MB. As a result, the entire hypervisor can fit on a single ROM chip. It is available as an installable package or embedded on a ROM chip; server vendors are expected to ship this chip on the motherboards of the servers they sell. The ESXi installable is now free to download and use. This change in VMware policy and licensing was announced in late July 2008.


Virtual Infrastructure 3 (VI3) and vSphere 4.0
While VMware ESX virtualizes the hardware resources on a single server, VI3 and vSphere 4.0 are capable of pooling all of the virtualized hardware resources on all of the servers in an entire data center. Instead of provisioning servers, IT departments can provision VMs from their pool of CPUs, storage, and network cards. In addition, VI3 and vSphere 4.0 contain management infrastructure that provides additional enterprise features to virtualized environments, such as live migration of active running VMs from one host to another (VMotion), VMware Distributed Resource Scheduler (VMware DRS), and VMware High Availability (VMware HA). vSphere 4.0 provides an additional feature called Fault Tolerance (VMware FT). These features are based on a separately licensed product called VirtualCenter. VirtualCenter requires its own dedicated server and a back-end database (Oracle or Microsoft SQL Server 2005) for storage of its management data.

Adobe LiveCycle ES on VI3/vSphere 4


Adobe tested LiveCycle ES 8.2.1 on VMware VI3 at VMware's Independent Software Vendor (ISV) Lab at VMware headquarters in Palo Alto, California, in July 2008. The primary goal was to determine how LiveCycle behaves in a VI3 environment with regard to features such as VMotion, DRS, and HA. LiveCycle ES was deployed to a WebSphere 6.1 cluster on Windows Server 2003, with Microsoft SQL Server 2005 as the database back end and WebSphere Proxy Server as the load balancer. During October 2009, we repeated the tests on vSphere 4.0, using a JBoss 4.2 cluster with MySQL as the database back end and Apache Web Server as the load balancer. Determining the performance delta between virtualized and physical machines was not a goal, and was not tested.

VMotion
VMotion is a VI3/vSphere 4 feature that allows the live migration of a running VM from one ESX host to another without the users experiencing disruption. This is a memory-to-memory transfer between ESX hosts and is usually completed within 5-10 seconds. VMotion can be scheduled to run automatically or executed manually. This technology has certain prerequisites. Mainly, the processors of the source and target machines of a VMotion-based move must be in the same family (Intel or AMD) and of compatible generations (Xeon or Opteron revision). For example, VMs running on an ESX host with Intel Xeon CPUs cannot be migrated to an ESX host with AMD Opteron CPUs, which are designed with the Non-Uniform Memory Access (NUMA) architecture, and vice versa [7]. Also, VMs with four provisioned vCPUs cannot be migrated to an ESX host with only two pCPUs.

Distributed Resource Scheduler (VMware DRS)


This is a VI3/vSphere 4 feature that is based on VMotion and on the notion of resource pools, which are assigned portions of the processing power and memory of the collection of servers. VMware DRS allows administrators to balance the load across all of the ESX servers in a cluster of such servers. DRS relieves ESX hosts that are experiencing heavy loads by moving some of their VMs to other, less loaded ESX hosts in the same VI3/vSphere 4 cluster. This can be done in a manual, partial, or fully automated fashion.

Even if DRS is not enabled for a cluster of ESX servers, the administrator can carry out some rebalancing manually using VMotion. By running the esxtop performance tool on different ESX servers and making a decision based on that data, a VI3/vSphere 4 administrator can tell whether a particular ESX host is under severe load and manually move, using VMotion, some VMs from that host to a less loaded host. This technique, however, places the burden of performance management on the administrator.

Manual DRS
In manual DRS mode, VirtualCenter measures the performance of the ESX servers under its control and suggests migration recommendations for virtual machines. The administrator can then choose to follow the recommendations and allow DRS to rebalance the load, or to prohibit the rebalancing.

Partial DRS
VirtualCenter informs the system administrator that one or more of the ESX hosts are under heavy load and suggests migration recommendations for virtual machines. It does not initiate VM migration to other, less loaded hosts, except for the initial placement of a VM onto a DRS-enabled ESX cluster at power-on.

Fully Automated DRS
VI3/vSphere 4 automatically initiates the migration of VMs from ESX hosts under heavy load to other, less loaded ESX hosts. VI3/vSphere 4 checks whether the ESX hosts are under load every five minutes. This default behavior can be changed by editing the file vpxd.cfg on the VirtualCenter server [8].

High Availability (HA)


If an ESX host experiences a hardware failure, the VMs that were running on it are restarted on another ESX host in the HA cluster. This operation is fully automated for VMs preconfigured for HA.

Fault Tolerance (FT)


Fault Tolerance is a new feature available only with vSphere 4.0. When enabled for a VM, FT creates an additional, secondary instance of the VM on another ESX host that runs in virtual lockstep with the primary instance. If the primary VM fails, the secondary VM on the other ESX host takes over almost instantaneously. This operation is fully automated for VMs preconfigured for FT. FT is based on VMware Record/Replay technology. This is a much more robust solution than HA because no VM restart, with its associated delay, is involved.

Deployment Architecture for Tests


The deployment architecture that was used for testing Adobe LiveCycle ES 8.2.1 with the WebSphere cluster at VMware's ISV Validation Lab is shown in Figure 5. Everything ran in VMs except the load generator. LiveCycle ES 8.2.1 was deployed on a two-node WebSphere ND 6.1.0.17 (64-bit) cluster with a WebSphere ND Proxy Server load balancer. The database used was Microsoft SQL Server 2005 SP2. LiveCycle's Global Storage Directory (GSD) was hosted by a VM. All storage was on a storage area network (SAN) device.

Figure 5- Adobe LiveCycle ES Deployment Architecture

Tools Used for Testing


Benchmark Orchestration for LiveCycle ES
Adobe Technical Marketing (eTech) has developed a benchmark orchestration of several LiveCycle ES services chained (or orchestrated) together as a single service. It is primarily an index of the performance of the processors on the system under test (SUT). This means that the number, clock speed, and other capabilities of the processors have a significant impact on the reported transaction throughput numbers.

Figure 6- Screenshot of the eTech Benchmark Orchestration

As the screenshot of the orchestration shows, the following actions are performed serially:
- Read an XML file from the server file system. This XML file contains form data.
- Set the contents of this XML file to a process variable of type XML.
- Pass this data to the Forms ES component of LiveCycle along with a form template (.XDP) from the LiveCycle Repository (database). Keep the resulting PDF form in a process variable of type document.
- Read another PDF file from the server file system.
- Using Assembler ES, combine the previously created PDF form and the PDF into a single PDF.
- Apply a Rights Management ES policy to the combined PDF.
- Certify this PDF with a digital signature, using a document-signing credential kept in the LiveCycle Trust Store.
- Apply Reader Extensions rights to the PDF.
- Remove the Rights Management policy that was previously applied.
- Remove the Reader Extensions rights that were previously applied.
- Convert the PDF to the PDF/A archival format and keep the result in a process variable of type document.

Servlet
A servlet was used to invoke the orchestration synchronously. Once the orchestration finished executing, the resulting output document was retrieved from the process variable by this servlet and streamed back to the client (a sketch of this invocation pattern appears at the end of this section).

HP LoadRunner
LoadRunner from HP is a load-testing tool. Using its scripting language, a simple script was developed and run to drive the servlet that invokes the orchestration. This was the load generator tool used for tests conducted at the VMware ISV Validation Lab.

Borland SilkPerformer
For tests conducted at Adobe's Technical Marketing Lab, SilkPerformer from Borland was used as the load-generating tool. Using its Benchmark Description Language (BDL) scripting language, a simple script was developed and run to drive the servlet that invokes the orchestration.

Other Test Collateral
The XML data, the XDP form template, and the PDF document used for the tests are attached to this document as PDF attachments.
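The servlet's source is not included in this guide. Purely as an illustration, the minimal sketch below shows how such a servlet might invoke a LiveCycle orchestration synchronously through the LiveCycle ES Java invocation API (ServiceClientFactory and InvocationRequest) and stream the output document back to the caller. The process name, output parameter name, endpoint, server type, and credentials are hypothetical placeholders, not the actual values used in these tests.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.adobe.idp.Document;
import com.adobe.idp.dsc.InvocationRequest;
import com.adobe.idp.dsc.InvocationResponse;
import com.adobe.idp.dsc.clientsdk.ServiceClientFactory;
import com.adobe.idp.dsc.clientsdk.ServiceClientFactoryProperties;

public class BenchmarkServlet extends HttpServlet {

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try {
            // Connection settings for the LiveCycle server (placeholder endpoint and credentials)
            Properties props = new Properties();
            props.setProperty(ServiceClientFactoryProperties.DSC_DEFAULT_EJB_ENDPOINT, "jnp://localhost:1099");
            props.setProperty(ServiceClientFactoryProperties.DSC_TRANSPORT_PROTOCOL,
                    ServiceClientFactoryProperties.DSC_EJB_PROTOCOL);
            props.setProperty(ServiceClientFactoryProperties.DSC_SERVER_TYPE, "JBoss");
            props.setProperty(ServiceClientFactoryProperties.DSC_CREDENTIAL_USERNAME, "administrator");
            props.setProperty(ServiceClientFactoryProperties.DSC_CREDENTIAL_PASSWORD, "password");
            ServiceClientFactory factory = ServiceClientFactory.createInstance(props);

            // Invoke the orchestration synchronously (the "true" flag) and wait for it to finish.
            // The process name, operation name, and input parameter map are illustrative.
            Map<String, Object> params = new HashMap<String, Object>();
            InvocationRequest request = factory.createInvocationRequest(
                    "eTech/BenchmarkOrchestration", "invoke", params, true);
            InvocationResponse response = factory.getServiceClient().invoke(request);

            // Retrieve the output document from its process variable and stream it back to the client
            Document outDoc = (Document) response.getOutputParameter("outDoc"); // hypothetical variable name
            resp.setContentType("application/pdf");
            OutputStream out = resp.getOutputStream();
            outDoc.copyToOutputStream(out);
            out.flush();
        } catch (Exception e) {
            throw new ServletException(e);
        }
    }
}
```

In the actual tests, a LoadRunner or SilkPerformer script simply issued HTTP requests against this servlet's URL under increasing concurrency.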

Tests Performed
Several tests were conducted at VMware's ISV Validation Lab, as well as in Adobe's Technical Marketing Lab, to assess the performance of LiveCycle ES on VI3/vSphere 4. No antivirus software was run on any of the VMs involved in the tests; antivirus software tends to reduce performance by up to 30% on both physical and virtual machines.

Baseline Test
A baseline test was conducted to establish a performance baseline with which to compare the results of the subsequent tests. Throughput achieved with the eTech Benchmark Orchestration for LiveCycle was 172 transactions per hour, where one transaction is defined as a single invocation of the eTech Benchmark Orchestration by one user. Please note that the servers used for this baseline test were all virtualized.

VMotion Test
In tests conducted at VMware's ISV Validation Lab, a VM hosting Adobe LiveCycle ES 8.2.1 under load was successfully migrated from one ESX host to another without any transaction failures. Compared to the throughput achieved with the baseline test (172 transactions per hour), this test, which was conducted during the VMotion, achieved a throughput of 165 transactions per hour (roughly 4% lower). The VMotion operation itself took about 1 minute.

Distributed Resource Scheduler (VMware DRS) Tests
Manual DRS Test
This scenario was not tested because it is practically the same as partial DRS.

Partial DRS Test
In tests conducted at VMware's ISV Validation Lab, a VM hosting Adobe LiveCycle ES 8.2.1 under load was successfully migrated from one ESX host to another without any transaction failures. Compared to the throughput achieved with the baseline test (172 transactions per hour), this test achieved a throughput of 161 transactions per hour (roughly 6% lower).


Fully Automated DRS Test

Figure 7- VMware DRS Configuration Screen

DRS was configured to be aggressive (as opposed to conservative) and fully automated. There are five degrees of aggressiveness that determine how aggressively DRS tries to balance resources across ESX hosts in a VI3/vSphere 4 cluster [8]. Adobe LiveCycle VMs running on an ESX host under high load were automatically migrated (VMotioned) to another ESX host that was under less load, without any transaction failures. Compared to the throughput achieved with the baseline test (172 transactions per hour), this test achieved a throughput of 161 transactions per hour (roughly 6% lower).

High Availability (HA) Test


We experienced problems configuring WebSphere Proxy Server to quickly recognize LiveCycle VMs that had failed, so this test was abandoned for the WebSphere cluster. However, the equivalent test for the JBoss cluster, with Apache Web Server as the load balancer, succeeded.

Fault Tolerance Test


In tests that were conducted at VMware's ISV Validation Lab with a LiveCycle JBoss cluster, a VM hosting Adobe LiveCycle ES 8.2.1 under load was successfully migrated from one ESX host to another without any transaction failures. In vSphere 4.0, Fault Tolerance can be configured only for VMs that have a single vCPU. Since the recommended number of vCPUs for LiveCycle ES is two, production use of FT for LiveCycle will have to wait until vSphere supports FT for VMs with more than one vCPU.

Figure 8- VMware DRS Configuration Screen


Provisioning a VM for Adobe LiveCycle ES


If you are considering deploying LiveCycle ES on VMware ESX Server, your VMware Infrastructure system administrator will want to know the basic provisioning details such as memory, storage, and the number of CPUs. Based on testing we did with ESX Server 3.5 on an HP ProLiant DL385 G2, here are the basic configuration details for a Windows guest OS VM with LiveCycle ES 8.2.1: two vCPUs (where the physical CPUs on the ESX server have a clock speed of 3.00 GHz or higher), 4 GB of RAM minimum, and 30 GB of storage minimum.

Once built, this VM can be easily cloned. Given that LiveCycle installs can be quite challenging, this clonability is a very attractive feature. When cloning a VM, the clone will have to have its NetBIOS machine name as well as its IP address changed. Oracle WebLogic, Red Hat JBoss, and Sun MySQL seem to handle cloning automatically. However, IBM WebSphere requires extra steps, which are documented at the IBM website, and Oracle 10g requires that the Net Configuration Assistant be rerun.

There are other workarounds available for IBM WebSphere. The following steps represent one of them:
- Install WebSphere ND.
- Do not create any profiles during the installation.
- Apply the latest maintenance pack.
- Back up the VM.
- Clone the VM.

To add a cloned VM to an existing LiveCycle cluster, the following steps are required:
- Create an application server profile (AppSrv01).
- Federate the node to the cell using addNode.sh (or .bat).
- Create an additional cluster member that is hosted on the new node.
- Configure it the same way as the other cluster members hosting LiveCycle, including the JVM arguments.

Performance Best Practices When Building VMs


The best practices provided below are based on VMware recommendations [7].
- If possible, choose server hardware that has several high-speed processors of the latest revision with built-in support for virtualization. This hardware support includes Nested Page Tables (NPT) technology; AMD refers to NPT as Rapid Virtualization Indexing (RVI), and Intel's term for it is Extended Page Tables (EPT). Please note that ESX Server NUMA optimizations are enabled only on AMD systems with at least four CPU cores. AMD's virtualization support was more advanced than Intel's until Intel's Nehalem family of CPUs.
- If possible, run ESX Server 3.5 Update 1. This version of ESX Server supports NPT.
- If possible, run VMs with 64-bit OSes with large memory pages (2 MB) enabled. ESX Server uses NPT only for 64-bit operating systems. Using large pages is reported to be more memory efficient.
- Choose server hardware with multiple gigabit network interface cards (NICs) and team them for higher throughput and failover.
- Disconnect or disable all COM ports, LPT ports, USB controllers, floppy drives, and CD/DVD drives from the LiveCycle VM.
- Provision at least two vCPUs for each LiveCycle VM so that it leverages VMware ESX Server's vSMP feature. Also, make sure that Windows boots the Symmetric Multi-Processing (SMP) Hardware Abstraction Layer (HAL)/kernel.


- When building LiveCycle VMs, use independent, persistent disks for best performance.
- If possible, connect all VMs in the same LiveCycle configuration (DB servers, web servers, application servers) to the same vSwitch. If configured this way, network traffic between them will be transmitted in memory rather than over the wire, which is slower.
- Disable screensavers and menu animations in LiveCycle VMs.
- Do not enable paravirtualization for Windows operating systems, because they currently do not support the Virtual Machine Interface (VMI).

Licensing Considerations
Since Microsoft licenses Microsoft Office by installed instance, each VM hosting LiveCycle PDF Generator ES must be licensed for Microsoft Office. Since Adobe Acrobat is required for PDF Generator, Acrobat also has to be licensed for each VM.

Ideal LiveCycle Use Cases for Virtualization


There are some LiveCycle ES use cases where virtualized deployment makes more sense than traditional non-virtualized deployment. One of them is PDF Generator's ConvertPDF action. Another is stateless, request-response use of individual LiveCycle ES services such as Forms ES or Output ES.

LiveCycle PDF Generator ES Native Document Conversion


In LiveCycle ES Update 1 (8.2.1), the ConvertPDF action of the LiveCycle PDF Generator ES service is single-threaded when converting Microsoft Office and Open Office native documents to PDF. The operating system could schedule some associated system threads on a second CPU, but anything beyond two CPU cores will not be used. In LiveCycle ES2, this conversion is multi-threaded except in the case of Microsoft Excel; Word and PowerPoint conversions are multi-threaded. In many data centers, server consolidation efforts aimed at minimizing the number of servers are under way. The servers in these environments tend to be large, with four, eight, or more CPUs. These environments are ideal candidates for deployment into a VI3/vSphere 4 infrastructure: deploy multiple VMs of LiveCycle PDF Generator, each self-contained with its own application server (JBoss) and database (MySQL). Failover in case of software or hardware failure will be handled by VI3/vSphere 4, along with a properly configured load balancer such as F5 BIG-IP.

LiveCycle PDF Generator ES Native Document Conversion in UNIX Shops


LiveCycle PDF Generator ES outputs the highest fidelity PDFs from native documents when Microsoft Office is used in the conversion. Since Microsoft Office is supported only in Windows, those IT shops that want to run the rest of LiveCycle on Linux are forced to provision Windows boxes. With virtualization, it is possible to run PDF Generator ES on Windows and the rest of LiveCycle on Linux on the same hardware but in separate VMs. Please note that LiveCycle has components that are OS-specific. Separate LiveCycle installations might be required to implement this scenario.

Large Corporate SOA Architectures


When large multi-national corporations decide to centralize IT services under a service-oriented architecture (SOA), the IT organization will face a requirement where applications belonging to multiple departments under disparate regulatory frameworks have to run on the same hardware infrastructure but still be isolated from each other. A solution like VMware VI3/vSphere 4 is ideal to address this requirement.

Running Different LiveCycle Versions Simultaneously


If there is a requirement to run the 7.2 version of LiveCycle Forms ES as well as the 8.2 version of PDF Generator ES, it can be accomplished by running them on the same server but in separate VMs.

LiveCycle Clusters under Server Consolidation Restrictions


Multi-node LiveCycle clusters can be deployed in completely virtualized environments. This is an advantage in those organizations that have major server consolidation efforts under way.


Disaster Recovery
It is faster to fire up archived LiveCycle VMs than to build a server from scratch in the event of a disaster. Instead of installing the OS, the J2EE application server, and LiveCycle, the key virtual machines containing the LiveCycle installation can be quickly cloned and deployed to a secondary failover site. If a disaster occurs at the primary site, the failover site's instances of the critical VMs can be started automatically by VMware's Site Recovery Manager once the primary site goes down. This capability also allows for easy testing of the disaster recovery plan.

Large Farm of Non-Clustered but Load-Balanced LiveCycle VMs


When only a few LiveCycle services are used in a simple request-response model with no requirement for keeping state (sessions), deploying LiveCycle in several non-clustered, self-contained VMs provides better utilization of servers with multiple dual-core or quad-core CPUs. Devices such as the F5 BIG-IP can load-balance the traffic that is sent to these VMs. Deploying multiple VMs across multiple ESX hosts in a VI3/vSphere 4 cluster also provides protection against hardware failure.

Figure 9- LiveCycle on a Cluster of Multiple ESX Hosts


Conclusions
There are several compelling reasons for deploying Adobe LiveCycle ES in a VI3/vSphere 4 environment:
- Applications involving Process Management and Workspace can be deployed on clusters that are virtualized. However, it should be noted that long-lived orchestrations that include user tasks were not tested.
- Although Adobe's tests were conducted with IBM WebSphere ND and JBoss AS as the J2EE application server platforms, we expect the results to be essentially similar for Oracle WebLogic AS.
- LiveCycle ES works well with VMware VMotion, DRS, HA, and FT.
- LiveCycle PDF Generator ES is a very good candidate for VI3/vSphere 4 deployment if Microsoft Office or Open Office native documents are being converted to PDFs.
- If calls to LiveCycle are essentially stateless, a farm of non-clustered but load-balanced LiveCycle VMs can be deployed on VI3/vSphere 4, taking advantage of the High Availability offered by VI3/vSphere 4. Non-clustered LiveCycle VMs are not aware of each other.


References
[1] Foxwell, H., and Rozenfeld, I. "Slicing and Dicing Servers: A Guide to Virtualization and Containment Technologies." Sun BluePrints Online, October 2005.
[2] "Xen: Enterprise Grade Open Source Virtualization - Inside Xen 3.2." Xen whitepaper, 2006.
[3] Phelps, J.R. "How to Decide on a Linux Server Platform." Gartner Research, September 28, 2007.
[4] Day, B. "Virtualization Trends on IBM's System p: Unraveling the Benefits in IBM's PowerVM." Forrester Research, February 5, 2008.
[5] Hochstetler, S., Castro, A., Griffiths, N., Ramireddy, N., and Tate, K. "Getting Started with PowerVM Lx86." IBM Redpaper, 2nd Edition, May 2008.
[6] Cherry, M. "Hyper-V Released." Directions on Microsoft, July 21, 2008.
[7] "Performance Best Practices and Benchmarking Guidelines." VMware whitepaper, 2008.
[8] "DRS Performance and Best Practices." VMware whitepaper, 2008.
[9] Gammage, B. "Gartner Interviews Ian Pratt, Virtualization Visionary." Gartner Research, August 11, 2008.
[10] "DRS Performance and Best Practices." VMware whitepaper, 2008.
[11] Wolf, C. "Let's Get Virtual: A Look at Today's Server Virtualization Architectures." Burton Group Data Center Strategies In-Depth Overview, Version 1.0, May 14, 2007.
[12] Irving, N., Jenner, M., and Kortesniemi, A. "Partitioning Implementations for IBM eServer p5 Servers." IBM Redbook, 3rd Edition, February 2005.

For more information and additional product details: http://www.adobe.com/devnet/livecycle/

We welcome your comments. Please send any feedback on this technical guide to LCES-Feedback@adobe.com. Adobe, the Adobe logo, Flex, LiveCycle, PostScript, and Reader are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States and/or other countries. All other trademarks are the property of their respective owners.

Adobe Systems Incorporated 345 Park Avenue San Jose, CA 95110-2704 USA www.adobe.com

© 2009 Adobe Systems Incorporated. All rights reserved. Printed in the USA. 11/09
