
Sizing Best Practices Guidelines

December 21, 2011

Table of contents
Acknowledgements
Abstract
    What is a Sizing?
General Sizing Best Practices Guidelines
Hardware Configuration Considerations
    General Hardware Configuration Considerations
    Power Sizing / Configuration Considerations
    Cloud on Power
    System x & Blade Center Sizing / Configuration Design Considerations
    System z Sizing / Configuration Design Considerations
Software Sizing Considerations
    General Software Sizing Considerations
    Sizing Guidelines for Connections Solutions
    Cognos 8 Business Intelligence
    Composite Solution Sizing
    DB2 Solutions Sizing Guidelines
    Sizing Guidelines for Domino Solutions
    Enterprise Content Management (ECM) Solutions Sizing Guidelines
    Sizing Guidelines for IBM Smart Analytics
    Sizing Guidelines for ISV (Independent Software Vendor) Solutions
    Sizing Guidelines for Lotus Expeditor Solutions
    Sizing Guidelines for Lotus Sametime
    Sizing Guidelines for Quickr Solutions
    Sizing Guidelines for Rational Solutions
    Sizing Guidelines for Rational Change Management
    Sizing Guidelines for Rational Jazz Products
    Sizing Guidelines for Enterprise Modernization Products
    Sizing Guidelines for Tivoli Storage Manager Solutions
    WebSphere Portal Guidelines
    WebSphere Application Server Solutions Sizing Guidelines
    WebSphere MQ Solutions Sizing Guidelines
Additional Information
References
Feedback



Change History

Version  Date        Description
0.1      2008.08.18  Created Template
0.2      2008.09.12  Update and re-format document
0.3      2008.10.15  Updated with additional content
0.4      2008.11.19  Updated with additional content
0.5      2008.11.21  Format updates
0.6      2008.12.5   Logo update; added a feedback section
0.7      2008.12.22  System z updates
0.8      2010.26.03  Updated Tivoli, ISV and Additional Information sections
0.9      2010.12.04  Updated Additional Information section
0.10     2010.28.04  Added new sections on Rational Sizing topics and refreshed ISV, Power and System x sections
0.11     2010.11.02  Added additional Rational content
0.12     2010.11.22  Added content on Cognos BI, IBM Smart Analytics, and updated the DB2, ECM, and Power Sizing sections
0.13     2010.12.13  Added Composite Solution Sizing
0.14     2010.12.22  Updated content for SAP, Oracle and Cross Industry ISVs
0.15     2011.12.21  Updated content for Power i, System z, Connections Solutions, Lotus Expeditor and Lotus Sametime; added content on Cloud on Power


Acknowledgements
Consultative and collaborative efforts were contributed by the following members of the Global Techline Sizing Council (GTSC) and the Global Techline Center of Excellence (CoE):

2010 Project Team
Gail Titus, Sizing Best Practices Project Team Lead, GTSC, ISV Sizing Team
Mike Adair, GTSC, Information Management Team
Dexter Charles, GTSC, Power Team
Joe Ciervo, GTSC, WebSphere Team
Anita Devadason, GTSC, ISV Sizing Team
Poornima Seetharamaiah, GTSC, Rational Team

Management Sponsor
Ilan Josset, Client Technical Manager, IBM Collaboration Solutions / ECM, Global Techline CoE

The following extended team members provided insight and/or content to the project:
Viola Berg, System z Capacity Planning & Sizing Team, GTSC, Global Techline CoE
Luanne Carlton, ISV Sizing Specialist, Global Techline CoE
Regina Cason, System z Capacity Planning & Sizing Team, GTSC, Global Techline CoE
Steve Clark, Software IT Specialist, Rational, Global Techline CoE
Charles DeLone, Software IT Specialist, Lotus, GTSC, Global Techline CoE
Terry Dimka, Client Technical Specialist, Power Systems, Global Techline CoE
Deann Dye, Client Technical Specialist, Power Systems, Global Techline CoE
Mike Garner, Software IT Specialist, Lotus - Enterprise Access and Client Technologies, Global Techline CoE
Dariusz Gorczyca, Software IT Specialist, IBM Collaboration Solutions, Global Techline CoE
Lewis Grizzle, Software IT Specialist - WebSphere - On Demand Sizing Specialist, GTSC, Global Techline CoE
Phil Hardy, ISV Sizing Specialist, Global Techline CoE
Kathleen Hibbert, Client Technical Specialist, Power Systems, Global Techline CoE
Edward Huang, ECM Technical Sales Specialist, Global Techline CoE
Robert Jump, IT Specialist, IBM/Oracle International Competency Center
Azam Khan, ISV Sizing Specialist, Global Techline CoE
Rich LaFrance, Client Technical Specialist, Power Systems, Global Techline CoE
Larry Mial, I/T Specialist, System x, GTSC, Global Techline CoE
Linda Miller, Software IT Specialist - WebSphere Portal, IBM WPLC, Global Techline CoE
Michele Montagnino, I/T Specialist, System x, GTSC, Global Techline CoE
Rolf Mueller, Client Technical Specialist, System z, Global Techline CoE
Edward Ng, Software I/T Specialist - WebSphere - BPM, Global Techline CoE
Diane Nissen, ISV Sizing Specialist, Global Techline CoE
Bill Parrill, Software IT Specialist, Lotus, Global Techline CoE
Patricia Raymond, ISV Sizing Specialist, Global Techline CoE
YoungSil Rim, ISV Sizing Specialist, Global Techline CoE
Kevin Wheaton, Software IT Specialist, Tivoli - Storage, Global Techline CoE
Gracie Williams, Software IT Specialist, Lotus, Global Techline CoE
John Williams, ISV Sizing Specialist, Global Techline CoE
Janet Wong, Technical Sales Specialist, System Storage - Disk/Studies, Global Techline CoE


Abstract
This document identifies best practices for sizing analyses. The target audience is experienced sizers. The Techline Sizing Council has created prerequisite collateral in the form of "Sizing Roadmaps" for various hardware and software platforms. Field sellers may find the content to be a valuable reference, since the scope includes presales hardware and software sizing disciplines. Capacity planning topics are excluded at this time. The objective of this document is to furnish sizing practitioners with the critical factors that contribute to a comprehensive sizing analysis. The document's contents are a compilation of best practices based on successful outcomes, experience, and research that have proven reliable across hardware and software platforms. It includes general and specific best practices that can be employed to improve the thoroughness and quality of sizing recommendations and to help sizers sharpen their consultative skills.

What is a Sizing?
A sizing estimate is an approximation of the hardware or software resources required to support an application set that is either currently implemented or new. It is a pre-sales effort aimed at addressing customer requirements. The level of effort and scope of sizing can range from small to very significant. Sizing does not constitute a performance guarantee.


General Sizing Best Practices Guidelines


- Understand sizing guidelines, methodologies, and tools.
- Engage a Hardware Platform Specialist early to discuss the specific solution requirements.
- Validate whether the request is a Sizing, a Capacity Planning, or a Performance Analysis exercise; understand the tools/methodology used in each scenario and how it applies to the customer environment.
- Always obtain a Sizing Questionnaire if possible (it validates customer input data).
- Determine whether any application versions (workload software versions) will change (be updated to a newer release) as part of the sizing.
- If applicable, collect performance data on any existing workload to be sized on new hardware or in addition to new requirements.
- Use IBM sizing tools where applicable, or ISV sizing methodologies/tools, making sure you have the most current version of the sizing tool.
- Understand how micro-partitioning affects the sizing and explain it in the output results.
- Verify/validate the O/S release levels used for the sizing and those the customer will run in the new environment.
- Provide headroom and multiple sizings, including a growth projection, to position the solution for future growth or a workload beyond what the customer anticipated and provided (see the sketch after this list).
- Provide, if applicable, H/A and/or D/R solutions with the sizing estimate.
- Evaluate security, systems management, and ISV software costs, all of which affect the overall solution cost.
- Detailed sizings are encouraged over high-level sizing estimates.
- Tool accuracy is expected to be roughly +/- 30%.
- Scrutinize the sizing input and look for values that appear to be out of range.
- Understand the scope of the sizing effort.
- Consider the impact of parallelism on batch processing.
- Set the correct expectations! Sizings are based on information from a point in time; sizing must be an iterative process.
- Sizing cannot account for some aspects, such as customizations, interfaces, and ad hoc reporting.
- Combine sizing tool output with experiential information from working with clients, proofs of concept, and benchmarks.
- Document any assumptions and provide them as part of the output deliverable.
- A deep understanding of the sizing patterns in the tool is important in order to articulate a comparison with the client's application.
- Topics to consider concerning storage: storage release dependencies, code-set storage dependencies, storage growth, RAID overhead, storage capacity utilization targets, compression, Volume Copy/FlashCopy, non-production landscapes, backup architecture, and storage sizing based on performance/IOPS.
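The headroom, growth, and tool-accuracy points above can be combined into simple arithmetic. The sketch below is a minimal illustration, not part of the official methodology; the 65% target utilization, 20% annual growth, and three-year horizon are hypothetical placeholder values to be replaced with the customer's figures.

```python
# Illustrative sizing arithmetic: peak workload -> recommended capacity range.
# All input values are hypothetical; substitute the customer's actual figures.

measured_peak = 1800          # peak workload in the sizing tool's capacity units (e.g. rPerf, CPW)
target_utilization = 0.65     # planned server utilization at peak (hypothetical target)
annual_growth = 0.20          # projected yearly workload growth from the questionnaire
planning_horizon_years = 3    # how far out the solution must remain adequate
tool_accuracy = 0.30          # sizing tools are typically accurate to about +/- 30%

# Capacity needed today so the server runs at the target utilization at peak.
base_capacity = measured_peak / target_utilization

# Project the workload forward and size for the end of the planning horizon.
grown_capacity = base_capacity * (1 + annual_growth) ** planning_horizon_years

# Express the +/- 30% tool accuracy as a range rather than a single number.
low, high = grown_capacity * (1 - tool_accuracy), grown_capacity * (1 + tool_accuracy)

print(f"Base capacity at target utilization: {base_capacity:.0f}")
print(f"Capacity after {planning_horizon_years} years of growth: {grown_capacity:.0f}")
print(f"Accuracy band to discuss with the customer: {low:.0f} - {high:.0f}")
```

Presenting the result as a band rather than a single point also helps set the expectation that a sizing is not a performance guarantee.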


Hardware Configuration Considerations


General Hardware Configuration Considerations
- Understand the hardware topology requirements (i.e. single, two, or three tiers). Determine the best method to achieve the desired topology (i.e. standalone or partitioned environment).
- Understand what deployment standard is currently used within the customer environment (i.e. standalone, dedicated/virtual partitions, micro-partitions).
- Obtain current hardware specifications, if possible, when sizing a migration or consolidation environment.
- Understand the desired scaling method, vertical or horizontal. Select a server model that accomplishes current and future scaling requirements for both processors and memory.
- Did the customer request specific models? Determine why the given model is the best solution. Can a better alternative be proposed?
- Redundancy: reduce single points of failure and increase reliability, availability, and serviceability using redundant cores, adapters, and dual power supplies on servers/rack PDUs.
- Look for the most cost-effective alternative to provide to the customer. For example, a 4-core performance requirement has several options with differing degrees of high availability: 2 x 4-core, 2 x 2-core, or 3 x 2-core. Each option has its own price and performance implications.
- Understand hardware-specific attributes, such as disk drive minimum requirements, memory per DPAR/logical partition, and N-way processor requirements.
- Server target utilizations must be clearly documented.
- When sizing, avoid using withdrawn hardware.

Release Level Compatibility
- Does the recommended sizing solution model support the operating system release level the customer can/will run?
- Does the recommended sizing solution model support the application version (IBM or OEM vendor application)?
- If using an H/A or D/R solution, are both systems scheduled to be on the same OS and application release level, and if not, are the two releases compatible?
- If upgrading either the application or OS release as part of this sizing, were any performance differences taken into consideration in the sizing?
- Are there companion products (i.e. Sametime to Domino) in the customer's solution that have release compatibility requirements?

Memory Considerations
- Have the minimum memory requirements for this system been met by the sized solution?
- Have memory requirements for the OS and any other application workloads been taken into consideration?
- If using partitions, does each partition have sufficient memory, and will memory sharing be employed between partitions?
- Have you rounded up to the next logical memory increment in your recommendation, and taken memory growth into account when choosing memory card sizes?

Ethernet LAN Connections
- Do you need to make a recommendation on the number of Ethernet ports to accommodate this sizing?

- Domino solutions can typically support a given number of email users per Ethernet port, depending on the network card speed (see the sketch below):
  - 100 Mbps might typically support 1,000-2,000 Domino users per Ethernet port.
  - 1 Gbps might typically support 3,000-5,000 Domino users per Ethernet port.
- Does the solution need a dedicated Ethernet port for H/A or D/R replication?
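As a rough illustration of the users-per-port rule of thumb above, the sketch below turns the quoted ranges into a port count. The choice of the low end of each range, and the 7,500-user example, are assumptions for illustration only.

```python
import math

# Rule-of-thumb users-per-port values from the ranges quoted above
# (low end of each range chosen to be conservative -- an assumption).
USERS_PER_PORT = {"100Mbps": 1000, "1Gbps": 3000}

def ethernet_ports_needed(domino_users: int, nic_speed: str,
                          dedicated_replication_port: bool = False) -> int:
    """Estimate Ethernet ports for a Domino mail workload using the rule of thumb."""
    ports = math.ceil(domino_users / USERS_PER_PORT[nic_speed])
    if dedicated_replication_port:      # H/A or D/R replication traffic on its own port
        ports += 1
    return ports

# Example: 7,500 mail users on 1 Gbps NICs with a dedicated replication port.
print(ethernet_ports_needed(7500, "1Gbps", dedicated_replication_port=True))  # -> 4
```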


Power Sizing / Configuration Considerations


General Sizing / Configuration Considerations
Multi N-way implementation considerations
- If this solution uses separate partitions, consider whether shared or dedicated processors are best suited to the sizing. Depending on the application, some jobs/applications may need dedicated processors.
- Can the application being sized take full advantage of multiple processors, or are individual segments of the application better suited to running in their own partitions (more common with multi-tiered solutions such as ERP)?
- Have application licensing considerations been taken into account with N-way solutions? Some applications are priced based on the number of processors. If using partitions, does the application allow for sub-capacity pricing? Processor Value Units (PVU): some processors have a higher PVU price than others.
- If micro-partitioning is implemented, consider the PowerVM Edition that is the best fit for the environment when the server is deployed. PowerVM Editions offer virtualization technologies to increase server utilization.
- When sizing micro-partitions, note that the absolute minimum size for a micro-partition is 0.1 processing units of a physical processor, but best practice is a Desired amount of 0.25 for any partition. With a Desired processor designation of 0.25 and a minimum of 0.1, a partition is guaranteed at least 0.25 of a processor, and if partitions are set up for sharing, any unused cycles are made available to other partitions. The number of micro-partitions you can create for a system therefore depends mostly on the number of activated processors in the system.
- With a sizing utilizing micro-partitioning, consider the following: splitting a partition into a fractional partition reduces the time slices available to a given job. For example, if a full processor allows 5 time slices, a 0.4 processor partition allows only 2 of the 5 time slices to be dedicated to (or used by) that partition, potentially causing response time issues. Care must be taken to evaluate this situation when recommending a fractional partition solution. (See the bullet above and the sketch at the end of this section.)
- On some Power System models, take into consideration slow vs. fast memory cards and how that might impact the memory recommendation.
- CoD = Capacity on Demand: the ability to turn extra processors on the system on and off. Activations can be either permanent or temporary (On/Off). Can the solution benefit from On Demand technology?

Model Upgrade Capability
- Does the recommended solution involve a system model upgrade from an existing iSeries/System i or pSeries/System p? If so, engage Global Techline hardware specialists to ensure all prerequisites and migration plans fit into the sizing. Also verify that the sized operating system and application release levels are supported in model upgrade scenarios.
- Is a Solution Assurance review required on this upgrade? If so, engage Global Techline hardware specialists.

- Does the recommended solution provide ample growth for the workload with available model upgrades?
- Is CoD upgrade capability with minimal disruption available for growth? (It may be important to the customer to have upgrade capability with little impact to a 24x7 operation.)
- Is temporary CoD available for peak periods beyond what is sized here?
- Can the workload benefit from a shared-processor pool environment? A shared-processor pool allows micro-partitions to donate unused cycles back during non-peak times; these resources can be utilized by other partitions on the system.

Power Systems hardware/software specifics with regard to sizing:

H/A and D/R Solutions
- If this solution uses either H/A or D/R, is it an Active/Active or Active/Passive scenario?
- Is workload balancing being used? If so, did you size for both normal mode (workload split) and failover mode (entire workload on one system)? Can processor (and potentially memory) CoD be used at failover, and has it been sized for? Has RAC/clustering or replication overhead been accounted for?
- Active/Passive scenario: Has the entire workload, with cluster/replication overhead, been sized for the primary system? Has the cluster backup system been sized for both passive mode and failover mode? Can processor (and potentially memory) CoD be implemented at failover time? Has a CBU model been given consideration and sized?
- H/A = High Availability: providing a live backup system for failover (live cutover). D/R = Disaster Recovery: providing a backup offsite system for system rebuild on a temporary system.

710/720/730/740 Memory Performance Considerations
- More memory DIMMs equates to more memory bandwidth. Maximum bandwidth is achieved with all DIMM slots filled and the maximum quantity of memory riser cards. Note that for typical commercial applications on servers with typical usage levels, the incremental performance gained by the maximum number of DIMMs vs. a subset (say 50% DIMM slot utilization) is very modest.
- Note that going from 2 DIMMs to 4 DIMMs doubles the memory bandwidth. Generally recommend at least 4 DIMMs (2 features or 1 quad) for each riser unless the system is expected to be very lightly loaded.
- With multiple risers, it is usually best to balance memory: balance total GB per riser before balancing the number of DIMMs per riser. Note that for typical commercial applications on servers with typical usage levels, differences of 2x between the largest GB riser and the smallest GB riser have very small performance impact.
- With two-socket servers (730/740) it is generally best to spread memory between sockets. Suggest having at least two risers, so that at least one riser is associated with each socket and each riser has a quad of memory DIMMs, unless the system is expected to be very lightly loaded.
- Observation: adding memory requires the system to be powered down.
- Active Memory Expansion can effectively expand memory capacity for AIX 6.1 and later in many application environments.
- Active Memory Sharing provides the ability to move memory from one partition to another.
- Active Memory Sharing is a best fit when partitions within a system have different busy times. It is supported with PowerVM Enterprise Edition and AIX, IBM i, and Linux partitions.
- Active Memory Expansion expands memory beyond physical limits, effectively up to 100% more memory. It uses compression/decompression to expand the true physical memory available to client workloads. Actual expansion results depend on how compressible the application data is and on the available CPU resource. Active Memory Expansion is supported only on POWER7 hardware with AIX 6.1 and later. Note that Active Memory Expansion requires an HMC, and HMCs are not used on POWER7 blades.
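The micro-partition sizing guidance earlier in this section (0.1 processing units as the platform minimum, 0.25 as the best-practice Desired amount, total entitlement bounded by the activated processors) can be turned into a quick validation. The sketch below is a minimal example; the partition names and entitlements are hypothetical.

```python
# Minimal sanity check for a micro-partition plan, based on the guidance above:
#  - no partition below 0.10 processing units (platform minimum)
#  - flag partitions with a desired entitlement under 0.25 (best-practice floor)
#  - total desired entitlement should fit within the activated processors

MIN_ENTITLEMENT = 0.10
BEST_PRACTICE_DESIRED = 0.25

def check_micropartition_plan(desired_entitlements, activated_processors):
    issues = []
    for name, ent in desired_entitlements.items():
        if ent < MIN_ENTITLEMENT:
            issues.append(f"{name}: {ent} is below the 0.1 platform minimum")
        elif ent < BEST_PRACTICE_DESIRED:
            issues.append(f"{name}: desired {ent} is below the 0.25 best-practice floor")
    total = sum(desired_entitlements.values())
    if total > activated_processors:
        issues.append(f"total desired entitlement {total:.2f} exceeds "
                      f"{activated_processors} activated processors")
    return issues

# Hypothetical plan: four partitions on a 2-core activation.
plan = {"web": 0.25, "app": 0.75, "db": 1.0, "mgmt": 0.2}
for issue in check_micropartition_plan(plan, activated_processors=2):
    print(issue)
```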

Power i Sizing / Configuration Considerations


- Engage a POWER i Specialist to discuss the specific solution requirements.
- If a 1-way Power i system is sized, does it provide enough CPW/MCU as well as enough CPU to provide adequate response time? Some low-end 520 models are actually 520 1-way models with internal LPARing to reduce CPW, which also reduces CPU cycles. This may provide adequate CPW but cause response time issues.
- Is the 1-way server large enough to provide adequate response time and multi-threaded job throughput? CPW alone is not the only sizing consideration.
- For a Power i sizing utilizing LPAR partitioning, consider the following:
  - Whether a 1-way or N-way solution is sized, any LPAR partition with less than a whole processor may not have adequate CPW or MCU. Ensure that each provides enough CPW/MCU as well as enough CPU for adequate response time.
  - Is the fractional processor LPAR (micro-partition) large enough to provide adequate response time and multi-threaded job throughput? Micro-partitioning (also known as fractional partitioning) reduces the CPW (or MCU for Domino) in proportion to the size of the partition processor. This may provide adequate CPW but cause response time issues.
  - Some applications (like Domino) on Power i have a recommended minimum partition size regardless of the CPW or MCU required to support the workload. For example, for Domino solutions it is recommended not to have less than 0.5 of a processor in any partition.
  - WebSphere and web application workloads require a 3800 CPW minimum to run, which means an LPAR on a 1-way (a fractional processor partition) may not provide adequate CPU capacity.
- Disk Arm Considerations (see the sketch after this list):
  - Does the sizing recommendation take into account minimum disk arms for performance (disk arm requirements may result in more capacity than required)?
  - Have you recommended the correct disk size (i.e. 139GB vs. 284GB drives) that satisfies both disk arm minimums and capacity?
  - If using an external disk/SAN solution, has Disk Magic been used to assist in sizing the disk solution? Have you also taken into account the speed of the disk controller? WLE has a disk attachment parameter which can be set and which affects disk arm recommendations.
  - If planning on an external disk/SAN solution, have you taken into account the number of Fibre Channel adapters required to satisfy both performance and LUN addressability limitations?
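A minimal sketch of the disk-arm logic above: take the larger of the capacity-driven and performance-driven arm counts. The per-arm I/O rate used here is a hypothetical placeholder; the real figure should come from the drive/controller specifications or a Disk Magic study.

```python
import math

def disk_arms_required(usable_capacity_gb: float, drive_size_gb: float,
                       peak_disk_ops_per_sec: float, ops_per_arm: float) -> int:
    """Return the arm count that satisfies both capacity and performance."""
    arms_for_capacity = math.ceil(usable_capacity_gb / drive_size_gb)
    arms_for_performance = math.ceil(peak_disk_ops_per_sec / ops_per_arm)
    return max(arms_for_capacity, arms_for_performance)

# Hypothetical example: 2 TB usable on 139 GB drives, 3,000 peak disk ops/sec,
# assuming 150 ops/sec per arm (placeholder -- use Disk Magic or drive specs).
print(disk_arms_required(2000, 139, 3000, 150))  # capacity needs 15 arms, performance needs 20
```

When the performance-driven count wins, the configuration ends up with more raw capacity than requested, which is exactly the situation the first Disk Arm bullet warns about.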

Power p Sizing / Configuration Considerations


- Engage a POWER p Specialist to discuss the specific solution requirements.

- Memory bandwidth requirements: system performance that is dependent on memory bandwidth can be improved by installing two smaller memory features per processor card as opposed to one large feature per processor card.
- If sizing an upgrade to an Enterprise Power System, a processor card with a given memory feature can be mixed in the same CEC enclosure with a processor card containing other memory features. For all processors and all system configurations, if memory features in a single system have different frequencies, all memory in the system will function at the lowest frequency present. Consult with a Power p Specialist on the installed server and current memory options if optimal memory bandwidth is critical.
- When implementing partitioning, account for the resource overhead required by the system hypervisor. The rule of thumb is 8% of total system memory (see the sketch after this list).
- As part of PowerVM there is an appliance server with which you can associate physical resources and share them amongst a group of logical partitions. The Virtual I/O (VIO) Server can use both virtualized storage and network adapters, making use of the virtual SCSI and virtual Ethernet facilities. The VIO Server will use CPU resources, and for mission-critical workloads two VIO Servers are recommended.
- If using the VIO Server to manage micro-partitions and share resources, validate that the selected Power p model will support the requirements of the sized partitions and the VIO Server(s).
- It is best practice to use separate volume groups for applications and user data in order to keep the rootvg volume group reserved for the AIX operating system. This makes rootvg more compact and easier to manipulate if required. This practice potentially impacts internal disk requirements, since disk must support both the operating system and the application software.
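A quick way to apply the 8% hypervisor memory rule of thumb above when totaling partition memory. The partition sizes and the per-VIO-server allowance below are hypothetical examples, not IBM figures.

```python
# Apply the ~8% hypervisor memory rule of thumb when adding up partition memory.
# Partition sizes and the VIO server allowance below are hypothetical examples.

partition_memory_gb = {"prod_db": 64, "prod_app": 32, "test": 16}
vio_servers = 2
vio_memory_gb_each = 4          # assumed allowance per VIO server (placeholder)
HYPERVISOR_OVERHEAD = 0.08      # rule of thumb: ~8% of total system memory

partition_total = sum(partition_memory_gb.values()) + vio_servers * vio_memory_gb_each

# Partitions plus hypervisor overhead must fit inside the physical memory,
# so divide by (1 - overhead) rather than simply adding 8% on top.
required_physical_gb = partition_total / (1 - HYPERVISOR_OVERHEAD)

print(f"Partition + VIOS memory: {partition_total} GB")
print(f"Physical memory before rounding to card sizes: {required_physical_gb:.0f} GB")
```

Round the result up to the next available memory card increment, as noted in the general hardware configuration considerations.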

Cloud on Power
- IBM has two formal offerings for Cloud on Power: CloudBurst on Power and Starter Kit for Cloud on Power.
- These offerings are cross-brand solutions consisting of Power Systems, Storage Systems, and software in pre-designed and tested solutions.
- CloudBurst and Starter Kit are typically positioned for private cloud environments.
- CloudBurst on Power ranges from a minimum design with only a management node to a maximum design with 10 compute nodes. It is important to understand the compute node requirements and workload resource requirements.
- Storage requirements are also important considerations for a CloudBurst on Power design; validate the storage needs.
- Complete the questionnaire to obtain a solution design: http://w303.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4370
- Starter Kit for Cloud on Power is a software offering from STG. IBM has also created reference configurations around this offering, based on rack and blade form factors. Understand the cloud environment the customer requires and verify support with Starter Kit.
- In addition to the formal offerings, there are custom Cloud on Power solutions that IBM will size based on the customer environment and specifications.
- The Techline Power Systems team should be engaged to assist with Cloud on Power requests.

System x & Blade Center Sizing / Configuration Design Considerations


Define the Business Problem
- Solution design involves choosing the right hardware and software to solve a defined business problem, as well as providing some level of growth for future technology investment.
- The fundamental challenge is to address the business problem(s) to be solved and design a balanced solution to address those problems.
- Hardware and software are only a means to an end.
- A well-designed solution should allow for growth within the time-scale and expectations the solution was scoped for.

How to Scope a Solution Design


- Define the problem.
- Work to tie a monetary cost to each component of the problem.
- Work to evaluate current system(s) performance (if applicable).
- Help to identify the phases in which the solution is to be implemented.
- Identify expectations for the solution.
- Identify success criteria.
- Identify acceptance criteria.

Non-load-related questions that directly impact sizing
- What are the solution availability requirements (SLAs)?
- What is the requirement for system reliability?
- What are the backup and recovery (RPO, RTO) implementation and guidelines?
- Is scalability a requirement?
- What is the measure of performance?
- What is the cost?
- What is the overall solution lifecycle?
- How will this solution fit into the disaster recovery plan?
- Are there specific security considerations?
- What are the systems management requirements?

Storage-related questions which can guide sizing
- Is there existing storage? How much total storage, and how much of it is being used?
- What types of storage technologies are currently in use: DAS, NAS, SAN, iSCSI, tape, virtualization, etc.?
- Are there additional platforms connected to the storage?
- What's the growth rate of the storage requirements?

- What is driving the growth rate (new applications, new services, multiple copies of data, etc.)?
- Related to the questions about business initiatives above, what new or anticipated applications will drive storage growth?
- Are there Storage Area Networks currently installed?
- Do you anticipate being impacted by regulatory requirements related to data archiving?


System Design Fundamentals


What is a Balanced System?
- A balanced system is a server with enough I/O and memory to keep the processor cores executing more work-related threads per clock cycle than system idle threads.
- A balanced system is designed around optimally executing workloads rather than simply having one component run extremely fast.
- Balanced systems are not measured by one simple metric (i.e. % CPU utilization or % free MB) but rather by their capacity to process and respond to requests for information. The units of response could be transactions, web pages presented, users logging into a system, or any other metric used to measure a user's interaction with the system.

Hardware Platform Decisions


Uni-socket systems
- Useful for applications that do not scale well across multiple sockets.
- Useful when a large memory footprint is not required.
- Not recommended when more advanced system redundancy features are required.

DP systems
- Workhorse models designed for most enterprise applications that scale well up to 2 sockets and 12 cores.
- Do the applications scale well with faster FSB and core clock speed technologies?
- Useful when more system redundancy and manageability features are required than are available on uni-socket systems.
- Useful for scale-out solutions.
- Useful when more internal storage is required.

MP systems
- Targeted for consolidation, virtualization, database, and enterprise applications.
- Useful when large memory footprints or memory scalability are required.
- Implement when systems designed for high levels of availability are required.
- Useful for scale-up applications.
- Capable of hosting a large number of I/O adapters and high I/O bandwidth.

Modular
- Traditional compute paradigm.
- Recommended if non-standard adapter configurations are required.
- Recommended if large memory footprints, internal disk, or an isolated computing environment are required.
- All costs associated with the system are included in the purchase price.

Blade
- Compute node/shared infrastructure paradigm.
- Optimized for scale-out workloads.
- Useful if power, cooling, and cabling requirements are a priority.

- Useful for high-density compute and memory capacity.
- Recommend external boot (boot from SAN/iSCSI) to leverage the platform benefits.
- The cost associated with the platform is shared among the blades.


System z Sizing / Configuration Design Considerations


- Engage an IBM System z Specialist to discuss the specific solution requirements and brand-specific characteristics of System z.
- Encourage the IBM account team to engage the System z IT Architect (zITA) to assist with new workload application design. Better application design can lead to better performance.
- Although the terms capacity planning and sizing are sometimes used interchangeably, it is important to differentiate them. Capacity planning tools are typically more detailed, regularly used by customers, directly linked to performance collection/monitoring/management tools for existing systems, and provide data trending analysis. Keep in mind that some tools provide a blend of both sizing and capacity planning functions.
- No matter what sizing tool is used, the accuracy of the sizing estimate produced is only as good as the input from the customer. Application sizing analysis does not constitute a performance guarantee.
- Customers pay a premium for System z hardware and software; therefore, the accuracy of a sizing can be more important for System z than for other platforms.
- When doing a sizing for any large IBM System z processor, consideration must be given to handling the impacts of PR/SM.
- Sizing enablement and usage should be based on peak workload levels, not averages. It is most important to reflect customer workload requirements in this manner. Measurements (or estimates) of customer workload levels should be characterized during a 15-minute interval of peak demand, not on daily/weekly/monthly average workload levels (as some trending tools provide). (See the sketch at the end of this section.)
- On IBM System z hardware, where many workloads tend to run in each system image, CPU utilization can be 100%, assuming all workloads don't have equal priority.
- Although the sizing tools provide processor MIPS, be careful how you use (or abuse) them. No existing MIPS table can adequately quantify the capacity of one System z processor against another. All MIPS tables, regardless of source (LSPR or otherwise), characterize the capacity of each processor with only a single number. Realizable capacity depends on the type of workload(s) run and on the specific LPAR configuration implemented. MIPS tables generally ignore these aspects of relative capacity.
- Does the customer plan to use z/VM? Ensure that the sizing estimate takes this into account.
- Will the target System z processor have multiple CPU engines? System Control Program (SCP) software is designed to manage multi-engine processors. Some portion of the instructions executed must go toward this N-way management function, as opposed to application work. Any use of instruction execution rates as a capacity indicator on these systems would therefore include processing time that does not represent application work. The more CPU engines in the LPAR, the higher the N-way management time.

- It is important to determine the best LPAR-to-engines mix for the sizing. Depending upon the System Control Program, it may be better to size the processor as two LPARs with fewer engines assigned to each LPAR instead of one LPAR assigned all the engines.
- Will the target hardware be a z10 EC or zEnterprise processor running z/OS with HiperDispatch? Workloads that are fairly CPU-intensive (like batch applications) will see only small improvements. Workloads that tend to have common tasks and high dispatch rates, as often seen in transactional applications, may see larger improvements, depending on the size of the z/OS images involved. The sizing should consider the effects of HiperDispatch for the specific workload to be run.
- No memory sizing recommendations are made for System z processors; however, guidelines on memory requirements are provided.
- Topics to consider: implications of Parallel Sysplex, HiperSockets, DB release levels, hardware data compression, z/VM, specialty engines, LPAR overhead, and the growth and characteristics of installed and/or new workloads.
- The future of application sizing has evolved toward solution sizing as opposed to single software product sizing. The System z brand sizing tools currently support SOA solution sizing. Stay tuned.
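The guidance above to size on a peak 15-minute interval rather than a daily or weekly average can be implemented as a rolling-window maximum over utilization samples. The sketch below is illustrative; the 5-minute sample spacing and the sample values are assumptions about the monitoring data, not System z requirements.

```python
# Find the busiest 15-minute window from interval utilization samples,
# instead of sizing on a daily/weekly average.
# Assumes evenly spaced samples; the 5-minute interval here is hypothetical.

def peak_window_average(samples, sample_minutes=5, window_minutes=15):
    per_window = window_minutes // sample_minutes
    best = 0.0
    for i in range(len(samples) - per_window + 1):
        window_avg = sum(samples[i:i + per_window]) / per_window
        best = max(best, window_avg)
    return best

# Hypothetical CPU utilization samples (%) across part of a day.
cpu_samples = [35, 40, 42, 55, 78, 92, 95, 88, 60, 45, 38]
print(f"Average of all samples: {sum(cpu_samples) / len(cpu_samples):.1f}%")
print(f"Peak 15-minute average: {peak_window_average(cpu_samples):.1f}%")
```

The gap between the overall average and the peak window illustrates why sizing on trending-tool averages understates the capacity the customer actually needs at peak.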


Software Sizing Considerations


General Software Sizing Considerations
- Engage a Hardware Platform Specialist early to discuss the specific solution requirements.
- Deep knowledge of the application being sized is key.
- Understand application benchmarks.
- Understand user types, transaction types, and weight factors.
- Understand the workload to be sized. Avoid double-counting active users, which causes over-sizing of an opportunity.
- Understand peak workload times when sizing (ensure input specs are for "peak" times). Understand the characteristics of peak workloads (online workload versus batch processing).
- Be able to identify bottlenecks (application, CPU, memory, disk, network).
- Data and message size can have a large influence on processing costs for:
  - SSL
  - Data encryption
  - XML parsing and XSL transformation
  - Web page file serving (static and dynamic files)
  - Message processing: JMS, WebSphere Message Broker (WMB), WebSphere Enterprise Service Bus (WESB), WebSphere MQ

Sizing Guidelines for Connections Solutions


- Use the sizing questionnaire form to solicit assumptions.
- Understand the platform architecture, how it operates with Connections, and the limitations associated with given hardware under specific situations.
- Determine the requirements for high availability.
- Determine the need for Edge Proxy server sizing.
- Never recommend hardware that doesn't allow for expansion.
- Understand the input requirements for sizing Connections and have a clear understanding of how they affect the sizing.
- Understand the impact on server sizing when accessing Activities with Lotus Notes.
- Understand that Connections is a transactional application and is sized based on frequency of use of a service rather than concurrent usage.
- Determine customer expectations towards different user populations (internal/external users) and deployment plans: will they deploy both populations on the same or separate servers?


Cognos 8 Business Intelligence


Required Knowledge
- IBM Cognos 8 Business Intelligence Suite (i.e. studios, user roles, BI content types, etc.). This can be obtained by taking a basic Cognos reporting course, such as IBM Cognos BI Administration (V10.1) - B5155.
- Use cases for the IBM Cognos solution that are specific to the projected implementation (i.e. named and concurrent users, batch report throughput, content retention times, audit requirements, etc.).
- Common software and hardware offerings from IBM and 3rd party vendors (i.e. data sources, middleware, operating systems, server platforms, etc.).

Understand the User Community
- Named users are all people who have access to the C8 solution.
- Active users are the subset of the named user community who are logged on to the system and may or may not be interacting with the C8 system at any given time.
- Concurrent users are the subset of the active user community who at a given time are either submitting a request or already have a request in progress.

Understand User Roles and Concurrency

Understand the Application Access Patterns
- Are you planning to deploy this solution primarily as a live decision-support BI application, with users executing live queries throughout the workday?
- Are you planning to deploy this solution primarily for rendering BI content outside core business hours, where most end users will access pre-rendered output?
- Are you planning to deploy this solution to handle a mixed workload, where interactive users execute live queries along with (but not necessarily at the same time as) batch report execution and other BI content generation tasks such as PowerCube builds?

Understand how Cognos BI content will be used in portals, Cognos Dashboard, or Cognos Connection
- Using Cognos BI content in portal pages (either Cognos Connection or third party) or dashboards developed with Cognos Go! Dashboard has an impact on user concurrency when multiple portlets are used on one page to run reports on demand or retrieve pre-rendered report output. For example, when a user navigates to a portal page that hosts four reports, each in a separate portlet, report execution request concurrency increases four-fold at that moment in time (see the sketch after this list).

Understand Application Complexity
- Query to data source
- Output of reports
- Local processing: processing of data (i.e. joining, aggregating, case logic, etc.) inside the IBM Cognos application tier rather than on the database or within the OLAP data source.

Batch Throughput
- Throughput for scheduled reports - batch or burst processing.

Cube Building (OLAP)
- Cube building performance is driven by both the query performance and the time necessary to build the cube using the query results.

Metrics, Content Management, and Audit
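The named-to-active-to-concurrent funnel and the portlet multiplication effect described above can be combined into a simple request-concurrency estimate. Every ratio in the sketch below is a hypothetical planning assumption to be validated with the customer, not Cognos guidance.

```python
# Rough estimate of concurrent report-execution requests for a Cognos deployment.
# Every ratio here is a hypothetical planning assumption.

named_users = 5000
active_ratio = 0.20          # share of named users logged on at peak (assumption)
concurrent_ratio = 0.10      # share of active users with a request in flight (assumption)
portlets_per_page = 4        # reports rendered per portal page, per the example above

active_users = named_users * active_ratio
concurrent_users = active_users * concurrent_ratio
# Each portal page that hosts several report portlets multiplies request concurrency.
concurrent_requests = concurrent_users * portlets_per_page

print(f"Active users at peak:       {active_users:.0f}")
print(f"Concurrent users:           {concurrent_users:.0f}")
print(f"Concurrent report requests: {concurrent_requests:.0f}")
```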

Composite Solution Sizing


- A Composite Solution Sizing aggregates solution non-functional requirements into a single document. Based on the Composite Sizing Method developed by Techline, Composite Solution Sizing is not a tool, but a method that improves insight into the resources required across a solution.
- Why Composite Solution Sizing?
  - It provides an encapsulated view of the requirements for the total solution.
  - The results improve correlation between the business problem and the proposed solution.
  - It reduces the steps required by the customer to submit a solution sizing request.
  - It identifies potential bottlenecks and/or overlaps in functionality.
  - A Techline representative is assigned ownership of the solution sizing.
  - Planning takes place prior to sizing execution to synchronize efforts and consider commonalities.
  - A single document is returned to the customer outlining all requirements for the solution.
- Techline Composite Solution Sizing process:
  - A Techline sizing representative is assigned Solution Sizing ownership; the parent SR owner takes the lead on the Composite Sizing process.
  - The owner acknowledges the requester and gathers any further details.
  - The owner oversees the creation and assignment of child requests.
  - If required, the Composite Sizing Questionnaire is customized for the specific customer.
  - The owner coordinates a meeting to outline the sizing strategy: platform/model, blurring issues, etc.
  - All sizings are returned to the solution sizing request owner, as well as the customer.
  - The owner aggregates the results and creates a custom customer deliverable.

DB2 Solutions Sizing Guidelines


- Understand DB2 features, server and storage technology, changes to technology, resources, tools, and rules of thumb.
- Manage end-user expectations for the type of sizing being performed; in many cases the customer is actually expecting a capacity plan.
- Size disks by understanding I/O characteristics.
- Size balanced systems for processor, memory, and disks.
- Always verify sizing assumptions by asking questions.
- Identify any bottlenecks.
- Engage the Toronto Lab for all pureScale and XML sizings.

- Use multiple tools, rules of thumb, and sizing methodologies in producing a recommendation.
- Set expectation levels when sizing a System z to workstation DB2 migration.
- Always recommend InfoSphere Balanced Warehouse for warehouse applications. These solutions have been pre-tested and pre-sized.

Sizing Guidelines for Domino Solutions


- Understand the differences between the various Lotus Notes clients the customer will employ (this has a substantial impact on the server sizing).
- Understand the platform architecture and how it operates with Domino, as well as any limitations of a given hardware platform in certain situations.
- Distinguish between email and the customer's Domino application workloads, and size accordingly.
- Distinguish between casual, moderate, or heavy client usage based on all specifications in the sizing questionnaire, not just the customer-perceived workload (i.e. mail templates, agents running, full text indexing, local replicas, port encryption, roaming users, etc.).
- Validate concurrency with the customer if questioning the value provided. (Customers confuse concurrency with complexity, and these are two distinct parameters.)

Enterprise Content Management (ECM) Solutions Sizing Guidelines


- Use the sizing questionnaire form to solicit assumptions.
- Understand sizing guidelines, methodologies, tools, and rules of thumb (i.e. model average system utilization to be less than 40%).
- Factor in system demands for services and tasks running in the background or batch-oriented (i.e. the capture subsystem, report ingestion).
- Recommend workload management where applicable for optimizing hardware usage (i.e. migration to offline storage, creation of report bundles, email distribution and notification, etc. could potentially be scheduled during times of low system utilization).
- Understand the desired scaling method, vertical or horizontal. Select a server model that accomplishes current and future scaling requirements for both processors and memory.
- Did the customer request specific models? If not, use a middle-of-the-road server footprint, such as the IBM System x3650 M3 (Intel/AMD) or Power 720 / Power 740 (IBM POWER7).
- Keep in mind that using faster processors, such as POWER7, does not necessarily warrant reducing the number of processors (cores) to support a described workload. The ECM application/solution being sized is typically I/O intensive, so faster CPUs may improve performance (response time and throughput) by only 5% to 10%.
- Use IBM's ECM performance benchmarks, rather than industry or IBM Power hardware benchmarks, to analyze performance.

- Understand full text indexing and content-based searches and retrievals (CBR).
- Understand the storage implications of a large archiving solution.

Sizing Guidelines for IBM Smart Analytics


- Use the sizing questionnaire form to solicit assumptions.
- Understand sizing guidelines, methodologies, tools, and rules of thumb.
- Factor in system demands for services and tasks running in the background or batch-oriented.
- IBM Smart Analytics sizing depends on several factors: user space, total space, concurrent users, and total named users.
- With POWER7, IBM Smart Analytics System users can optimize their current analytics workloads, delivering results faster with fewer resources and less energy.

Sizing Guidelines for ISV (Independent Software Vendor) Solutions


- The Global Techline ISV Sizing Team provides sizing support for the following ISVs: SAP, Oracle (Oracle Database, Oracle eBusiness Suite, PeopleSoft Enterprise, Demantra, Oracle Transportation Management, JD Edwards EnterpriseOne, JD Edwards World, Siebel, Oracle Business Intelligence Enterprise Edition), and Cross Industry Applications (Lawson, Infor, ACI, SAS, Ariba).
- Understand and recommend the ISV application architecture to achieve optimal performance. Depending on the ISV, a two-tier or three-tier architecture is supported, and the combinations of database, application, and/or web tiers will vary.
- Provide sizing estimates for both new and installed ERP implementations.
- Provide a complete hardware solution with server systems (ranging from System x with Windows/Linux, through Power i/AIX/Linux, to System z) and storage systems (ranging from DS to NAS) where applicable.
- Size for 50-90% target utilization, depending on the server platform, size of server, and potential virtualization benefits. For System z, exploit WLM, HiperSockets, and specialty engines.
- Recommend virtualization for non-production and production estimates when appropriate.
- Size storage based on total customer requirements and performance (include non-production storage, if possible), capacity utilization (unused space), growth, and storage code-sets (Unicode, ASCII).
- Assume RAID and FlashCopy for storage.
- Size for storage compression on System z.
- Size for total systems: CPU, storage, memory, and I/O.
- Engage FTSS and the IBM Oracle and SAP International Competency Center resources, as required.

Oracle Database
The Techline Sizing Team has a sizing methodology for custom applications that do NOT follow the sizing processes of other ISV applications. The nature of the Oracle Database workload is described as either OLTP (Online Transaction Processing) or non-OLTP. The sizing estimate is done only for the production database tier. High availability is centered on Oracle RAC. There are no typical rules of thumb, since each application is custom. The WLE (Workload Estimator) tool is used to model the Oracle Database; however, the tool has limitations, and manual calculations must supplement the tool output. The ISV Sizing Team does not handle sizing for disaster recovery environments at this time.

Oracle Insight
Oracle Insight software captures performance statistics provided by a production Oracle 10g (or above) database. The captured statistics are written to the hard drive of a Windows PC dedicated to the collection process. When the collection process is complete, the statistical data is verified and compressed before being sent to IBM for analysis. IBM then sends the customer a report detailing how the production database was utilized. The provided data collection tool is designed to have a minimal impact on the production database environment. The data collection tool is packaged as a Windows install image for convenient installation on a Win2000/WinXP PC. README documentation is packaged with the software. Sizing is only done for production at this time. The Insight for Oracle tool MUST be installed by a person who has DB Admin privileges on the system being monitored. The best results are obtained when the Oracle DB system is being used during month-end processing; data collected during that window usually captures the best workload for sizing.

Oracle Applications Tier I (JD Edwards World, JD Edwards EnterpriseOne, OBIEE, Oracle eBusiness Suite, PeopleSoft Enterprise, and Siebel)

I. Architectural Sizing Considerations

The sizing process takes into account the external client's production, non-production, high availability, disaster recovery, and security requirements. The sizing input from the client focuses on the requested implementation architecture, online users, batch workload, user growth, and business growth. Depending on the ISV, two-tier or three-tier architectures are sized. In some cases, other details that can be taken into consideration are reporting, ad hoc query, and interfaces. Detailed focus is given to analyzing performance data for clients who already have the ISV package implemented and are looking to move to another platform, upgrade the ERP version, and/or add new workload to their existing workload. Each ISV has its own unique application architecture. Non-production requirements vary by client. A typical non-production environment is 30% to 50% of the production workload. Some clients prefer to have a pre-production environment that is 100% of the production workload so they can test their production environment before migrating to production. High availability is achieved by PowerHA or Oracle RAC for Power systems and by Oracle RAC, Microsoft Clustering, or an idle failover server for System x. Disaster recovery requirements vary by client. Some clients prefer to implement high availability / disaster recovery on separate physical servers at a dedicated disaster recovery site in the event the primary site is not

accessible. Typically, disaster recovery is sized at 50%-100% of the production workload (see the sketch below). Clients' security concerns are addressed by duplicating the number of web servers (one set of servers in front of the firewall and another set behind the firewall).
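The architectural percentages above (non-production at 30%-50% of production, optional full-size pre-production, disaster recovery at 50%-100%) add up to a total landscape estimate along the lines of the sketch below. The production figure and the specific ratios chosen are hypothetical placeholders.

```python
# Aggregate a hypothetical ERP landscape from the percentages quoted above.
# The production capacity value and the chosen ratios are placeholders.

production_capacity = 100.0          # sized production requirement, in any capacity unit
non_production_ratio = 0.40          # typical range is 30%-50% of production
pre_production_ratio = 1.00          # some clients want a full-size pre-production copy
disaster_recovery_ratio = 0.75       # typically sized at 50%-100% of production

landscape = {
    "production": production_capacity,
    "non-production": production_capacity * non_production_ratio,
    "pre-production": production_capacity * pre_production_ratio,
    "disaster recovery": production_capacity * disaster_recovery_ratio,
}

for tier, capacity in landscape.items():
    print(f"{tier:>17}: {capacity:6.1f}")
print(f"{'total':>17}: {sum(landscape.values()):6.1f}")
```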

II. Rules of Thumb


- Depending on the ISV and platform, rules of thumb for memory sizing are performed on a memory-per-core basis for production. Memory sizing in non-production environments must consider memory allocation for multiple database instances.
- The ISVs have typical online user concurrency rates based on the various application modules within the ISV package.
- Depending on the ISV, sizing recommendations include an overhead of 15% when Power Linux solutions are sized.

III. Sizing Best Practices


- When an Oracle RAC solution is architected for production, it is recommended that a RAC solution also be implemented in the client's pre-production or non-production environment, so the client can properly test the RAC functionality before migrating to production.
- Clients are encouraged to implement a highly available solution, since these applications are mission critical.
- From a security perspective, separate LPARs or separate servers are added based on the client's DMZ requirements.
- If a client needs close to or more than seven System x rack servers, the Sizing Team will typically also provide a System x BladeCenter solution so that the client can reduce footprint and floor space and benefit from the ease of manageability provided by the blade infrastructure.

Oracle Applications Tier II (OTM, Demantra, APS, and Hyperion)


- The Sizing Team must receive specifications from Oracle for some of the Tier II applications, which are factored into the IBM sizing recommendation.
- The sizing process includes production and non-production environments. The non-production architecture typically mimics the production architecture.
- There are no rules of thumb, since these are complex sizings where clients' requirements differ greatly.

SAP Insight
This software is designed to provide a convenient, high-level workload analysis for a production SAP system. It captures performance and workload statistics generated by the client's production SAP system(s). The captured statistics are written to the hard drive of a Windows PC dedicated to the collection process. When the collection process is complete, the data is packaged and sent to IBM for analysis. The Insight for SAP Analysis Report provides performance and utilization statistics for the ERP system as a whole, for each instance (application server) in the ERP system, and for each application server and database host in the ERP system. The collection tool is designed to have minimal impact on the client's production environment.

SAP

I. Architectural Sizing Considerations


The SAP sizing process is very mature and covers the complete architecture, taking into account the client's production, non-production, high availability, disaster recovery, and storage requirements. The sizing input from clients focuses on a particular implementation architecture, online users, and batch workload. Some clients additionally provide growth information. Detailed focus is given to analyzing performance data for clients who already have SAP installed and are looking to upgrade the SAP version level and/or add new workload on top of their existing workload. SAP application architecture can be two-tier or three-tier. A two-tier architecture combines the database and application tiers in the same partition or server, with the end user interface in a separate partition or server. In a three-tier architecture, the database, application, and end user interface are all in separate partitions or separate physical servers. The sizing specialist determines which architecture is best suited for the client based on the workload and high availability requirements. Because servers are now more powerful, it is quite common to architect two-tier rather than three-tier solutions. Non-production is typically installed in a two-tier environment. Non-production is sized as a percentage of the production workload or based on the number of users per system. It is recommended that at least three different non-production landscapes (e.g., Development, Test, Training) be implemented. High availability and disaster recovery can vary anywhere from 50% to 100% of the production workload. Most clients prefer to have 100% of the resources for their production workload available for high availability and disaster recovery. High availability is achieved in a number of ways, utilizing different technologies for SAP implementations. Most Power solutions are architected with PowerHA unless the client has indicated a preference for Oracle RAC. System x solutions are provided with Microsoft Clustering for clients who are using Windows, and there are a number of different Linux clustering solutions for clients interested in Linux. Clients who run SAP on DB2 need fewer hardware resources due to the benefits of DB2 V9 compression. Storage estimates are based on total customer requirements and performance (including non-production storage, if possible), capacity utilization (unused space), growth, storage code sets (e.g., Unicode, ASCII), RAID, and FlashCopy.

II. Rule of Thumb

It is recommended to include at least a 20% overhead for Power Linux solutions.

III. Sizing Best Practices


SAP sizing estimates are heavily virtualized. Power solutions consist of PowerVM, hypervisor requirements, shared processor pools, and VIOS (Virtual I/O Server). Power AIX solutions include very detailed LPARs so that clients have more flexibility to adjust parameters as needed per LPAR. In general, Power i solutions consist of fewer partitions than Power AIX solutions because those clients tend not to change parameters and consider the environment to be user-friendly. Power Linux solutions are architected with an overhead, since Linux on Power is not as mature as AIX and typically consumes more hardware resources. System x virtualization sizing estimates typically incorporate VMware; virtualized System x solutions specify virtual CPUs per virtual machine and memory per virtual machine, and System x solutions assume that hyper-threading is used. System z sizing estimates include a detailed breakdown of the Central Electronics Complex (CEC), the Coupling Facility (CF) specialty engine, and z/VM overhead. SAP is supported on System z's zIIP and IFL specialty engines but is not supported on zAAP engines. System z solutions assume hardware data compression for storage estimates.

Cross Industry (ACI, Lawson, SAS)

I. Architectural Sizing Considerations


The Lawson sizing process takes into account the client's production, non-production, high availability, and disaster recovery requirements. Production workload is virtualized wherever possible; however, if problems occur in the production environment, clients may have to reproduce them in a non-virtualized environment. Non-production is typically sized at 25% of the production workload and typically includes two to three database instances consisting of Development, Test, and Training environments. Non-production environments are architected on separate LPARs or separate physical servers based on workload and hardware platform selection. High availability and disaster recovery environments are typically a duplication of the production hardware resources. Some Cross Industry ISVs include storage, with high-level disk estimates of usable and mirrored disk capacity requirements. Others will combine certain tiers together for small workloads to minimize the hardware resource requirements.

II. Rules of Thumb


Memory is sized on a per-core basis that depends on the particular hardware platform selected. Some ISVs take into account 10% data growth and 10% user growth over three years and have minimum hardware requirements even for very small workloads. From a disk perspective, a minimum of six physical disk drives is recommended.
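A minimal sketch of the three-year growth rule follows; the starting database size and user count are assumed values, and compound (rather than simple) growth is an assumption since the exact method varies by ISV.

```python
# Assumed inputs; whether the 10% per year compounds or is applied simply
# varies by ISV, so compound growth is just one reasonable interpretation.
def grow(value: float, annual_rate: float, years: int) -> float:
    return value * (1.0 + annual_rate) ** years

current_db_gb = 200
current_users = 400

print(round(grow(current_db_gb, 0.10, 3)))   # ~266 GB of data after three years
print(round(grow(current_users, 0.10, 3)))   # ~532 users after three years
```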

III. Sizing Best Practices


In the Cross Industry ISV space, it is recommended to run non-production at a reduced capacity, if that is acceptable to the client, so that more hardware resources can be utilized for high availability in the event of a failure. Partitioned solutions are recommended over standalone physical servers, especially for production. From a disk perspective, some of these ISVs recommend that clients use a larger number of smaller drives rather than fewer larger drives.

Sizing Guidelines for Lotus Expeditor Solutions


- Ensure that the sizing request is for supported systems and operating systems.
- Ensure that the requester understands the difference between the initial provisioning workload and ongoing client maintenance for peak DMS operations.
- Validate entries in the sizing questionnaire against expected values based on the types of applications to be deployed.
- Validate any questions raised on review of the questionnaire with the requester.

Sizing Guidelines for Lotus Sametime


- Determine if the request is for a ST Advanced sizing and adjust the process accordingly.
- Determine requirements for HA for both IM and Web Conferencing.
- Determine the need for ST Gateway sizing.
- Consider any platform/architectural requirements or impacts on sizing results.
- Distinguish any workload modifiers (encryption, recording, etc.) and determine their impact on the sizing.
- Ensure that the listed peak number of concurrent users overall, peak concurrent users per conference, and peak number of web conferences make sense in regard to each other.
- Validate any questions raised on review of the questionnaire with the requester.

Sizing Guidelines for Quickr Solutions


- Understand the differences between the Quickr (Domino) and Quickr (J2EE) offerings.
- Understand the platform architecture and how it operates with Quickr, as well as any limitations of a given hardware platform in certain situations.
- Distinguish between methods of use (browser, connector, blogs, discussion forums).
- Understand the storage requirements by method of use, with the number of places, documents, rooms, etc.
- Validate answers to the questions in the questionnaire; many responses here provide an overview of the whole solution.

Sizing Guidelines for Rational Solutions


- Ensure the questionnaire is complete and the request is for a supported operating system.
- Document customer requirements, assumptions, and best practices in the deliverables.
- If necessary, discuss with the requester and the customer to resolve any outstanding issues.

Sizing Guidelines for Rational Change Management


Products covered: IBM Rational ClearCase and IBM Rational ClearQuest

The IBM Rational ClearCase and ClearQuest 7.1 releases introduce the Change Management (CM) Server, which provides server-side support for Wide Area Network (WAN) interfaces to Rational ClearCase and Rational ClearQuest. CM Server runs within the context of a unified application server that combines IBM WebSphere Application Server and IBM HTTP Server to provide web support for Rational ClearQuest Web (CQ Web) and Rational ClearCase Remote Client (CCRC). It leverages the performance, security, and scalability of WebSphere Application Server (version 6.1.0.15). To optimize CM Server scalability and performance under large user loads, performance tuning on the CM Server is required. For hardware planning, server systems with at least four CPU cores/threads and 4 GB of memory are recommended for CM Server deployment on the Windows and Solaris platforms. The CM Server is implemented as one or more J2EE applications hosted in the WebSphere Application Server (WAS). CM Server load balancing with the WebSphere Application Server Edge component is a viable solution for increasing CM Server scalability. Please refer to the IBM Rational ClearCase 7.1 CM Server Load Balancing Guide with WebSphere Application Server Edge Component white paper for setup and configuration details for CM Server load balancing. In addition to CM Server load balancing with the WAS Network Deployment Edge component in version 7.1, ClearCase/ClearQuest and CM Server load balancing can be accomplished with the IBM HTTP Server that ships with the Rational ClearCase and ClearQuest software. Rational CM high availability is supported for asymmetric (active/standby) configurations, where Rational ClearCase/ClearQuest is active on only one cluster node at a time. By default, when a customer chooses "high availability" in the questionnaire, we add a server to each tier; high availability may depend on specific customer requirements which aren't determined by the sizing. When sizing CCRC, we assume the Unified Change Management (UCM) workload scenario and a 30% contingency factor.

Sizing Guidelines for Rational Jazz Products


Products covered: Rational Team Concert, Rational Quality Manager, and Rational Requirements Composer

The minimum hardware requirement for each Jazz server is an Intel dual-core Xeon processor and 4 GB of RAM. All Rational Jazz product sizing recommendations assume server-class machines (for example, IBM System x or better) with a fast RAID or Fibre Channel disk subsystem. Even though the Jazz application and data can reside on a single server, it is good practice to isolate the data from the application; therefore, a deployment that separates the application and database servers is highly recommended. For network connectivity in dual-tier configurations, the recommendation is to minimize latency between the application server and the database server (no more than 1-2 ms). For large-scale deployments, consider a Storage Area Network (SAN) or Network Attached Storage (NAS) solution for storage. For best performance, it is recommended to install the database on a physical server as opposed to a virtual system (e.g., a VMware partition).

Sizing Guidelines for Enterprise Modernization Products


Product covered: Rational Host Access Transformation Services (HATS)

IBM Rational HATS runs on WebSphere Application Server (WAS); a limited-license copy is included. As a WebSphere application, HATS transforms 3270 and 5250 host screens. Its main prerequisite is WAS, so it requires the same hardware, operating system, and software as WAS. HATS can run on a single server or on multiple servers. For multiple servers, the clustering capability of WebSphere Application Server Network Deployment (WAS-ND) is required; with WAS-ND you can also take advantage of its deployment manager to ensure high availability (HA). The HATS Toolkit is available as a free download. Any HATS application you create with it is limited to two connections, which means that up to two people can use the application and assess its value. Once you buy the HATS product, a key (a .jar file) is provided to allow more users. When sizing HATS, we use a 30% contingency factor to account for processor, disk, network, or other unknown factors. We also project with a growth factor (generally 50%) to help with planning for the future.

Sizing Guidelines for Tivoli Storage Manager Solutions


- Be conservative with the numbers that are used for performance calculations.
- Recommend a configuration that will be between 40% and 60% utilized during the nightly backup to allow a margin of error in the calculations.
- Document the assumptions that are made.
- Never recommend hardware that doesn't allow for expansion (CPU, memory, adapters).
- Use the fastest processors that are available.
- Utilize as many drives as physically possible; a larger number of smaller-capacity drives is better for TSM performance.
- Understand what information is required from the customer to allow a sizing to be performed.
- Get valid data from the customer.
- Document, document, document: detail the numbers and how the conclusions are derived.

WebSphere Portal Guidelines


Portal and WCM can be installed in several different configurations. Make sure you understand the differences between local and remote rendering, both in terms of cost and performance and in terms of the technical differences between the two. The spreadsheet helps calculate the page-per-second rate, but often you need a clear understanding of how all the input items affect the required workload calculations. Also, when determining workload, there is often confusion about the difference between "active users during peak hour" and "concurrent users": https://w3.tap.ibm.com/weblogs/bills_sizing_blog/entry/websphere_portal_sizing_concurrent_users

By default, when a customer chooses "high availability" in the questionnaire, we add a server to each tier. High availability may depend on specific customer requirements which aren't determined by the sizing; this can range anywhere from cold or warm standby servers to highly available multi-cluster "gold standard" topologies. Our sizings determine what is required to meet the workload but may not factor these other functional requirements into the overall configuration.

Non-production systems are not included in our sizings. The requirements for these are often based on business requirements as opposed to expected workload. For example, a QA server may range from a duplicate of production down to a minimal system, depending on what types of QA procedures the customer wants to have in place.

WebSphere Application Server Solutions Sizing Guidelines


- Detailed sizings are encouraged over high-level sizing estimates.
- Scrutinize the sizing input and look for things that appear to be out of range.
- Deep knowledge of the application being sized is key. Take the time to understand the application and requirements and refine the sizing input accordingly.
- A deep understanding of the sizing patterns in the tool is important in order to articulate a comparison with the client's application.
- Understand what deployment standards are currently being used within the client environment (i.e. standalone, dedicated/virtual partitions, micropartitions).
- Understand the desired scaling method, vertical or horizontal. Select a server model that accomplishes current and future scaling requirements for both processors and memory.
- Did the customer request specific models? Determine why the given model is the best solution. Can a better alternative be proposed?
- Redundancy: reduce single points of failure and increase reliability, availability, and serviceability using redundant cores, adapters, and dual power supplies on servers/rack PDUs.
- Look for the most cost-effective alternative to provide the customer. For example, a 4-core requirement for performance has several options offering differing degrees of high availability: 2 x 4-core, 2 x 2-core, or 3 x 2-core. Each option has its own price and performance implications (see the sketch after this list).
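The trade-off in the last item can be made concrete with a small sketch; the options and the 4-core requirement come from the example above, while treating all cores as equivalent across server models is a simplifying assumption.

```python
# Compare HA options for a 4-core performance requirement (from the example
# above). Assumes cores are equivalent across models, which is a simplification.
required_cores = 4
options = {
    "2 x 4-core": (2, 4),
    "2 x 2-core": (2, 2),
    "3 x 2-core": (3, 2),
}
for name, (servers, cores_each) in options.items():
    total = servers * cores_each
    after_failure = (servers - 1) * cores_each
    print(f"{name}: {total} cores total, "
          f"{after_failure}/{required_cores} required cores still available after one server failure")
```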

The following are excerpts from DeveloperWorks articles predominantly written by Tom Alcott, a consulting IT specialist and a member of the Worldwide WebSphere Technical Sales Support team. Links to the complete articles are posted in the References section. Other authors will be noted as required.

What's the minimum and maximum number of processor cores I need to run my applications?

Modern operating systems generally do an excellent job of process scheduling, which minimizes context switching. Context switching occurs when the operating system scheduler replaces one process operating on a CPU with another process; it is the result of a variety of factors, such as job or process priority, waiting for other resources such as I/O, whether or not the process is using up all of its allocated CPU cycles (time slice), and so on. However, "excellent" in this context is not "perfect," since each time a process is context-switched it stops operating, which results in throughput delays and performance degradation. If we really wanted to eliminate the possibility of context switching, we would need to have more CPUs on a system than processes running on that system. This unfortunately isn't always practical, since few organizations can afford to put that many CPUs on one system. Moreover, it is unlikely to even be necessary, since most processes don't require continuous CPU access. We should instead look at this in terms of the most important processes, which in the context of WebSphere Application Server are the JVMs for the application servers running on a system. As a starting point, plan on having at least one processor core per application server JVM; that way you have likely minimized the number of times that a context switch will occur, at least as far as using up a time slice is concerned, although, as mentioned, there are other factors that can result in a context switch. Unless you run all your servers at 100% CPU, more than likely there are CPU cycles available as application requests arrive at an application server, which in turn are translated into requests for operating system resources. Therefore, you can probably run more application servers than CPUs. The final number will depend on the load, application, throughput, response time requirements, and so on, and the only way to determine a precise number is to have the customer run tests in their environment. For full article see reference 2.

What about the maximum number of processor cores I can use for a WebSphere Application Server instance?

A very efficient and well-written application can use multiple CPUs with just a single application server process. In fact, the WebSphere Performance Lab has fully utilized 12 CPUs (and in some cases more) when running tests involving the Trade performance benchmark. While it is probably not reasonable to expect most applications to be this scalable, most well-written applications should be able to use more than one CPU per application server (in fact, only using one CPU is often the sign of an application bottleneck).

In general, one should tune a single instance of an application server for throughput and performance, and then incrementally add clones, testing performance and throughput as each clone is added. By proceeding in this manner, one can determine the number of clones that provides optimal throughput and performance for the environment. In general, once CPU utilization reaches 75%, little if any improvement in throughput will be realized by adding additional clones. For full article see reference 2.

How much physical memory will I need?

It is certainly reasonable to expect that you can run a higher multiple of application server instances per CPU in a development environment (where the load is likely only a single user per application server) than in a production environment. Beyond that, it is difficult to be more specific. It should be noted that adequate physical memory for all the application server processes is an equally important factor. A good rule of thumb is that the sum of all the WebSphere Application Server JVM processes should not exceed 80% of the otherwise unused physical memory on the server. When calculating the largest number that this figure can grow to, you must not only consider the maximum heap size, but also the process size of the Java bytecode interpreter (which is reflected in the OS process table) over and above the maximum heap size. The bytecode interpreter adds about 65 MB to the process table (over the maximum heap size, for a 128 MB heap) and increases in size as the maximum heap size increases. Most products employ some simple rules of thumb in determining memory requirements. Considering that a 32-bit JVM can consume anywhere from 1.5 GB to 1.8 GB of contiguous RAM, a good rule of thumb is 2 GB per JVM; weighed against the statement above, that is roughly 2 GB per processor core. For full article see reference 2.

Should I recommend a database or memory-to-memory replication for session failover?

Performance does not differ significantly between database persistence and memory-to-memory replication. This is because 95% of the cost of replicating or persisting sessions is incurred in the serialization/de-serialization of the session object, which must occur regardless of how the session is distributed. Also, as the size of the session object increases, performance degrades, again about equally for both session distribution options. Instead, the decision will be based partially on how the two technologies differ: with a database, you actually persist the data (to disk), so a highly available database server can survive a cascading failure, while using application servers as session stores and replicators for this purpose may not. In the case of a "gold standard" (two identical cells/domains), a highly available database can pretty much assure session failover between domains, while with memory-to-memory there can only be a single replicator common to the two cells; hence, it becomes a single point of failure (SPOF). Thus, for configurations where cross-cell session failover is a requirement, a highly available database is the only option for eliminating a SPOF. Note that while sharing sessions across cells is supported, it is not generally recommended: by sharing state between cells, it becomes significantly more difficult to independently upgrade components (application and WAS) in the two cells. In the end, the decision becomes based on which technology you are most comfortable with and which delivers the required quality of service for your availability requirements. For full article see reference 2.
Does memory-to-memory replication affect the amount of memory I have available?

With memory-to-memory replication, the amount of session information you can store is bounded by the JVM heap size of your application server(s). Even with the advent of 64-bit JVM support in WebSphere Application Server V6.0.1, the maximum application server heap size is going to be significantly smaller than the amount of disk space you have available on a database server that is serving as a session store. Therefore, the leading opinion is that database persistence remains the best option, although in many organizations it is more expedient to use memory-to-memory replication to avoid conflicts over roles and responsibilities between system and database administrators. For full article see reference 2.

Should I recommend a 32-bit or a 64-bit WebSphere Application Server?

64-bit does not automatically provide better performance; in fact, most applications will see little benefit. The applications that experience the greatest benefit are:
- Memory-constrained applications -- the extra memory addressable with 64-bit supports a better caching strategy, enabling the application to avoid expensive queries, and so on.
- Computationally expensive code, such as numerical analysis, algorithms, and so on, which can, by virtue of 64-bit registers, perform computations with fewer instructions than with 32-bit registers.

If the applications meet the criteria above, then you might want to recommend testing in a 64-bit environment to see if there is value. Keep in mind that those moving from 32-bit to 64-bit often see no performance benefit and instead simply experience a larger memory footprint; this is because 64-bit addresses are twice the size of 32-bit addresses. The larger memory footprint also fills up L2/L3 caches more quickly, which can have a negative impact on performance. The larger memory footprint means that your customer should plan on adding additional RAM to the server recommendation. The exact amount of additional RAM required will depend on how much RAM they are currently using and how much free memory they currently have. In some cases you will end up having to double the RAM on the server in order to run a 64-bit JVM with the same heap size as the current 32-bit JVM. As an example, one recent client was comfortably running a 32-bit application server JVM with a maximum heap of 768 MB on a system with 2 GB of RAM, but when they moved to a 64-bit JVM, they ended up going to 4 GB of RAM for the same sized application server JVM.
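A minimal sketch of the memory rules of thumb above follows; the 25% interpreter overhead is an assumed point within the 20%-50% range quoted below, and the server RAM figures are illustrative, not measurements.

```python
# Illustrative only: the 25% interpreter overhead is an assumed point within
# the 20%-50% range discussed below, and the RAM figures are made up.
def jvm_footprint_mb(max_heap_mb: float, interpreter_overhead: float = 0.25) -> float:
    """Process footprint = maximum heap plus the bytecode interpreter on top of it."""
    return max_heap_mb * (1.0 + interpreter_overhead)

server_ram_mb = 2048                 # assumed total RAM on the server
other_usage_mb = 512                 # assumed OS and other processes
budget_mb = (server_ram_mb - other_usage_mb) * 0.80   # 80% rule of thumb

footprint_32 = jvm_footprint_mb(768)       # ~960 MB, close to the ~950 MB example above
footprint_64 = 2 * footprint_32            # the example above saw roughly double
print(footprint_32, footprint_64, budget_mb)
print(footprint_64 <= budget_mb)           # False: the 64-bit JVM no longer fits in 2 GB
```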

You might wonder, "How can a JVM with a maximum heap of 768 MB result in a process footprint of more than 768 MB? That doesn't make sense!" The heap is just one portion of the JVM; there's also the interpreter, which adds anywhere from 20% to 50% to the maximum heap size in terms of process footprint, depending on the operating system, JVM, and heap size. As a result, for this customer, the 32-bit JVM with a 768 MB maximum heap that had a process footprint of ~950 MB for their hardware and software configuration required ~1.9 GB on a 64-bit JVM -- which, with only 2 GB of RAM, didn't leave any memory for the operating system or other processes running on the system. For full article see reference 3.

What about resource virtualization?

The lure of improved resource utilization is what leads to pitfalls in server virtualization. More specifically, over-committing the available physical resources -- CPU and memory -- in an attempt to maximize server utilization is what leads to ineffective virtualization. In order to use server virtualization effectively, it's paramount to recognize that underlying the virtual machines is a set of finite physical resources, and once the limits of these underlying resources are reached, performance can quickly degrade. While it's important to avoid over-committing any physical resource, two resources in particular are key to effective virtualization: CPU and physical memory (RAM). As a result, it is essential to avoid over-committing these two resources. This is actually no different than in a non-virtualized environment or, stated another way: virtualization doesn't provide additional resources.

CPU over-commit

In testing by the WebSphere Performance Lab and VMware, it turns out that when there was a single application server JVM per virtual machine, performance degraded once the number of virtual machines exceeded the number of CPUs; in other words, performance degraded once the number of application servers exceeded the number of CPUs. While the degradation was gradual (at least initially), once the ratio of virtual machines to CPUs exceeded 1:1, performance started to degrade more rapidly. The amount of degradation is in inverse proportion to the client workload; the lighter the client workload (meaning the longer the think time between client requests), the less the degradation. If you're contemplating a CPU over-commit scenario using virtualization, then you'll need to test and carefully monitor response time to make sure you don't over-commit to the point that performance degrades significantly. When testing, you'll need to test all the virtual machines simultaneously, and the workload should represent your peak workload, not your average workload. Otherwise, if several applications peak at the same time, you could encounter some very dissatisfied customers as the result of unacceptable response times. It's likely best to limit any CPU over-commit configurations to development environments where response time is less critical and load is light. Related to this, if you're using VMware, ESX server CPU utilization needs to be measured using ESX, not via the OS tools inside the virtual machine. This is because VMware abstracts the hardware presented to the virtual machine, and even with VMware tools installed in the guest OS, the only way to monitor overall system CPU utilization is to use ESX.

Memory over-commit

Avoiding over-commit of the underlying physical memory between the virtual images is likely even more important than avoiding CPU over-commit. While CPU over-commit typically results, at least initially, in a gradual degradation, the performance degradation associated with memory over-commit is much more pronounced (mentally picture someone falling down!). There are a couple of reasons this is the case. When you're running Java and you over-commit on memory, either in a virtualized environment or a non-virtualized environment, the OS pages or swaps some portion of the memory associated with running processes to disk in order to improve the locality of reference of the most recently used data to the CPU, and thus improve performance. Unfortunately, garbage collection in Java violates locality of reference, since the purpose of garbage collection is to find and remove memory that hasn't been recently used. In order to do so, *all* memory within the JVM heap must be examined, and as a result, the entire JVM must be in physical memory (RAM). Therefore, when garbage collection runs, any portion of the heap that isn't in physical memory must be paged in, while the memory of some other processes must be paged out, all of which results in additional CPU and I/O load on the system in addition to the regular application workload and the garbage collection. It's no wonder that memory over-commit has devastating performance impacts when using virtualization, and avoiding memory over-commit in a non-virtualized environment is equally important. Be aware that memory over-commit can adversely impact performance in both virtualized and non-virtualized environments. This often occurs because many don't realize that the process footprint of a JVM is larger than the maximum heap size: aside from the JVM heap where application code executes, there's an interpreter associated with each JVM. The interpreter maps the Java bytecode to the underlying OS implementation for I/O, graphics, and so on. Therefore, it's important to guard against memory over-commit by monitoring the actual process footprint of your application servers using the tools appropriate for your OS, such as Windows Task Manager, or free, top, or vmstat on UNIX or Linux.

Other virtualization pitfalls

While over-commit of memory and CPU are likely the most prevalent problems that can occur when using server virtualization, they aren't the only anti-patterns associated with this technology. If you're using server virtualization in your production environment and are concerned with high availability, then you need to make sure that each application is distributed not just across multiple virtual machines, but that the virtual machines associated with a specific application are also distributed across multiple physical machines. Failure to do so results in a configuration where the physical machine is still a single point of failure (SPOF). While modern hardware is incredibly reliable and fault tolerant, that doesn't preclude a hardware failure resulting in the loss of all frames (hardware partitions) on a machine and, in turn, a total loss of application availability. Another potential friction point when using server virtualization occurs when application virtualization is in use.
Both of these technologies provide for autonomic adjustment of various resources: CPU and memory in the case of server virtualization; application server instances, workload, and so on in the case of application virtualization. In order to avoid conflicting decisions between server virtualization and application virtualization, the response cycles associated with each should be configured to steer clear of conflicts. Most often this requires lengthening cycle times or disabling some of the functions associated with each technology. For full article see reference 2.

WebSphere MQ Solutions Sizing Guidelines


Log file location
Locate the queue manager log on its own disk, particularly when you intend to process large messages or high message volumes (more than 50 messages per second). When possible, allocate the log on a device with a battery-backed write cache; such devices are now common in Storage Area Networks (SANs). If that is not practical, use the fastest local disk available -- for example, it is better to use a 10,000 RPM disk than a 6,000 RPM disk. The speed of the device on which the queue files are located is not as critical to performance: the queue manager uses lazy writes to the queues but synchronous writes to the log, so if you have only one high-performance disk, allocate it to the log. The setup of the log file is different on Windows compared to UNIX; however, the way you specify the location is the same for both environments -- you use the -ld option of the crtmqm command. If you are allocating specific disks for the queue file and log data, these must be defined before the queue manager is defined. Create different file systems on different disks for the queue manager queue and log files, use the facilities of the operating system to allocate the file systems, and allocate the file system for the log on the best device available.
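As a sketch of that guidance, a queue manager might be created with its log directed to a dedicated file system via the -ld option; the mount point and queue manager name below are assumptions for illustration, and -lc (circular logging, discussed under "Type of logging" below) stands in for whichever logging type is chosen.

```
# /mqlog is an assumed mount point on a dedicated disk (ideally one with a
# battery-backed write cache); QM1 is a placeholder queue manager name.
crtmqm -lc -ld /mqlog QM1
```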

Level of log write


You can specify the method that the queue manager logger uses to reliably write log records. The method is specified with the LogWriteIntegrity parameter in the Log stanza of the queue manager configuration. Possible values are:
- SingleWrite - Some hardware guarantees that if a write operation writes a page and fails for any reason, a subsequent read of the same page into a buffer results in each byte in the buffer being either the same as before the write or the byte that should have been written in the write operation. On this type of hardware (for example, a disk with a SAN write cache enabled), it is safe for the logger to write log records in a single write because the hardware assures full write integrity. This method provides the best performance.
- DoubleWrite - The default method used in WebSphere MQ V5.2, available for backward-compatibility purposes only.
- TripleWrite - The default method. When hardware that assures write integrity is not available, you should write log records using the TripleWrite method because it provides full write integrity.
Systems running with high volumes (more than 1000 messages per second) will see little difference between SingleWrite and TripleWrite, because it is only the last 4 KB block in each log write that may be subjected to three writes.


If you are satisfied, as the result of a discussion with your disk provider, that the device on which the log is located can assure write integrity, then use SingleWrite for best performance. If you change the value, you must restart the queue manager to bring the change into effect.
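The corresponding qm.ini entry might look like the sketch below (on Windows the equivalent setting lives in the registry); choosing SingleWrite is only appropriate on the write-integrity-assuring hardware described above, and a queue manager restart is needed for the change to take effect.

```
Log:
   LogWriteIntegrity=SingleWrite
```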


Type of logging
WebSphere MQ provides two types of logging, known as circular and linear. With circular logging, log file extents are reused once they no longer contain active log data. With linear logging, on the other hand, log file extents are continually allocated as required; once a log extent is no longer in use it becomes available to be archived. If you are dependent on WebSphere MQ to provide the protection, you must use linear logging in order to forward recover queue data following a failure or to recover from a media failure of the device containing the log. An alternative strategy is to use disk mirroring to mirror the log device, which is often a facility provided by a SAN; in that case you could use circular logging. For performance, choose circular logging. Circular logging is the default option when creating a queue manager. The same considerations apply to both the Windows and UNIX environments when specifying the type of logging. Use the -lc option on the crtmqm command to specify circular logging, and the -ll option to specify linear logging. Although the type of logging can be specified in the qm.ini file of the queue manager, changes made there will not result in a change in behavior, as the type of logging cannot be changed once a queue manager has been created.

Log file extent size


The size of each log file extent is specified at queue manager creation and cannot be changed subsequently, so it is important to get this right when first defining the queue manager. The log file size is specified in the same way for both Windows and UNIX, using the -lf parameter of the crtmqm command, although the default values differ between the two platform types. In WebSphere MQ for Windows, the default number of log file pages is 256, giving a log file size of 1 MB; the minimum number of log file pages is 32 and the maximum is 65 535. In WebSphere MQ for UNIX systems, the default number of log file pages is 1024, giving a log file size of 4 MB; the minimum number of log file pages is 64 and the maximum is 65 535.

As long as you have the disk space, it is recommended to allocate the maximum size.
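Since each log file page is 4 KB, the extent sizes quoted above can be checked with a short calculation; the page counts used are exactly those from the text.

```python
# Log file extent size = number of log file pages x 4 KB pages.
def extent_size_mb(log_file_pages: int) -> float:
    return log_file_pages * 4 / 1024

print(extent_size_mb(256))     # Windows default: 1.0 MB
print(extent_size_mb(1024))    # UNIX default:    4.0 MB
print(extent_size_mb(65535))   # maximum:        ~256 MB
```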

Number of log file extents


Log file extents can be specified as primary or secondary. Primary extents are allocated and formatted by the queue manager when it is first started or when extra extents are added; once a primary extent has been formatted it can be reused. Secondary log file extents are allocated dynamically by the queue manager when the primary files are exhausted. Because these are formatted dynamically, it is not recommended that you get to the point where you need them; however, they are extremely useful if there is an unexpected peak in activity which results in all of the primary extents being filled (due to a long-running unit of work, for example). If the primary extents do fill and there are no more secondary extents available, the queue manager will resort to backing out uncommitted units of work. Given this behavior, ensure that there is a reasonable number of secondary extents. The number of log extents can be specified on the crtmqm command, using the -lp flag for primary extents and the -ls flag for secondary extents, or by using the LogPrimaryFiles and LogSecondaryFiles values in the Log stanza of the queue manager. For Windows, the Log stanza of a queue manager is located in the Windows registry; on UNIX, the Log stanza is located in the qm.ini configuration file of the queue manager. The minimum number of primary log files you can have is 2 and the maximum is 254 on Windows, or 510 on UNIX systems; the default is 3. The total number of primary and secondary log files must not exceed 255 on Windows, or 511 on UNIX systems, and must not be less than 3. Operating system limits can reduce the maximum possible log size. The number of extents needed depends on the amount of data to be logged and the size of each extent. A practical starting point could be values of LogPrimaryFiles=10 and LogSecondaryFiles=10.

Log buffer size


The log buffer is the amount of main memory used to accumulate log records that will be written out to disk. Log records are appended to the end of the used part of the log buffer. When the end of the buffer is reached, some serialization takes place that reduces the rate of data transfer to disk; a large buffer will not hit this limit as frequently as a smaller buffer. You can specify the size of the buffer in units of 4 KB pages using the LogBufferPages parameter of the Log stanza of the queue manager. The minimum number of buffer pages is 18 and the maximum is 4096. Larger buffers lead to higher throughput, especially for larger messages. If you specify 0 (the default), the queue manager selects the size; in WebSphere MQ Version 6.0 this is 128 (512 KB). If you specify a number between 1 and 17, the queue manager defaults to 18 (72 KB). If you specify a number between 18 and 4096, the queue manager uses the number specified to set the memory allocated. The value is specified in the registry on Windows and in the qm.ini file on UNIX. It can be changed after a queue manager has been defined; however, a change in the value is not effective until the queue manager is restarted. For performance, specify the largest possible value. This will help writes of large amounts of log data and enable large messages to be written in a single log I/O, at the cost of some increased storage usage. Using a large value will not hurt performance when writing small amounts of data, but if a small value is specified when writing large amounts of data, the queue manager may have to issue multiple writes to the log, impacting performance.
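Pulling the extent-count and buffer guidance above together, a Log stanza might look like the sketch below; the extent counts are the practical starting point named above, LogBufferPages is set to the largest allowed value per the performance recommendation, and on Windows the equivalent entries live in the registry rather than qm.ini. A queue manager restart is needed for a LogBufferPages change to take effect.

```
Log:
   LogPrimaryFiles=10
   LogSecondaryFiles=10
   LogBufferPages=4096
```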


Number of concurrent applications


When processing persistent messages it is recommended to run many instances of the application concurrently in order to optimize the efficiency of the queue manager log. It is possible to have tens or hundreds of applications running concurrently without any detrimental impact on queue manager log performance. In this situation the overhead of a log write at commit time is shared amongst many applications.


Application processing and units of work


When processing persistent messages in an application, you should ensure that all MQPUT and MQGET activity takes place within a unit of work (or sync point, as it is sometimes referred to) for efficiency purposes. Every MQPUT, MQGET, and MQCMIT of a persistent message causes a log record to be created for recovery purposes, and these records must be successfully written to the queue manager log for the application to correctly commit its updates. The timing and frequency of writes to the log differ depending on whether or not messages are processed in a unit of work. When applications issue all of their MQPUT and MQGET operations within a unit of work, the queue manager is able to allow multiple applications to concurrently process different messages on the same queue. When these applications issue a commit (MQCMIT), a shared lock can be taken against the queue, which means multiple applications can commit updates at the same time. This is good for message throughput. In this case, all of the log records created during the lifetime of the unit of work have to be successfully written to the log in order for the commit to complete successfully, but this forcing of the log records only has to take place once per unit of work for each application. Applications which process persistent messages outside of a unit of work or sync point have to wait after each and every MQGET or MQPUT for the log record to be synchronously written to the log. This can be many times per application invocation. As individual I/O times can vary from half a millisecond when using a disk with a write cache to 5-10 milliseconds for SCSI disks, this can make a significant difference to overall performance. In addition, while that log write is taking place, access to the queue is inhibited for other applications. This disrupts overall processing and is poor for message throughput. From a performance point of view, this is why it is important that ALL applications process persistent messages within a unit of work. It takes only a single application that does not adhere to this rule to have a detrimental impact on the performance of persistent message processing. Being practical, however, there may be some situations where you want to write a message outside of the unit of work so that you are sure the write will take place regardless of the success or failure of the unit of work. This is sometimes used for audit purposes, where we want to track a particular activity; in such cases functional requirements may override performance considerations.
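The recommended shape of such an application can be sketched as below; mq_get, mq_put, mq_commit, and handle are hypothetical stand-ins for the MQGET, MQPUT, and MQCMIT verbs and the application logic, not a real client API.

```python
# Hypothetical stand-ins for the MQ verbs; the point is the shape of the loop,
# not the API. Every persistent MQGET/MQPUT runs under syncpoint, and the log
# is forced only once per unit of work, at MQCMIT.
def process_requests(requests, mq_get, mq_put, mq_commit, handle):
    for request in requests:
        msg = mq_get(request, syncpoint=True)    # MQGET within the unit of work
        reply = handle(msg)                      # application logic
        mq_put(reply, syncpoint=True)            # MQPUT within the unit of work
        mq_commit()                              # MQCMIT: one forced log write per unit of work

# Anti-pattern to avoid: issuing the same MQGET/MQPUT calls outside syncpoint
# forces a synchronous log write on every call and blocks other applications'
# access to the queue while each write completes.
```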

Queue manager channels


The channels over which a WebSphere MQ queue manager receives and sends messages to and from other queue managers can be run as trusted, or Fastpath, MQ applications. The attraction of running an MQ application as trusted is that it gives a performance benefit through reduced code path length. Fastpath applications run in the same process as parts of the queue manager, making communication with the queue manager more efficient; this means that there is no process separation between the Fastpath application and the queue manager. By contrast, when running as a non-Fastpath (standard) application, an agent process called amqzlaa provides the separation between the MQ application and the queue manager. In this case there is greater separation between the application and queue manager, but at the cost of a performance overhead. As channels are WebSphere MQ product code, and as such are stable, you can freely run channels as trusted applications. There are a few considerations to bear in mind:
- If channel exits are used, you may wish to reconsider running the channel as a trusted application, as there is the potential for the exit to corrupt the queue manager if the exits are not correctly written and thoroughly tested.
- If you use the command STOP CHANNEL(TERMINATE), you should also reconsider running the channels as trusted applications.
- If your environment is unstable, with regular component failure, you may also wish to reconsider running the channels as trusted applications.
To make the channels run as trusted, there are two options:
- Specify MQIBindType=FASTPATH in the Channels stanza of the qm.ini file or the registry. By choosing this option, all channels within the queue manager will run as trusted.
- Set the environment variable MQ_CONNECT_TYPE to FASTPATH in the environment in which the channel is started.
Both settings are case sensitive, and a value that is not valid is ignored. You are recommended to use only one of the options.
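Both options can be sketched as below; the qm.ini layout is an illustration (on Windows the stanza lives in the registry), and the export syntax assumes a UNIX shell.

```
# Option 1: Channels stanza in qm.ini (registry on Windows); affects all channels.
Channels:
   MQIBindType=FASTPATH

# Option 2: environment variable set before starting the channel (UNIX shell syntax assumed).
export MQ_CONNECT_TYPE=FASTPATH
```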


Additional Information
Power Systems Documentation http://publib.boulder.ibm.com/eserver/?topic=/iphc5/systemp_systems.htm

Active Memory Expansion: ftp://public.dhe.ibm.com/common/ssi/ecm/en/pow03037usen/POW03037USEN.PDF

Active Memory Expansion Wiki: http://www.ibm.com/developerworks/wikis/display/WikiPtype/IBM+Active+Memory+Expansion

eConfig: http://ftp.ibmlink.ibm.com/econfig/announce/index.htm

Learning Technologies STG Online http://lt.be.ibm.com/services/weblectures/dlv/Gate.wss?handler=Offering&action=index&customer=ibmintra&offering=stg

Techline Support
Email: Send the sizing request to Techline@us.ibm.com with specifics.
Voice: 1-888-426-5525 (follow the prompts for software application or platform).

Techline Sizing Questionnaire Web site http://w3-03.ibm.com/support/americas/techline/sizewise.html

Workload Estimator (WLE) Web Site http://www-912.ibm.com/wle/EstimatorServlet


WLE Education Walkthru http://www-912.ibm.com/wle/EstimatorServlet
Click on the Help/Tutorials tab, then click on specific WLE sessions under the Tutorials group.


References
[1] The WebSphere Contrarian: Effectively leveraging virtualization with Application Server. Author: Tom Alcott. http://www.ibm.com/developerworks/websphere/techjournal/0805_webcon/0805_webcon.html

[2] Everything you always wanted to know about WebSphere Application Server but were afraid to ask. Author: Tom Alcott. Part 1, June 2005; Part 2, December 2005; Part 3, June 2006; Part 4, December 2006; Part 5, July 2007.

[3] Know your WebSphere Application Server options for a large cache implementation. Author: Tom Alcott. http://www.ibm.com/developerworks/websphere/techjournal/0801_alcott/0801_alcott.html


Feedback
Please send questions or comments regarding the content of this document to Gail Titus, Global Techline Center of Excellence ISV Sizing Team at gtitus@us.ibm.com.

