
IBM System z10 Enterprise Class (z10 EC)

Reference Guide

February 2008
Table of Contents

z/Architecture
IBM System z10
z10 EC Models
z10 EC Performance
z10 EC I/O SubSystem
z10 EC Channels and I/O Connectivity
ESCON
Fibre Channel Connectivity
OSA-Express
HiperSockets
Security
Cryptography
On Demand Capabilities
Reliability, Availability, and Security
Availability Functions
Environmental Enhancements
Parallel Sysplex Cluster Technology
Fiber Quick Connect for FICON LX Environment
System z10 EC Configuration Details
System z10 EC Physical Characteristics
Coupling Facility – CF Level of Support
Publications

IBM System z10 Enterprise Class (z10 EC) Overview

In today's world, IT is woven into almost everything that a business does and is consequently pivotal to a business. Some of the key requirements today are the need to maximize return on investments by deploying resources designed to drive efficiencies and economies of scale, managing growth through resources that can scale to meet changing business demands, reducing risk by reducing the threat of lost productivity through downtime or security breaches, reducing complexity by reversing the trend of server proliferation, and enabling business innovation by deploying resources that can help protect existing investments while also enabling the new technologies that enable business transformation.

The IBM System z10™ Enterprise Class (z10™ EC) delivers a world-class enterprise server designed to meet these business needs. The z10 EC provides new levels of performance and capacity for growth and large scale consolidation, improved security, resiliency and availability to reduce risk, and introduces just-in-time resource deployment to help respond to changing business requirements. As environmental concerns raise the focus on energy consumption, the z10 EC is designed to reduce energy usage and save floor space when used to consolidate x86 servers. Specialty engines continue to help users expand the use of the mainframe for a broad set of applications, while helping to lower the cost of ownership. The z10 EC is at the core of the enhanced System z platform that delivers technologies that businesses need today along with a foundation to drive future business growth.

Just in time deployment of IT resources

Infrastructures must be more flexible to changing capacity requirements and provide users with just-in-time deployment of resources. Having the 16 GB dedicated HSA on the z10 EC means that some preplanning configuration changes and associated outages may be avoided. IBM Capacity Upgrade on Demand (CUoD) provides a permanent increase in processing capacity that can be initiated by the customer.

IBM On/Off Capacity on Demand (On/Off CoD) provides temporary capacity needed for short-term spikes in capacity or for testing new applications. Capacity Backup Upgrade (CBU) can help provide reserved emergency backup capacity for all processor configurations.

A new temporary capacity offering on the z10 EC is Capacity for Planned Events (CPE), a variation on CBU. If unallocated capacity is available in a server, it will allow the maximum capacity available to be used for planned events such as planned maintenance in a data center.

The z10 EC introduces a new architectural approach for temporary offerings that can change the thinking about on demand capacity. One or more flexible configuration definitions can be used to solve multiple temporary situations, and multiple capacity configurations can be active at once. This means that On/Off CoD can be active and up to three other offerings can be active simultaneously.

By having flexible and dynamic configuration definitions, when capacity is needed, activation of any portion of an offering can be done (for example, activation of just two CBUs out of a definition that has four CBUs is acceptable). And if the definition doesn't have enough resources defined, an order can easily be processed to increase the capacity (so if four CBUs aren't enough, it can be redefined to be six CBUs) as long as enough server infrastructure is available to meet maximum needs.

All activations can be done without having to interact with IBM—when it is determined that capacity is required, no passwords or phone connections are necessary. As long as the total z10 EC can support the maximums that are defined, then they can be made available.

A new z10 EC feature now makes it possible to add permanent capacity while a temporary capacity is currently activated, without having to return first to the original configuration.

The activation of On/Off CoD on z10 EC can be simplified or automated by using z/OS Capacity Provisioning (available with z/OS® 1.9). This capability enables the monitoring of multiple systems based on Capacity Provisioning and Workload Manager (WLM) definitions. When the defined conditions are met, z/OS can suggest capacity changes for manual activation from a z/OS console, or the system can add or remove temporary capacity automatically and without operator intervention.

Specialty engines offer an attractive alternative

The z10 EC continues to support the use of specialty engines that can help users expand the use of the mainframe for new workloads, while helping to lower the cost of ownership.

The IBM System z10 Integrated Information Processor (zIIP) works closely with z/OS, which manages and directs work between CPs and the zIIP. It is designed to free up general computing capacity and lower overall total cost of computing for select data and transaction processing workloads for Business Intelligence (BI), Enterprise Resource Planning (ERP), and Customer Relationship Management (CRM). The z10 EC also allows IPSec processing to take advantage of the zIIP, making the zIIP a high-speed IPSec protocol processing engine providing better price performance for IPSec processing. IPSec is an open networking standard used to create highly secure connections between two points in an enterprise.

For IBM WebSphere® Application Server and other Java™ technology based solutions, the IBM System z10 Application Assist Processor (zAAP) offers a specialized engine that provides a strategic z/OS Java execution environment. When configured with CPs within logical partitions running z/OS, zAAPs may help increase general purpose processor productivity and may contribute to lowering the overall cost of computing for z/OS Java technology-based applications. Beginning with z/OS 1.8, z/OS XML System Services can also take advantage of zAAPs for cost savings.

z/VM® 5.3 is designed to provide new guest support for zAAPs and zIIPs and includes:

• Simulation support — z/VM guest virtual machines can create virtual specialty processors on processor models that support the same types of specialty processors but don't necessarily have them installed. Virtual specialty processors are dispatched on real CPs. Simulating specialty processors provides a test platform for z/VM guests to exploit mixed-processor configurations. This allows users to assess the operational and CPU utilization implications of configuring a z/OS system with zIIP or zAAP processors without requiring the real specialty processor hardware. This simulation also supports z/VM's continuing role as a disaster-recovery platform, since a virtual configuration can be defined to match the real hardware configuration even when real zIIP or zAAP processors are not available on the recovery system. zIIPs can be simulated only on System z10 EC, IBM System z9® Enterprise Class (z9™ EC) and IBM System z9 Business Class (z9 BC) servers. zAAPs can be simulated only on z10 EC, z9 EC, z9 BC, IBM eServer™ zSeries® 990 (z990), and IBM eServer zSeries 890 (z890) servers.
• Virtualization support — z/VM can create virtual specialty processors for virtual machines by dispatching the virtual processors on corresponding specialty processors of the same type in the real configuration. Guest support for zAAPs and zIIPs may help improve your total cost of ownership by allowing available zAAP and zIIP

capacity not being used by z/OS LPARs to be allocated to a z/VM LPAR hosting z/OS guests running Java and DB2® workloads. zAAPs and zIIPs cost less than standard CPs, so this support might enable you to avoid purchasing additional CPs, thereby helping to reduce your costs both for additional hardware and for software licensing fees.

The System z10 EC offers the Integrated Facility for Linux® (IFL) to support Linux and open standards. Linux brings a wealth of available applications that can be run in a real or virtual environment under the z10 EC. The System z platform, with z/VM, provides users with the ability to scale out, deploying hundreds to thousands of virtual Linux servers in one CEC footprint. The z/VSE strategy supports integration between z/VSE and Linux on System z to help customers integrate timely production z/VSE data into new Linux applications, such as data warehouse environments built upon a DB2 data server. The mainframe offers a comprehensive suite of characteristics and features such as availability, scalability, clustering, systems management, HiperSockets and security to enable and support new and existing environments.

Liberating your assets with System z

Enterprises have millions of dollars worth of mainframe assets and core business applications that support the heart of the business. The convergence of service oriented architecture (SOA) and mainframe technologies can help liberate these core business assets by making it easier to enrich, modernize, extend and reuse them well beyond their original scope of design. The z10 EC, along with the inherent strengths and capabilities of a z/OS environment, provides an excellent platform for being an enterprise hub. Innovative System z software solutions from WebSphere, CICS®, Rational® and Lotus® strengthen the flexibility of doing SOA.

Evolving for your business

The z10 EC is the next step in the evolution of the System z mainframe, fulfilling our promise to deliver technology improvements in areas that the mainframe excels in—energy efficiency, scalability, virtualization, security and availability. The redesigned processor chip helps the z10 EC make high performance compute-intensive processing a reality. Flexibility and control over capacity gives IT the upper edge over planned or unforeseen demands. And new technologies can benefit from the inherent strengths of the mainframe. This evolving technology delivers a compelling case for the future to run on System z.

Numerical computing on the chip

Integrated on the z10 EC processor unit is a Hardware Decimal Floating Point unit to accelerate decimal floating point transactions. This function is designed to markedly improve performance for decimal floating point operations, which offer increased precision compared to binary floating point operations. This is expected to be particularly useful for the calculations involved in many financial transactions.

Decimal calculations are often used in financial applications, and those done using other floating point facilities have typically been performed by software through the use of libraries. With a hardware decimal floating point unit, some of these calculations may be done directly and accelerated.
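The precision difference described above is easy to demonstrate in any language with a software decimal library. A minimal Python sketch, purely illustrative (the z10 EC hardware unit accelerates this same kind of decimal arithmetic, which software otherwise performs in library code):

```python
from decimal import Decimal

# Binary floating point cannot represent 0.10 exactly, so repeated
# additions accumulate rounding error:
binary_total = sum([0.10] * 3)
print(binary_total == 0.3)  # False: binary_total is 0.30000000000000004

# Decimal arithmetic keeps the value exact, as financial code requires:
decimal_total = sum([Decimal("0.10")] * 3, Decimal("0"))
print(decimal_total == Decimal("0.30"))  # True
```

This exactness requirement is why financial workloads historically relied on software decimal libraries, and why moving the operation into hardware can pay off.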

z/Architecture

The z10 EC continues the line of upward compatible mainframe processors and retains application compatibility since 1964. The z10 EC supports all z/Architecture®-compliant Operating Systems. The heart of the processor unit is the new Enterprise Quad Core z10 PU chip, which is specifically designed and optimized for mainframe systems. New features enhance enterprise data serving performance as well as CPU-intensive workloads.

The z10 EC, like its predecessors, supports 24-, 31-, and 64-bit addressing, as well as multiple arithmetic formats. High-performance logical partitioning via Processor Resource/Systems Manager™ (PR/SM™) is achieved by industry-leading virtualization support provided by z/VM.

z10 EC Architecture

• Rich CISC Instruction Set Architecture (ISA)
• 894 instructions (668 implemented entirely in hardware)
• Multiple address spaces robust inter-process security
• Multiple arithmetic formats
• Architectural extensions for z10 EC
• 50+ instructions added to z10 EC to improve compiled code efficiency
• Enablement for software/hardware cache optimization
• Support for 1 MB page frames
• Full hardware support for Hardware Decimal Floating-point Unit (HDFU)

z/Architecture operating system support

The z10 EC is capable of supporting multiple operating systems. Each operating system environment exploits z/Architecture in a unique way and offers business value. Each new release further exploits the hardware architecture.

z/OS

With z/OS 1.9, IBM delivers functionality that continues to solidify System z leadership as the premier data server. z/OS 1.9 offers enhancements in the areas of security, networking, scalability, availability, application development, integration, and improved economics with more exploitation for specialty engines. A foundational element of the platform is the z/OS tight interaction with the System z hardware and its high level of system integrity.

With z/OS 1.9, IBM introduces:

• A revised and expanded Statement of z/OS System Integrity
• Large Page Support (1 MB)
• Capacity Provisioning
• Support for up to 54 engines in a single image
• Simplified and centralized policy-based networking
• Advancements in ease of use for both new and existing IT professionals coming to z/OS
• Support for zIIP-assisted IPSec, and support for eligible portions of DB2 9 XML parsing workloads to be offloaded to zAAP processors
• Expanded options for AT-TLS and System SSL network security
• Improved creation and management of digital certificates with RACF®, SAF, and z/OS PKI Services
• Additional centralized ICSF encryption key management functions for applications
• Improved availability with Parallel Sysplex® and Coupling Facility improvements
• Enhanced application development and integration with new System REXX™ facility, Metal C facility, and z/OS UNIX® System Services commands
• Enhanced Workload Manager in managing discretionary work and zIIP and zAAP workloads

Commitment to system integrity

First issued in 1973, IBM's MVS System Integrity Statement and subsequent statements for OS/390® and z/OS stand as a symbol of IBM's confidence and commitment to the z/OS operating system. Today, IBM reaffirms its commitment to z/OS system integrity.

IBM's commitment includes designs and development practices intended to prevent unauthorized application programs, subsystems, and users from bypassing z/OS security—that is, to prevent them from gaining access, circumventing, disabling, altering, or obtaining control of key z/OS system processes and resources unless allowed by the installation. Specifically, z/OS "System Integrity" is defined as the inability of any program not authorized by a mechanism under the installation's control to circumvent or disable store or fetch protection, access a resource protected by the z/OS Security Server (RACF), or obtain control in an authorized state; that is, in supervisor state, with a protection key less than eight (8), or Authorized Program Facility (APF) authorized. In the event that an IBM System Integrity problem is reported, IBM will always take action to resolve it.

IBM's long-term commitment to System Integrity is unique in the industry, and forms the basis of the z/OS industry leadership in system security. z/OS is designed to help you protect your system, data, transactions, and applications from accidental or malicious modification. This is one of the many reasons System z remains the industry's premier data server for mission-critical workloads.

z/VM

The z/VM hypervisor is designed to help clients extend the business value of mainframe technology across the enterprise by integrating applications and data while providing exceptional levels of availability, security, and operational ease. z/VM virtualization technology is designed to allow the capability for clients to run hundreds to thousands of Linux servers on a single mainframe running with other System z operating systems, such as z/OS, or as a large-scale Linux-only enterprise server solution. z/VM 5.3 can also help to improve productivity by hosting non-Linux workloads such as z/OS, z/VSE, and z/TPF.

z/VM 5.3 is designed to offer:

• Large real memory exploitation support (up to 256 GB)
• Single-image CPU support for 32 processors
• Guest support enhancements, including a z/OS testing environment for the simulation and virtualization of zAAP and zIIP specialty processors
• Support for selected features of the IBM System z10 EC
• Comprehensive security with a new LDAP server and RACF feature, including support for password phrases
• Enhancements to help improve the ease-of-use of virtual networks
• Management enhancements for Linux and other virtual images
• Integrated systems management from the HMC

z/VSE

z/VSE 4.1, the latest advance in the ongoing evolution of VSE, is designed to help address needs of VSE clients with growing core VSE workloads and/or those who wish to exploit Linux on System z for new, Web-based business solutions and infrastructure simplification.

z/VSE 4.1 is designed to support:

• z/Architecture mode only
• 64-bit real addressing and up to 8 GB of processor storage
• System z encryption technology including CPACF, configurable Crypto Express2, and TS1120 encrypting tape
• Midrange Workload License Charge (MWLC) pricing, including full-capacity and sub-capacity options.
IBM has previewed z/VSE 4.2. When available, z/VSE 4.2 is designed to support up to 32 GB of processor storage and more than 255 VSE tasks.

z/TPF

z/TPF is a 64-bit operating system that allows you to move legacy applications into an open development environment, leveraging large scale memory spaces for increased speed, diagnostics and functionality. The open development environment allows access to commodity skills and enhanced access to open code libraries, both of which can be used to lower development costs. Large memory spaces can be used to increase both system and application efficiency as I/Os or memory management can be eliminated.

z/TPF is designed to support:

• Linux development environment (GCC and HLASM for Linux)
• 32 processors/cluster
• Up to 84* engines/processor
• 40,000 modules

IBM System z10 EC

Every day the IT system needs to be available to users – customers that need access to the company Web site, line of business personnel that need access to the system, application development that is constantly keeping the environment current, and the IT staff that is operating and maintaining the environment. If applications are not consistently available, the business can suffer.

The z10 EC continues our commitment to deliver improvements in hardware Reliability, Availability and Serviceability (RAS) with every new System z server. They include microcode driver enhancements, dynamic segment sparing for memory, as well as the fixed HSA. The z10 EC is a server that can help keep applications up and running in the event of planned or unplanned disruptions to the system.

IBM System z servers stand alone against competition and have stood the test of time with our business resiliency solutions. Our coupling solutions with Parallel Sysplex technology allow for greater scalability and availability. The new InfiniBand® Coupling Links (planned to be available 2nd quarter 2008*) on the z10 EC are rated at 6 GBps and provide a high speed solution to the 10 meter limitation of ICB-4, since they will be available in lengths up to 150 meters.

What the z10 EC provides over its predecessors are improvements in the processor granularity offerings, more options for specialty engines, newer security enhancements, additional high availability characteristics, Concurrent Driver Upgrade (CDU) improvements, enhanced networking and on demand offerings. The z10 EC provides our IBM customers an option for continued growth, continuity, and upgradeability.

The IBM System z10 EC builds upon the structure introduced on the IBM System z9 EC (formerly z9-109) – scalability and z/Architecture. The System z10 EC expands upon a key attribute of the platform – availability – to help ensure a resilient infrastructure designed to satisfy the demands of your business. With the potential for increased performance and capacity, you have an opportunity to continue to consolidate diverse applications on a single
platform. The z10 EC is designed to provide up to 1.7** times the total system capacity of the z9 EC, and has up to triple the available memory. The maximum number of Processor Units (PUs) has grown from 54 to 64, and memory has increased from 128 GB per book and 512 GB per system to 384 GB per book and 1.5 TB per system.

The z10 EC will continue to use the Cargo cage for its I/O, supporting up to 960 Channels on the Model E12 (64 I/O features) and up to 1,024 (84 I/O features) on the Models E26, E40, E56 and E64.

HiperDispatch helps provide increased scalability and performance of higher n-way and multi-book z10 EC systems by improving the way workload is dispatched across the server. HiperDispatch accomplishes this by recognizing the physical processor where the work was started and then dispatching subsequent work to the same physical processor. This intelligent dispatching helps reduce the movement of cache and data and is designed to improve CPU time and performance. HiperDispatch is available only with new z10 EC PR/SM and z/OS functions.

PUs defined as Internal Coupling Facilities (ICFs), Integrated Facility for Linux (IFLs), System z10 Application Assist Processors (zAAPs) and System z10 Integrated Information Processors (zIIPs) are no longer grouped together in one pool as on the z990, but are grouped together in their own pool, where they can be managed separately. The separation significantly simplifies capacity planning and management for LPAR and can have an effect on weight management since CP weights and zAAP and zIIP weights can now be managed separately. Capacity BackUp (CBU) features are available for IFLs, ICFs, zAAPs and zIIPs.

For LAN connectivity, z10 EC will provide a new OSA-Express3 2-port 10 Gigabit Ethernet (GbE) Long Reach feature (planned to be available 2nd quarter 2008*), continues to support OSA-Express2 1000BASE-T and GbE Ethernet features, and supports IP version 6 (IPv6) on HiperSockets. OSA-Express2 OSN (OSA for NCP) is also available on System z10 EC to support the Channel Data Link Control (CDLC) protocol, providing direct access from the host operating system images to the Communication Controller for Linux (CCL) on the z10 EC, z9 EC and z9 BC, using OSA-Express2 to help eliminate the requirement for external hardware for communications.

Additional channel and networking improvements include support for Layer 2 and Layer 3 traffic, FCP management facility for z/VM and Linux for System z, FCP security improvements, and Linux support for HiperSockets IPv6. InfiniBand coupling links with 6 GBps bandwidth are exclusive to System z10, and distance has been extended to 150 meters. STP enhancements include the additional support for NTP clients and STP over InfiniBand links.

Like the System z9 EC, the z10 EC offers a configurable Crypto Express2 feature, with PCI-X adapters that can be individually configured as a secure coprocessor or an accelerator for SSL, the TKE workstation with optional Smart Card Reader, and provides the following CP Assist for Cryptographic Function (CPACF):

• Data Encryption Standard (DES)
• Triple DES (TDES)
• Advanced Encryption Standard (AES) 128-, 192-, and 256-bit
• Secure Hash Algorithm (SHA-1) 160-bit
• SHA-2 256-, 384-, and 512-bit
• Pseudo Random Number Generation (PRNG)

z10 EC is designed to deliver the industry leading Reliability, Availability and Serviceability (RAS) customers expect from System z servers. RAS is designed to reduce all sources of outages by reducing unscheduled, scheduled and planned outages. Planned outages are further designed to be reduced by reducing preplanning requirements.
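The CPACF functions above are hardware implementations of standard, openly specified algorithms, so their results are interchangeable with software implementations. For example, SHA-256 (one of the CPACF SHA-2 functions) produces the same 256-bit digest regardless of whether hardware or a software library computes it — a quick Python sketch, purely illustrative:

```python
import hashlib

# SHA-2 with a 256-bit digest, one of the algorithms CPACF implements
# on-chip; any conforming implementation yields the identical digest.
digest = hashlib.sha256(b"IBM System z10 EC").hexdigest()
print(len(digest) * 4)  # 64 hex digits = 256 bits
```

This interchangeability is what lets middleware exploit the hardware assist transparently: callers see the same algorithm, only faster.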
z10 EC preplanning improvements are designed to avoid planned outages and include:

• Flexible Customer Initiated Upgrades
• Enhanced Driver Maintenance
  – Multiple "from" sync point support
• Reduced preplanning to avoid Power-On-Reset
  – 16 GB for HSA
  – Dynamic I/O enabled by default
  – Add Logical Channel Subsystems (LCSS)
  – Change LCSS Subchannel Sets
  – Add/delete Logical partitions
• Designed to eliminate a logical partition deactivate/activate/IPL
  – Dynamic Change to Logical Processor Definition – z/VM 5.3
  – Dynamic Change to Logical Cryptographic Coprocessor Definition – z/OS ICSF

Additionally, several service enhancements have also been designed to avoid scheduled outages and include concurrent firmware fixes, concurrent driver upgrades, concurrent parts replacement, and concurrent hardware upgrades. Exclusive to the z10 EC is the ability to hot swap ICB-4 and InfiniBand hub cards.

Enterprises with IBM System z9 EC and IBM z990 may upgrade to any z10 Enterprise Class model. Model upgrades within the z10 EC are concurrent with the exception of the E64, which is disruptive. If you desire a consolidation platform for your mainframe and Linux capable applications, you can add capacity and even expand your current application workloads in a cost-effective manner. If your traditional and new applications are growing, you may find the z10 EC a good fit with its base qualities of service and its specialty processors designed for assisting with new workloads. Value is leveraged with improved hardware price/performance and System z10 EC software pricing strategies.

The z10 EC processor introduces IBM System z10 Enterprise Class with Quad Core technology, advanced pipeline design and enhanced performance on CPU intensive workloads. The z10 EC is specifically designed and optimized for full z/Architecture compatibility. New features enhance enterprise data serving performance, industry leading virtualization capabilities, and energy efficiency at system and data center levels. The z10 EC is designed to further extend and integrate key platform characteristics such as dynamic flexible partitioning and resource management in mixed and unpredictable workload environments, providing scalability, high availability and Qualities of Service (QoS) to emerging applications such as WebSphere, Java and Linux.

With the logical partition (LPAR) group capacity limit on z10 EC, z9 EC and z9 BC, you can now specify LPAR group capacity limits, allowing you to define each LPAR with its own capacity and one or more groups of LPARs on a server. This is designed to allow z/OS to manage the groups in such a way that the sum of the LPARs' CPU utilization within a group will not exceed the group's defined capacity. Each LPAR in a group can still optionally continue to define an individual LPAR capacity limit.

The z10 EC has five models with a total of 100 capacity settings available as new build systems and as upgrades from the z9 EC and z990.

The five z10 EC models are designed with a multi-book system structure that provides up to 64 Processor Units (PUs) that can be characterized as either Central Processors (CPs), IFLs, ICFs, zAAPs or zIIPs.

Some of the significant enhancements in the z10 EC that help bring improved performance, availability and function to the platform have been identified. The following sections highlight the functions and features of the z10 EC.
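The group capacity rule described above — the sum of the LPARs' CPU utilization within a group must not exceed the group's defined capacity — can be sketched as a simple invariant check. This is only an illustration of the rule, not the actual PR/SM or WLM algorithm (which works on rolling utilization averages), and the LPAR names and MSU figures are hypothetical:

```python
def group_over_limit(lpar_usage_msu, group_limit_msu):
    """Return True when the combined usage of the LPARs in a capacity
    group exceeds the group's defined limit, i.e. when group capping
    would need to throttle the members."""
    return sum(lpar_usage_msu.values()) > group_limit_msu

# Hypothetical three-LPAR group with a 400 MSU group limit:
usage = {"PROD1": 180, "PROD2": 150, "TEST1": 90}  # totals 420 MSU
print(group_over_limit(usage, 400))  # True: the group must be capped
```

The point of the feature is that only the group total is constrained, so an individual LPAR may burst above its share as long as the group stays within its defined capacity.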

z10 EC Design and Technology

The System z10 EC is designed to provide balanced system performance. From processor storage to the system's I/O and network channels, end-to-end bandwidth is provided and designed to deliver data where and when it is needed.

The processor subsystem is comprised of one to four books connected via a point-to-point SMP network. The change to point-to-point connectivity eliminates the need for the jumper book, as had been used on the System z9 and z990 systems. The z10 EC design provides growth paths up to a 64 engine system where each of the 64 PUs has full access to all system resources, specifically memory and I/O.

Each book is comprised of a Multi-Chip Module (MCM), memory cards and I/O fanout cards. The MCMs, which measure approximately 96 x 96 millimeters, contain the Processor Unit (PU) chips; the "SCD" and "SCC" chips of z990 and z9 have been replaced by a single "SC" chip which includes both the L2 cache and the SMP fabric ("storage controller") functions. There are two SC chips on each MCM, each of which is connected to all five CP chips on that MCM. The MCM contains 103 glass ceramic layers to provide interconnection between the chips and the off-module environment. Four models (E12, E26, E40 and E56) have 17 PUs per book, and the high capacity z10 EC Model E64 has one 17 PU book and three 20 PU books. Each PU measures 21.973 mm x 21.1658 mm and has an L1 cache divided into a 64 KB cache for instructions and a 128 KB cache for data. Each PU also has an L1.5 cache; this cache is 3 MB in size. Each L1 cache has a Translation Look-aside Buffer (TLB) of 512 entries associated with it. The PU, which uses a new high-frequency z/Architecture microprocessor core, is built on CMOS 11S chip technology and has a cycle time of approximately 0.23 nanoseconds.

The design of the MCM technology on the z10 EC provides the flexibility to configure the PUs for different uses; there are two spares and up to 11 System Assist Processors (SAPs) standard per system. The remaining inactive PUs on each installed MCM are available to be characterized as either CPs, ICF processors for Coupling Facility applications, IFLs for Linux applications and z/VM hosting Linux as a guest, System z10 Application Assist Processors (zAAPs), System z10 Integrated Information Processors (zIIPs), or as optional SAPs, and provide you with tremendous flexibility in establishing the best system for running applications. Each model of the z10 EC must always be ordered with at least one CP, IFL or ICF.

Each book can support from the 16 GB minimum memory up to 384 GB, with up to 1.5 TB per system. 16 GB of the total memory is delivered and reserved for the fixed Hardware Systems Area (HSA). There are up to 48 IFB links per system at 6 GBps each.

The z10 EC supports a combination of Memory Bus Adapter (MBA) and Host Channel Adapter (HCA) fanout cards. New MBA fanout cards are used exclusively for ICB-4. New ICB-4 cables are needed for z10 EC and are only available on models E12, E26, E40 and E56. The E64 model may not have ICBs. The InfiniBand Multiplexer (IFB-MP) card replaces the Self-Timed Interconnect Multiplexer (STI-MP) card. There are two types of HCA fanout cards: the HCA2-C, which is copper and is always used to connect to I/O (IFB-MP card), and the HCA2-O, which is optical and used for customer InfiniBand coupling, being announced and made generally available in 2Q08.

Data transfers are direct between books via the level 2 cache chip in each MCM. Level 2 Cache is shared by all PU chips on the MCM. PR/SM provides the ability to configure and operate as many as 60 Logical Partitions which may be assigned processors, memory and I/O resources from any of the available books.
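As a sanity check on the cycle-time figure above, a cycle time of roughly 0.23 nanoseconds corresponds to a clock frequency in the neighborhood of 4.3–4.4 GHz, consistent with the z10 EC's high-frequency core. The arithmetic, shown in Python:

```python
# Clock frequency is the reciprocal of the cycle time.
cycle_time_s = 0.23e-9                # ~0.23 nanoseconds per cycle
frequency_ghz = 1.0 / cycle_time_s / 1e9
print(round(frequency_ghz, 2))        # ~4.35 GHz
```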

11
z10 EC Models

The z10 EC has been designed to offer high performance and efficient I/O structure. All z10 EC models ship with two frames: an A-Frame and a Z-Frame, which together support the installation of up to three I/O cages. The z10 EC will continue to use the Cargo cage for its I/O, supporting up to 960 ESCON® and 256 FICON® channels on the Model E12 (64 I/O features) and up to 1,024 ESCON and 336 FICON channels (84 I/O features) on the Models E26, E40, E56 and E64.

To increase the I/O device addressing capability, the I/O subsystem provides support for multiple subchannel sets (MSS), which are designed to allow improved device connectivity for Parallel Access Volumes (PAVs). To support the highly scalable multi-book system design, the z10 EC I/O subsystem uses the Logical Channel Subsystem (LCSS) which provides the capability to install up to 1024 CHPIDs across three I/O cages (256 per operating system image). The Parallel Sysplex Coupling Link architecture and technology continues to support high speed links providing efficient transmission between the Coupling Facility and z/OS systems. HiperSockets provides high-speed capability to communicate among virtual servers and logical partitions. HiperSockets is now improved with IP version 6 (IPv6) support; this is based on high-speed TCP/IP memory speed transfers and provides value in allowing applications running in one partition to communicate with applications running in another without dependency on an external network. Industry standard and openness are design objectives for I/O in System z10 EC.

The z10 EC has five models offering between 1 to 64 processor units (PUs), which can be configured to provide a highly scalable solution designed to meet the needs of both high transaction processing applications and On Demand Business. Four models (E12, E26, E40 and E56) have 17 PUs per book, and the high capacity z10 EC Model E64 has one 17 PU book and three 20 PU books. The PUs can be characterized as either CPs, IFLs, ICFs, zAAPs or zIIPs. An easy-to-enable ability to “turn off” CPs or IFLs is available on z10 EC, allowing you to purchase capacity for future use with minimal or no impact on software billing. An MES feature will enable the “turned off” CPs or IFLs for use where you require the increased capacity. There are a wide range of upgrade options available in getting to and within the z10 EC.

[Figure: Concurrent Upgrade paths – z990 and z9 EC upgrade to the z10 EC; within the z10 EC family, E12 → E26 → E40 → E56 → E64]

The z10 EC hardware model numbers (E12, E26, E40, E56 and E64) on their own do not indicate the number of PUs which are being used as CPs. For software billing purposes only, there will be a Capacity Indicator associated with the number of PUs that are characterized as CPs. This number will be reported by the Store System Information (STSI) instruction for software billing purposes only. There is no affinity between the hardware model and the number of CPs. For example, it is possible to have a Model E26 which has 13 PUs characterized as CPs, so for software billing purposes, the STSI instruction would report 713.

z10 EC Model Capacity IDs:
• 700, 401 to 412, 501 to 512, 601 to 612 and 701 to 764
• Capacity setting 700 does not have any CP engines
• Nxx, where N = the capacity setting of the engine, and xx = the number of PUs characterized as CPs in the CEC
• Once xx exceeds 12, then all CP engines are full capacity

CP Capacity Relative to Full Speed (xx = 01 through 12):
• 7xx = 100%
• 6xx ~ 69.35%
• 5xx ~ 51.20%
• 4xx ~ 23.69%

[Figure: Sub Capacity Models – capacity settings 4xx, 5xx and 6xx below full-capacity 7xx, across models E12, E26, E40, E56 and E64]

z10 EC model upgrades
There are full upgrades within the z10 EC models and upgrades from any z9 EC or z990 to any z10 EC. Upgrade of z10 EC Models E12, E26, E40 and E56 to the E64 is disruptive. When upgrading to z10 EC Model E64, unlike the z9 EC, the first book is retained. There are no direct upgrades from the z9 BC or IBM eServer zSeries 900 (z900), or previous generation IBM eServer zSeries servers.
IBM is increasing the number of sub-capacity engines on the z10 EC. A total of 36 sub-capacity settings are available on any hardware model for 1-12 CPs. Models with 13 CPs or greater must be full capacity.

For the z10 EC models with 1-12 CPs, there are four capacity settings per engine for central processors (CPs). The entry point (Model 401) is approximately 23.69% of a full speed CP (Model 701). All specialty engines continue to run at full speed. Sub-capacity processors have availability of z10 EC features/functions and any-to-any upgradeability is available within the sub-capacity matrix. All CPs must be the same capacity setting size within one z10 EC.

z10 EC Base and Subcapacity Offerings
• The z10 EC has 36 additional capacity settings at the low end
• Available on ANY hardware model for 1 to 12 CPs. Models with 13 CPs or greater have to be full capacity
• All CPs must be the same capacity within the z10 EC
• All specialty engines run at full capacity. The one for one entitlement to purchase one zAAP or one zIIP for each CP purchased is the same for CPs of any capacity
• Only 12 CPs can have granular capacity; other PUs must be CBU or characterized as specialty engines
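The capacity-indicator rules above can be collected into a small sketch. This is illustrative Python, not an IBM tool; the function name is invented, and the relative-capacity values are the approximate ratios quoted in this guide.

```python
# Sketch (assumption, not an IBM utility): derive the z10 EC capacity
# model ID reported by STSI from a capacity setting (4, 5, 6 or 7) and
# the number of PUs characterized as CPs.

RELATIVE_CAPACITY = {7: 1.00, 6: 0.6935, 5: 0.5120, 4: 0.2369}

def capacity_model_id(setting: int, cps: int) -> str:
    if not 0 <= cps <= 64:
        raise ValueError("z10 EC supports at most 64 CPs")
    if cps == 0:
        return "700"  # capacity setting 700 has no CP engines
    if setting != 7 and not 1 <= cps <= 12:
        raise ValueError("sub-capacity settings apply to 1-12 CPs only;"
                         " 13 or more CPs must be full capacity")
    return f"{setting}{cps:02d}"

# A Model E26 with 13 full-capacity CPs reports 713 for software billing:
assert capacity_model_id(7, 13) == "713"
# The entry point, Model 401, runs at roughly 23.69% of a full-speed CP:
assert capacity_model_id(4, 1) == "401" and RELATIVE_CAPACITY[4] == 0.2369
```

The guard clauses mirror the rules in the bullets above: granular capacity exists only for 1-12 CPs, and capacity setting 700 carries no CP engines.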

z10 EC Performance

The performance design of the z/Architecture can enable the server to support a new standard of performance for applications through expanding upon a balanced system approach. As CMOS technology has been enhanced to support not only additional processing power, but also more PUs, the entire server is modified to support the increase in processing power. The I/O subsystem supports a greater amount of bandwidth than previous generations through internal changes, providing for larger and faster volume of data movement into and out of the server. Support of larger amounts of data within the server required improved management of storage configurations, made available through integration of the operating system and hardware support of 64-bit addressing. The combined balanced system design allows for increases in performance across a broad spectrum of work.

Large Systems Performance Reference
IBM’s Large Systems Performance Reference (LSPR) method is designed to provide comprehensive z/Architecture processor capacity ratios for different configurations of Central Processors (CPs) across a wide variety of system control programs and workload environments. For z10 EC, the z/Architecture processor capacity indicator is defined with a (7XX) notation, where XX is the number of installed CPs.

Based on using an LSPR mixed workload, the performance of the z10 EC (2097) 701 is expected to be up to 1.62 times that of the z9 EC (2094) 701.

The LSPR contains the Internal Throughput Rate Ratios (ITRRs) for the new z10 EC and the previous-generation zSeries processor families based upon measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user may experience will vary depending upon considerations such as the amount of multiprogramming in the user’s job stream, the I/O configuration, and the workload processed.

LSPR workloads have been updated to reflect more closely your current and growth workloads. The classification Java Batch (CB-J) has been replaced with a new classification for Java Batch called ODE-B. The remainder of the LSPR workloads are the same as those used for the z9 EC LSPR. The typical LPAR configuration table is used to establish single-number metrics such as MIPS and MSUs. The z10 EC LSPR will rate all z/Architecture processors running in LPAR mode, 64-bit mode, and assumes that HiperDispatch is enabled.

For more detailed performance information, consult the Large Systems Performance Reference (LSPR) available at: http://www.ibm.com/servers/eserver/zseries/lspr/.
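The LSPR ratio quoted above is simple to apply but easy to misuse; the following sketch shows the arithmetic only. The 1.62 figure is the mixed-workload ITRR from this guide; the baseline capacity number in the example is made up for illustration, and real sizing should use the full LSPR tables rather than a single ratio.

```python
# Hedged sketch: projecting capacity from the LSPR mixed-workload ratio.
# 1.62 is the z10 EC 701 vs z9 EC 701 upper-bound ratio quoted above.

Z10_VS_Z9_ITRR = 1.62

def projected_z10_701(z9_701_capacity: float) -> float:
    """Upper-bound projection for a z10 EC 701 from a z9 EC 701 figure."""
    return z9_701_capacity * Z10_VS_Z9_ITRR

# With an illustrative (invented) z9 EC 701 figure of 100 capacity units:
assert abs(projected_z10_701(100.0) - 162.0) < 1e-9
```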

z10 EC I/O Subsystem

The z10 EC contains an I/O subsystem infrastructure which uses an I/O cage that provides 28 I/O slots and the ability to have one to three I/O cages delivering a total of 84 I/O slots. ESCON, FICON Express4, FICON Express2, FICON Express, OSA-Express3 LR, OSA-Express2, and Crypto Express2 features plug into the z10 EC I/O cage along with any ISC-3s and InfiniBand Multiplexer (IFB-MP) cards. All I/O features and their support cards can be hot-plugged in the I/O cage. Installation of an I/O cage remains a disruptive MES, so the Plan Ahead feature remains an important consideration when ordering a z10 EC system. Each model ships with one I/O cage as standard in the A-Frame (the A-Frame also contains the Central Electronic Complex [CEC] cage where the books reside) and any additional I/O cages are installed in the Z-Frame. Each IFB-MP has a bandwidth up to 6 GigaBytes per second (GB/sec) for I/O domains and MBA fanout cards provide 2.0 GB/sec for ICB-4s.

The z10 EC continues to support all of the features announced with the System z9 EC such as:
• Logical Channel Subsystems (LCSSs) and support for up to 60 logical partitions
• Increased number of Subchannels (63.75k)
• Multiple Subchannel Sets (MSS)
• Redundant I/O Interconnect
• Physical Channel IDs (PCHIDs)
• System Initiated CHPID Reconfiguration
• Logical Channel SubSystem (LCSS) Spanning

z10 EC Channels and I/O Connectivity

ESCON Channels
The z10 EC supports up to 1,024 ESCON channels. The high density ESCON feature has 16 ports, 15 of which can be activated for customer use. One port is always reserved as a spare which is activated in the event of a failure of one of the other ports. For high availability the initial order of ESCON features will deliver two 16-port ESCON features and the active ports will be distributed across those features.

Fibre Channel Connectivity
The on demand operating environment requires fast data access, continuous data availability, and improved flexibility, all with a lower cost of ownership. The four port FICON Express4 and FICON Express2 features available on the z9 EC continue to be supported on the System z10 EC.

FICON Express4 Channels
The z10 EC supports up to 336 FICON Express4 channels, each one operating at 1, 2 or 4 Gb/sec auto-negotiated. The FICON Express4 features are available in long wavelength (LX) and short wavelength (SX). For customers exploiting LX, there are two options available for unrepeated distances of up to 4 kilometers (2.5 miles) or up to 10 kilometers (6.2 miles). Both LX features use 9 micron single mode fiber optic cables. The SX feature uses 50 or 62.5 micron multimode fiber optic cables. Each FICON Express4 feature has 4 independent channels (ports) and can be configured to carry native FICON traffic or Fibre Channel (SCSI) traffic. LX and SX cannot be intermixed on a single feature. The receiving devices must correspond to the appropriate LX or SX feature. The maximum number of FICON Express4 features is 84 using three I/O cages.
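The channel maximums above follow directly from features-per-model and ports-per-feature. The sketch below is an illustrative check, not an IBM configurator; it ignores per-type caveats such as the 120-channel FICON Express carry-forward limit mentioned later in this guide.

```python
# Sketch (assumption): FICON channel maximums as features x ports,
# using the figures quoted in this guide.

PORTS_PER_FEATURE = {"FICON Express4": 4, "FICON Express2": 4,
                     "FICON Express": 2}
MAX_FICON_FEATURES = {"E12": 64, "E26": 84, "E40": 84, "E56": 84, "E64": 84}

def max_channels(model: str, feature: str) -> int:
    return MAX_FICON_FEATURES[model] * PORTS_PER_FEATURE[feature]

# 84 FICON Express4 features x 4 ports gives the 336-channel maximum:
assert max_channels("E26", "FICON Express4") == 336
# The Model E12, limited to 64 features, tops out at 256 FICON channels:
assert max_channels("E12", "FICON Express4") == 256
```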

FICON Express2 Channels
The z10 EC supports carrying forward up to 336 FICON Express2 channels, each one operating at 1 or 2 Gb/sec auto-negotiated. The FICON Express2 features are available in long wavelength (LX) using 9 micron single mode fiber optic cables and short wavelength (SX) using 50 and 62.5 micron multimode fiber optic cables. Each FICON Express2 feature has four independent channels (ports) and each can be configured to carry native FICON traffic or Fibre Channel (SCSI) traffic. LX and SX cannot be intermixed on a single feature. The maximum number of FICON Express2 features is 84, using three I/O cages.

FICON Express Channels
The z10 EC also supports carrying forward FICON Express LX and SX channels from z9 EC and z990 (up to 120 channels), each channel operating at 1 or 2 Gb/sec auto-negotiated. Each FICON Express feature has two independent channels (ports).

The System z10 EC Model E12 is limited to 64 features – any combination of FICON Express4, FICON Express2 and FICON Express LX and SX features.

The FICON Express4, FICON Express2 and FICON Express features conform to the Fibre Connection (FICON) architecture and the Fibre Channel (FC) architecture, providing connectivity between any combination of servers, directors, switches, and devices in a Storage Area Network (SAN). Each of the four independent channels (FICON Express only supports two channels per feature) is capable of 1 gigabit per second (Gb/sec), 2 Gb/sec, or 4 Gb/sec (only FICON Express4 supports 4 Gbps) depending upon the capability of the attached switch or device. The link speed is auto-negotiated, point-to-point, and is transparent to users and applications. Not all switches and devices support 2 or 4 Gb/sec link data rates.

FICON Express4 and FICON Express2 Performance
Your enterprise may benefit from FICON Express4 and FICON Express2 with:
• Increased data transfer rates (bandwidth)
• Improved performance
• Increased number of start I/Os
• Reduced backup windows
• Channel aggregation to help reduce infrastructure costs

For more information about FICON, visit the IBM Redbooks® Web site at http://www.redbooks.ibm.com/ and search for SG24-5444. Additional FICON I/O connectivity information is available at: www-03.ibm.com/systems/z/connectivity/.

Extended distance FICON – improved performance at extended distance
An enhancement to the industry standard FICON architecture (FC-SB-3) helps avoid degradation of performance at extended distances by implementing a new protocol for “persistent” Information Unit (IU) pacing. Control units that exploit the enhancement to the architecture can increase the pacing count (the number of IUs allowed to be in flight from channel to control unit). Extended Distance FICON also allows the channel to “remember” the last pacing update for use on subsequent operations to help avoid degradation of performance at the start of each new operation.

Improved IU pacing can help to optimize the utilization of the link (for example – help keep a 4 Gbps link fully utilized at 50 km) and provide increased distance between servers and control units.

The requirements for channel extension equipment are simplified with the increased number of commands in flight. This may benefit z/OS Global Mirror (Extended Remote Copy – XRC) applications as the channel extension kit is no longer required to simulate (or spoof) specific channel commands. Simplifying the channel extension requirements may help reduce the total cost of ownership of end-to-end solutions.
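The benefit of a larger pacing count is the classic window-versus-bandwidth-delay-product effect. The back-of-the-envelope sketch below is not an IBM formula; it assumes roughly 200,000 km/s signal speed in fiber and an illustrative IU size to show why a small window cannot keep a long 4 Gbps link busy.

```python
# Hedged sketch: link utilization limited by IUs in flight versus the
# link's bandwidth-delay product. All parameter values are illustrative.

def link_utilization(pacing_count: int, iu_bytes: int,
                     distance_km: float, link_bps: float) -> float:
    rtt_s = 2 * distance_km / 200_000.0       # round trip in fiber (approx.)
    bdp_bytes = link_bps / 8 * rtt_s          # bandwidth-delay product
    return min(1.0, pacing_count * iu_bytes / bdp_bytes)

# 16 IUs of 8 KB cannot fill a 4 Gbps link at 50 km ...
assert link_utilization(16, 8192, 50, 4e9) < 0.6
# ... but a much larger persistent pacing count can:
assert link_utilization(255, 8192, 50, 4e9) == 1.0
```

This is the intuition behind "help keep a 4 Gbps link fully utilized at 50 km": the round trip at 50 km holds roughly 250 KB in flight, so a window of a few small IUs starves the link.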

Extended distance FICON is transparent to operating systems and applies to all the FICON Express4 and FICON Express2 features carrying native FICON traffic (CHPID type FC). For exploitation, the control unit must support the new IU pacing protocol.

The channel will default to current pacing values when operating with control units which cannot exploit extended distance FICON.

Concurrent Update
The FICON Express4 SX and LX features may be added to an existing z10 EC concurrently. This concurrent update capability allows you to continue to run workloads through other channels while the new FICON Express4 features are being added. This applies to CHPID types FC and FCP.

Continued Support of Spanned Channels and Logical Partitions
The FICON Express4 and FICON Express2 FICON and FCP (CHPID types FC and FCP) channel types can be defined as a spanned channel and can be shared among logical partitions within and across LCSSs.

Modes of Operation
There are two modes of operation supported by FICON Express4 and FICON Express2 SX and LX. These modes are configured on a channel-by-channel basis – each of the four channels can be configured in either of the two supported modes:
• Fibre Channel (CHPID type FC), which is native FICON or FICON Channel-to-Channel (server-to-server)
• Fibre Channel Protocol (CHPID type FCP), which supports attachment to SCSI devices via Fibre Channel switches or directors in z/VM, z/VSE, and Linux on System z10 environments

Native FICON Channels
Native FICON channels and devices can help to reduce bandwidth constraints and channel contention to enable easier server consolidation, new application growth, large business intelligence queries and exploitation of On Demand Business.

The FICON Express4, FICON Express2 and FICON Express channels support native FICON and FICON Channel-to-Channel (CTC) traffic for attachment to servers, disks, tapes, and printers that comply with the FICON architecture. Native FICON is supported by all of the z10 EC operating systems. Native FICON and FICON CTC are defined as CHPID type FC.

Because the FICON CTC function is included as part of the native FICON (FC) mode of operation, FICON CTC is not limited to intersystem connectivity (as is the case with ESCON), but will support multiple device definitions.

FICON Support for Cascaded Directors
Native FICON (FC) channels support cascaded directors. This support is for a single hop configuration only. Two-director cascading requires a single vendor high integrity fabric. Directors must be from the same vendor since cascaded architecture implementations can be unique. This type of cascaded support is important for disaster recovery and business continuity solutions because it can help provide high availability, extended distance connectivity, and (particularly with the implementation of 2 Gb/sec Inter Switch Links) has the potential for fiber infrastructure cost savings by reducing the number of channels for interconnecting the two sites.
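The cascading rules above (single hop, single vendor) can be expressed as a tiny validity check. The data model here is invented for illustration; it is not how the fabric actually represents directors.

```python
# Sketch (invented data model) of the cascaded-director rules: at most
# one inter-switch hop (two directors on the path) and a single-vendor,
# high-integrity fabric.

def valid_cascaded_path(directors: list) -> bool:
    if len(directors) > 2:                 # single hop: at most two directors
        return False
    vendors = {d["vendor"] for d in directors}
    return len(vendors) <= 1               # single-vendor fabric required

assert valid_cascaded_path([{"vendor": "X"}, {"vendor": "X"}])
assert not valid_cascaded_path([{"vendor": "X"}, {"vendor": "Y"}])
assert not valid_cascaded_path([{"vendor": "X"}] * 3)
```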

FICON cascaded directors have the added value of high integrity connectivity. New integrity features introduced within the FICON Express channel and the FICON cascaded switch fabric aid in the detection and reporting of any miscabling actions occurring within the fabric, and can prevent data from being delivered to the wrong end point.

[Figure: Two site non-cascaded director topology – each CEC connects to directors in both sites. Two site cascaded director topology – each CEC connects to local directors only; with Inter Switch Links (ISLs), less fiber cabling may be needed for cross-site connectivity.]

FCP Channels
z10 EC supports FCP channels, switches and FCP/SCSI disks with full fabric connectivity under Linux on System z and z/VM 5.2 (or later) for Linux as a guest under z/VM, under z/VM 5.2 (or later), and under z/VSE 3.1 for system usage including install and IPL. Support for FCP devices means that z10 EC servers are capable of attaching to select FCP-attached SCSI devices and may access these devices from Linux on z10 EC and z/VSE. This expanded attachability means that enterprises have more choices for new storage solutions, or may have the ability to use existing storage devices, thus leveraging existing investments and lowering total cost of ownership for their Linux implementations.

The same FICON features used for native FICON channels can be defined to be used for Fibre Channel Protocol (FCP) channels. FCP channels are defined as CHPID type FCP. The 4 Gb/sec capability on the FICON Express4 channel means that 4 Gb/sec link data rates are available for FCP channels as well.

FCP – increased performance
The Fibre Channel Protocol (FCP) Licensed Internal Code has been modified to help provide increased I/O operations per second for small block sizes. With FICON Express4, there may be up to 52,000 I/O operations per second (all reads, all writes, or a mix of reads and writes), a 60% increase compared to System z9. These results are achieved in a laboratory environment using one channel configured as CHPID type FCP with no other processing occurring and do not represent actual field measurements. A significant increase in I/O operations per second for small block sizes can also be expected with FICON Express2.

This FCP performance improvement is transparent to operating systems and applies to all the FICON Express4 and FICON Express2 features when configured as CHPID type FCP, communicating with SCSI devices.

FCP Full fabric connectivity
FCP full fabric support means that any number of (single vendor) FCP directors/switches can be placed between the server and an FCP/SCSI device, thereby allowing many “hops” through a Storage Area Network (SAN) for I/O connectivity. FCP full fabric connectivity enables multiple FCP switches/directors on a fabric to share links and therefore provides improved utilization of inter-site connected resources and infrastructure.

FICON Express enhancements for Storage Area Networks

N_Port ID Virtualization
N_Port ID Virtualization is designed to allow for sharing of a single physical FCP channel among multiple operating system images. Virtualization function is currently available for ESCON and FICON channels, and is now available for FCP channels. This new function offers improved FCP channel utilization due to fewer hardware requirements, and can reduce the complexity of physical FCP I/O connectivity.
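The N_Port ID Virtualization idea described below the FCP material can be pictured with a small sketch: one physical channel, many fabric identities. The class, names and ID layout here are invented for illustration; real NPIV assigns fabric-unique N_Port IDs and worldwide port names during fabric login.

```python
# Illustrative sketch (not the actual NPIV implementation): one physical
# FCP channel hands each sharing operating system image its own N_Port
# ID, so the SAN can zone and LUN-mask storage per image.

class FcpChannel:
    def __init__(self, pchid: str):
        self.pchid = pchid
        self._next_id = 1
        self._nport_ids = {}

    def login(self, os_image: str) -> int:
        """Fabric login: give each sharing image a distinct N_Port ID."""
        if os_image not in self._nport_ids:
            self._nport_ids[os_image] = self._next_id
            self._next_id += 1
        return self._nport_ids[os_image]

chan = FcpChannel("0140")            # PCHID value is an example
a = chan.login("LINUX01")
b = chan.login("LINUX02")
# Two images share one physical channel yet remain distinguishable:
assert a != b
# Logging in again returns the image's existing identity:
assert chan.login("LINUX01") == a
```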

Program Directed re-IPL
Program Directed re-IPL is designed to enable an operating system to determine how and from where it had been loaded. Further, Program Directed re-IPL may then request that it be reloaded again from the same load device using the same load parameters. In this way, Program Directed re-IPL allows a program running natively in a partition to trigger a re-IPL. This re-IPL is supported for both SCSI and ECKD™ devices. z/VM 5.3 provides support for guest exploitation.

FICON Link Incident Reporting
FICON Link Incident Reporting is designed to allow an operating system image (without operator intervention) to register for link incident reports, which can improve the ability to capture data for link error analysis. The information can be displayed and is saved in the system log.

Serviceability Enhancements
Request Node Identification Data (RNID) is designed to facilitate the resolution of fiber optic cabling problems. You can now request RNID data for a device attached to a native FICON channel.

Connectivity for LANs – Open Systems Adapters
Networking enhancements for the OSA-Express family of features are designed to facilitate serviceability, help simplify the infrastructure, facilitate load balancing, reduce latency, improve performance, and allow ports to be combined in a single logical link for increased throughput and nondisruptive failover.

Local Area Network (LAN) connectivity for the z10 EC is being enhanced with the introduction of a dual port Open Systems Adapter-Express3 (OSA-Express3) 10 Gbps Long Reach. Open Systems Adapter-Express2 (OSA-Express2) continues to be supported on the z10 EC for connectivity to Local Area Networks (LANs), and supports 1000BASE-T Ethernet, Gigabit Ethernet (GbE) LX and SX, and 10 GbE LR. When OSA-Express3 10 GbE LR becomes available, OSA-Express2 10 GbE LR will no longer be available for ordering.

The OSA-Express3 and OSA-Express2 features are hot-pluggable, support the Multiple Image Facility (MIF) sharing of channels across logical partitions, and can be defined as a spanned channel to be shared among logical partitions within and across LCSSs. The maximum combined number of OSA-Express3 and OSA-Express2 features supported per server is 24 on the z10 EC (up to 48 ports). OSA-Express2 features can be carried forward on an upgrade from a z9 EC, z990 or z900 server. The OSA-Express features are not supported on z10 EC servers.

The OSA-Express2 1000BASE-T Ethernet feature and the OSA-Express2 Gigabit Ethernet (GbE) feature support the IBM Communication Controller for Linux (CCL) on the System z platform. The OSA-Express2 OSN (OSA for NCP) supports the Channel Data Link Control (CDLC) protocol, which provides direct access from the host operating system (such as z/OS and TPF) to the CCL.

With the large volume and complexity of today’s network traffic, the z10 EC offers systems programmers and network administrators the ability to more easily solve network problems. With the introduction of the OSA-Express Network Traffic Analyzer and QDIO Diagnostic Synchronization on the System z and available on the z10 EC, customers will have the ability to capture trace/trap data and forward it to z/OS 1.8 tools for easier problem determination and resolution.

This function is designed to allow the operating system to control the sniffer trace for the LAN and capture the records into host memory and storage (file systems), using existing host operating system tools to format, edit, and process the sniffer records.

OSA-Express Network Traffic Analyzer is exclusive to the z10 EC, z9 EC and z9 BC, is applicable to the OSA-Express3 and OSA-Express2 features when configured as CHPID type OSD (QDIO), and is supported by z/OS.

Dynamic LAN idle for z/OS
Dynamic LAN idle is designed to reduce latency and improve network performance by dynamically adjusting the inbound blocking algorithm. When enabled, the z/OS TCP/IP stack is designed to adjust the inbound blocking algorithm to best match the application requirements.

For latency sensitive applications, the blocking algorithm is modified to be “latency sensitive.” For streaming (throughput sensitive) applications, the blocking algorithm is adjusted to maximize throughput. The z/OS TCP/IP stack can dynamically detect the application requirements, making the necessary adjustments to the blocking algorithm. The monitoring of the application and the blocking algorithm adjustments are made in real-time, dynamically adjusting the application’s LAN performance.

System administrators can authorize the z/OS TCP/IP stack to enable a dynamic setting, which was previously a static setting. The z/OS TCP/IP stack is able to help determine the best setting for the current running application, based on system configuration, inbound workload volume, CPU utilization, and traffic patterns.

Link aggregation for z/VM in Layer 2 mode
z/VM Virtual Switch-controlled (VSWITCH-controlled) link aggregation (IEEE 802.3ad) allows you to dedicate an OSA-Express2 (or OSA-Express3) port to the z/VM operating system when the port is participating in an aggregated group when configured in Layer 2 mode. Link aggregation (trunking) is designed to allow you to combine multiple physical OSA-Express3 and OSA-Express2 ports (of the same type, for example 1GbE or 10GbE) into a single logical link for increased throughput and for nondisruptive failover in the event that a port becomes unavailable.

• Aggregated link viewed as one logical trunk and containing all of the Virtual LANs (VLANs) required by the LAN segment
• Load balance communications across several links in a trunk to prevent a single link from being overrun
• Link aggregation between a VSWITCH and the physical network switch
• Point-to-point connections
• Up to eight OSA-Express3 or OSA-Express2 ports in one aggregated link
• Ability to dynamically add/remove OSA ports for “on demand” bandwidth
• Full-duplex mode (send and receive)
• Target links for aggregation must be of the same type (for example, Gigabit Ethernet to Gigabit Ethernet)

The Open Systems Adapter/Support Facility (OSA/SF) will provide status information on an OSA port – its “shared” or “exclusive use” state. OSA/SF is an integrated component of z/VM.

Link aggregation is exclusive to z10 EC, z9 EC and z9 BC, is applicable to the OSA-Express3 and OSA-Express2 features in Layer 2 mode when configured as CHPID type OSD (QDIO), and is supported by z/VM.

OSA Layer 3 Virtual MAC for z/OS
To simplify the infrastructure and to facilitate load balancing when an LPAR is sharing the same OSA Media Access Control (MAC) address with another LPAR, each operating system instance can now have its own unique “logical” or “virtual” MAC (VMAC) address. All IP addresses associated with a TCP/IP stack are accessible using their own VMAC address, instead of sharing the MAC address of an OSA port. This applies to Layer 3 mode and to an OSA port shared among Logical Channel Subsystems.

This support is designed to:
• Improve IP workload balancing
• Dedicate a Layer 3 VMAC to a single TCP/IP stack
• Remove the dependency on Generic Routing Encapsulation (GRE) tunnels
• Improve outbound routing
• Simplify configuration setup
• Allow WebSphere Application Server content-based routing to work with z/OS in an IPv6 network
• Allow z/OS to use a “standard” interface ID for IPv6 addresses
• Remove the need for PRIROUTER/SECROUTER function in z/OS

VMACs are currently available for Layer 2 mode in the z/VM and Linux on System z10 EC and System z9 environments. OSA Layer 3 VMAC is exclusive to z10 EC, z9 EC and z9 BC, is applicable to the OSA-Express3 LR and OSA-Express2 features when configured as CHPID type OSD (QDIO), and is supported by z/OS (and z/VM for z/OS guest exploitation).

OSA-Express3 and OSA-Express2 Ethernet features on z10 EC
The OSA-Express3 and OSA-Express2 features provide you with the function and scalability required to help satisfy the demands of your global businesses. With data rates of 10 or 100 Megabits per second (Mb/sec), 1 Gigabit per second (Gb/sec), and 10 Gb/sec, you can select the features that best suit your current and your future application requirements.
• OSA-Express3 10 Gigabit Ethernet LR
• OSA-Express2 Gigabit Ethernet LX
• OSA-Express2 Gigabit Ethernet SX
• OSA-Express2 1000BASE-T Ethernet
• OSA-Express2 10 Gigabit Ethernet LR

The OSA-Express3 and OSA-Express2 Ethernet features support the following CHPID types:

CHPID   OSA-Express3/           Purpose / Traffic
Type    OSA-Express2 Features
OSC     1000BASE-T              TN3270E, non-SNA DFT, IPL CECs and
                                logical partitions, operating system
                                console operations
OSD     1000BASE-T, GbE,        QDIO; TCP/IP traffic when Layer 3,
        10 GbE                  protocol-independent when Layer 2
OSE     1000BASE-T              Non-QDIO; SNA/APPN®/HPR and/or TCP/IP
OSN     1000BASE-T, GbE         OSA for NCP, providing support for IBM
                                Communication Controller for Linux (CCL)

Introducing OSA-Express3 10 GbE LR – designed to deliver increased throughput
Planned to be available second quarter 2008*, OSA-Express3 10 Gigabit Ethernet (GbE) has been designed to increase the throughput for standard frames (1492 byte) and jumbo frames (8992 byte) compared to OSA-Express2 10 GbE to help satisfy the bandwidth requirements of your applications. This increase in performance has been achieved through an enhancement to the architecture which supports direct host memory access by using a data router, eliminating “store and forward” delays.

When OSA-Express3 10 GbE LR becomes available, OSA-Express2 10 GbE LR will no longer be available for ordering.

The 10 GbE feature does not support auto-negotiation to any other speed; it supports 64B/66B coding, whereas GbE supports 8B/10B coding. Therefore, auto-negotiation to any other speed is not possible.

The OSA-Express3 10 Gigabits per second (Gbps) link data rate does not represent the actual throughput of the OSA-Express3 10 GbE LR feature. Actual throughput is dependent upon many factors, including traffic direction, the pattern of acknowledgement traffic, packet size, the application, TCP/IP, the network, the disk subsystem, and the number of clients being served.

The OSA-Express3 10 GbE has been designed with two PCI adapters, each with one port. Doubling the port density on a single feature helps to reduce the number of I/O slots required for high speed connectivity to the Local Area Network (LAN). Each port continues to be defined as CHPID type OSD, supporting the Queued Direct Input/Output (QDIO) architecture for high speed TCP/IP communication.

OSA-Express3 10 GbE LR is exclusive to z10 EC and supports CHPID type OSD. It is supported by z/OS, z/VM, z/VSE, z/TPF, and Linux on System z.

The OSA-Express2 1000BASE-T Ethernet feature
IBM System z10 EC continues to support the expanded family of OSA-Express2 features which include 1000BASE-T Ethernet, supporting a link data rate of 10, 100, or 1000 Mb/sec over a copper infrastructure. The OSA-Express2 1000BASE-T Ethernet feature continues to provide support for:
• OSA-Integrated Console Controller (OSA-ICC) – TN3270E and non-SNA DFT 3270 emulation
• Queued Direct Input/Output (QDIO), CHPID type OSD, for TCP/IP traffic when using Layer 3, and protocol-independent packet forwarding when using Layer 2 (z/VM and Linux on System z10 EC and System z9)
• Non-QDIO, CHPID type OSE, for SNA/APPN/HPR and/or TCP/IP traffic
• Checksum Offload (exclusive to QDIO mode, CHPID type OSD)
• Spanned channels and sharing among logical partitions
• Jumbo frames in QDIO mode (when operating at 1 Gb/sec)
• Auto-negotiation (the target device must also be set to auto-negotiate)
• Category 5 Unshielded Twisted Pair (UTP) cabling

The OSA-Express2 1000BASE-T Ethernet feature supports the following modes of operation:
• OSA-ICC (CHPID type OSC), for 3270 data streams
• QDIO (CHPID type OSD), for TCP/IP traffic when Layer 3, and for protocol-independent when Layer 2
• Non-QDIO (CHPID type OSE), for TCP/IP and/or SNA/APPN/HPR traffic
• OSA for NCP (CHPID type OSN), to provide channel connectivity between operating systems and CCL

The OSA-Express2 1000BASE-T Ethernet feature is a dual-port feature occupying a single I/O slot and utilizes one CHPID per port; two CHPIDs per feature. Each port can be independently configured as CHPID type OSC, OSD, OSE, or OSN. The OSA-Express2 1000BASE-T Ethernet feature is offered on new builds while the OSA-Express 1000BASE-T Ethernet feature can be carried forward on an upgrade from a System z9, z990 or z900 server.

OSA-Express2 Gigabit Ethernet
The third generation of Gigabit Ethernet features is designed to support line speed – 1 Gb/sec in each direction or 2 Gb/sec full duplex – and support the following functions:
• QDIO architecture
• Layer 2
• Spanned channels
• SNMP
• IPv4 and IPv6
• 640 TCP/IP stacks per CHPID
• Jumbo frames (8992 byte frame size)
• Large send, for TCP/IP traffic and CPU efficiency, offloading the TCP segmentation processing from the host TCP/IP stack
• Concurrent LIC update
• OSA-Express2 OSN (OSA for NCP)
22
OSA-Express2 Gigabit Ethernet (GbE) operates in QDIO mode only and supports full duplex operation and jumbo frames (8992 byte frame size).

The OSA-Express2 GbE features continue to be dual-port features occupying a single I/O slot and utilize one CHPID per port; two CHPIDs per feature. Each port can be independently configured as CHPID type OSD or OSN. The OSA-Express2 Gigabit Ethernet SX and LX features are offered on new builds, while the OSA-Express Gigabit Ethernet features can be carried forward on an upgrade from a z990 server.

The OSA-Express2 GbE features are supported on the z10 EC, z9 EC, z9 BC, z990 and z890.

OSA-Express2 10 Gigabit Ethernet LR
The OSA-Express2 10 Gigabit Ethernet Long Reach (LR) can be used in an enterprise backbone, between campuses, to consolidate file servers and to connect server farms with z10 EC, z9 EC, z9 BC, z990, and z890 servers.

The OSA-Express2 10 GbE LR supports:
• Queued Direct Input/Output (QDIO)
• One port per feature
• A link data rate of 10 Gb/sec
• Full duplex mode
• Spanned channels
• SNMP
• IPv4 and IPv6
• Jumbo frames (8992 byte frame size)
• Checksum Offload for IPv4 packets
• Layer 2 support
• Large send
• 640 TCP/IP stacks
• Concurrent LIC update
• SC Duplex connector
• Single mode fiber (9 micron)
• An unrepeated distance of 10 km (6.2 miles)

The 10 Gigabit Ethernet (10 GbE) feature does not support auto-negotiation to any other speed. The 10 GbE feature supports 64B/66B coding, whereas the GbE supports 8B/10B coding.

The OSA-Express2 10 Gigabits per second (Gb/sec) link data rate does not represent the actual throughput of the OSA-Express2 10 GbE feature. Actual throughput is dependent upon many factors, including traffic direction, the pattern of acknowledgment traffic, packet size, the application, TCP/IP, the network, the disk subsystem, and the number of clients being served.

The OSA-Express2 10 GbE feature is supported on the z10 EC, z9 EC, z9 BC, z990 and z890.

IBM Communication Controller for Linux (CCL)
CCL is designed to help eliminate hardware dependencies, such as 3745/3746 Communication Controllers, ESCON channels, and Token-Ring LANs, by providing a software solution that allows the Network Control Program (NCP) to be run in Linux on z10 EC, freeing up valuable data center floor space.

CCL helps preserve mission-critical SNA functions, such as SNI, and the z/OS application workloads which depend upon these functions, allowing you to collapse SNA inside a z10 EC while exploiting and leveraging IP.

The OSA-Express2 GbE and 1000BASE-T Ethernet features provide support for CCL with OSA-Express2 OSN (Open Systems Adapter for NCP). This support is designed to require no changes to operating systems (it does require a PTF to support CHPID type OSN) and also allows TPF to exploit CCL. It is supported by z/VM for Linux and z/TPF guest environments.
OSA-Express2 OSN (OSA for NCP)
The OSA-Express2 OSN (OSA for NCP) can help to eliminate the requirement to have any form of external medium, and all related hardware, for communications between the host operating system and the CCL image. Traffic between the two images (operating system and CCL) is no longer required to flow on an external Local Area Network (LAN) or ESCON channel.

CHPID type OSN supports both SNA PU Type 5 and PU Type 2.1 channel connectivity.

Utilizing existing SNA support (multiple transmission groups), OSA-Express2 OSN support permits multiple connections between the same CCL image and the same host operating system image. It also allows multiple CCL images to communicate with multiple operating system images, supporting up to 180 connections (3745/3746 unit addresses) per CHPID type OSN. CHPID type OSN can also span LCSSs. The CCL image connects to the OSA-Express2 feature using QDIO architecture and uses the Linux QDIO (qeth) support, updated to support OSN device types.

OSA-Express2 OSN (OSA for NCP) support is exclusive to System z10 EC and System z9.

OSA-Express2 concurrent LIC update – an availability enhancement
The OSA-Express2 features have increased memory in comparison to the OSA-Express features and are designed to facilitate concurrent application of Licensed Internal Code (LIC) updates, allowing the application of LIC updates without requiring a configuration off/on of the features. This can help minimize the disruption to network traffic during the update.

OSA-Express2 concurrent LIC update applies to CHPID type OSD and is exclusive to the System z10 EC, System z9, and z990.

OSA Integrated Console Controller
The Open Systems Adapter Integrated Console Controller function (OSA-ICC), which is exclusive to the System z10 EC, System z9 and z990 servers since it is based on the OSA-Express2 and OSA-Express 1000BASE-T Ethernet features, supports the attachment of non-SNA 3270 terminals for operator console applications. Now, 3270 emulation for console session connections (TN3270E [RFC 2355] or non-SNA DFT 3270 emulation) is integrated in the System z platforms, which can help eliminate the requirement for external console controllers (2074, 3174), helping to reduce cost and complexity.

The OSA-ICC can be individually configured on a port-by-port basis. The OSA-ICC is enabled using CHPID type OSC. The OSA-ICC supports up to 120 client console sessions per port, either locally or remotely.

Support for this function is provided with z/OS, z/VM, z/VSE, and TPF.

OSA Enhancements

Remove L2/L3 LPAR-to-LPAR Restriction
OSA ports shared between virtual switches can now communicate whether the transport mode is the same (Layer 2 to Layer 2) or different (Layer 2 to Layer 3). This enhancement is designed to allow seamless mixing of Layer 2 and Layer 3 traffic, helping to reduce the total cost of networking. Previously, Layer 2 and Layer 3 TCP/IP connections through the same OSA port (CHPID) were unable to communicate with each other LPAR-to-LPAR using the Multiple Image Facility (MIF).

This enhancement is designed to facilitate a migration from Layer 3 to Layer 2 and to continue to allow LAN administrators to configure and manage their mainframe network topology using the same techniques as their non-mainframe topology.
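The effect of removing the L2/L3 restriction can be summarized as a simple predicate. This is an illustrative model only, not an IBM interface; the function and parameter names are hypothetical:

```python
# Illustrative model of the L2/L3 LPAR-to-LPAR restriction removal.
# Before: two LPARs sharing one OSA port (via MIF) could exchange traffic
# only if both connections used the same transport mode.
# After: Layer 2 and Layer 3 connections can mix seamlessly.

def lpar_to_lpar_allowed(mode_a: str, mode_b: str,
                         restriction_removed: bool) -> bool:
    if restriction_removed:
        return True                 # L2 <-> L3 mixing now permitted
    return mode_a == mode_b         # old rule: transport modes had to match

# One OSA CHPID shared by a Layer 2 z/VM virtual switch and a Layer 3 stack
assert not lpar_to_lpar_allowed("L2", "L3", restriction_removed=False)
assert lpar_to_lpar_allowed("L2", "L3", restriction_removed=True)
```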
OSA/SF Virtual MAC and VLAN ID Display Capability
The Open Systems Adapter/Support Facility (OSA/SF) has the capability to support virtual Medium Access Control (MAC) and Virtual Local Area Network (VLAN) identifications (IDs) associated with an OSA-Express2 feature configured as a Layer 2 interface. This information is now displayed as part of an OSA Address Table (OAT) entry. This information is independent of IPv4 and IPv6 formats. There can be multiple Layer 2 VLAN IDs associated to a single unit address. One group MAC can be associated to multiple unit addresses.

For additional information, view the IBM Redbook IBM System z Connectivity Handbook (SG24-5444) at: www.redbooks.ibm.com/.

HiperSockets
The HiperSockets function, also known as internal Queued Direct Input/Output (iQDIO) or internal QDIO, is an integrated function of the z10 EC server that provides users with attachments to up to sixteen high-speed “virtual” Local Area Networks (LANs) with minimal system and network overhead. HiperSockets eliminates the need to utilize I/O subsystem operations and the need to traverse an external network connection to communicate between logical partitions in the same z10 EC server.

Now, the HiperSockets internal networks on z10 EC can support two transport modes: Layer 2 (Link Layer) as well as the current Layer 3 (Network or IP Layer). Traffic can be Internet Protocol (IP) version 4 or version 6 (IPv4, IPv6) or non-IP (AppleTalk, DECnet, IPX, NetBIOS, or SNA). HiperSockets devices are now protocol-independent and Layer 3 independent. Each HiperSockets device has its own Layer 2 Media Access Control (MAC) address, which is designed to allow the use of applications that depend on the existence of Layer 2 addresses, such as DHCP servers and firewalls.

Layer 2 support can help facilitate server consolidation. Complexity can be reduced, network configuration is simplified and intuitive, and LAN administrators can configure and maintain the mainframe environment the same way as they do a non-mainframe environment. With support of the new Layer 2 interface by HiperSockets, packet forwarding decisions are now based upon Layer 2 information instead of Layer 3 information. The HiperSockets device performs automatic MAC address generation and assignment to allow uniqueness within and across logical partitions (LPs) and servers. MAC addresses can also be locally administered. The use of Group MAC addresses for multicast is supported, as well as broadcasts to all other Layer 2 devices on the same HiperSockets network.
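The Layer 2 delivery rules described above — unicast by destination MAC address, Group MACs for multicast, and broadcast to all other devices on the same internal network — can be modeled in a few lines. This is an illustrative sketch of the decision logic only, not an IBM interface; all names and addresses are hypothetical:

```python
# Illustrative model of Layer 2 frame delivery on a HiperSockets
# internal LAN: unicast by MAC, Group MACs for multicast, and
# broadcast to all other devices on the same network.

BROADCAST = "ff:ff:ff:ff:ff:ff"

def deliver(frame_dst, sender, devices, group_members):
    """Return the set of device names that receive the frame.

    devices:       {mac: device_name} for one virtual LAN
    group_members: {group_mac: set(device_names)} for multicast
    """
    if frame_dst == BROADCAST:
        return {d for d in devices.values() if d != sender}
    if frame_dst in group_members:            # Group MAC (multicast)
        return group_members[frame_dst] - {sender}
    if frame_dst in devices:                  # unicast
        return {devices[frame_dst]}
    return set()                              # unknown MAC: dropped

devices = {"02:00:00:00:00:01": "LPAR1",
           "02:00:00:00:00:02": "LPAR2",
           "02:00:00:00:00:03": "LPAR3"}
groups = {"01:00:5e:00:00:01": {"LPAR2", "LPAR3"}}

print(deliver("02:00:00:00:00:02", "LPAR1", devices, groups))
print(deliver(BROADCAST, "LPAR1", devices, groups))
```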
Datagrams are only delivered between HiperSockets devices that are using the same transport mode (Layer 2 with Layer 2 and Layer 3 with Layer 3). A Layer 2 device cannot communicate directly with a Layer 3 device in another LP.

A HiperSockets device can filter inbound datagrams by Virtual Local Area Network identification (VLAN ID, IEEE 802.1q), the Ethernet destination MAC address, or both. Filtering can help reduce the amount of inbound traffic being processed by the operating system, helping to reduce CPU utilization.

Analogous to the respective Layer 3 functions, HiperSockets Layer 2 devices can be configured as primary or secondary connectors or multicast routers. This is designed to enable the creation of high performance and high availability Link Layer switches between the internal HiperSockets network and an external Ethernet, or to connect the HiperSockets Layer 2 networks of different servers. The new HiperSockets Multiple Write Facility for z10 EC is also supported for Layer 2 HiperSockets devices, thus allowing performance improvements for large Layer 2 datastreams.

HiperSockets Layer 2 support is exclusive to z10 EC and is supported by z/OS, Linux on System z environments, and z/VM for Linux guest exploitation.

HiperSockets Multiple Write Facility for increased performance
HiperSockets performance has been enhanced to allow for the streaming of bulk data over a HiperSockets link between logical partitions (LPs). The receiving LP can now process a much larger amount of data per I/O interrupt. This enhancement is transparent to the operating system in the receiving LPAR. HiperSockets Multiple Write Facility is designed to reduce CPU utilization of the sending LPAR.

HiperSockets Multiple Write Facility on the z10 EC requires at a minimum:
• z/OS 1.9 with PTFs (Second quarter, 2008*)

Security
Protecting sensitive data is a growing concern for companies around the globe. The importance of securing critical business data and customer information reaches to the corporate boardroom, because failure to protect these assets may result in high out-of-pocket costs and, more importantly, may also result in lost customer and investor confidence. Data protection may also be required by stringent government regulations and contractual obligations with business partners. Whether the data moves across the network or across town on a tape in a truck, the object is to make it usable to those who are authorized and inaccessible to those who are not.

With IBM Encryption Facility for z/OS software and Integrated Cryptographic Service Facility (ICSF), and with Encryption Facility for z/VSE, IBM offers solutions for encrypting data at rest that exploit the existing strengths of the mainframe. The Encryption Facility for z/OS and Encryption Facility for z/VSE software allows you to exchange encrypted tapes across the enterprise and with partners even if the recipient does not have access to IBM software.
Cryptography
The z10 EC includes both standard cryptographic hardware and optional cryptographic features for flexibility and growth capability. IBM has a long history of providing hardware cryptographic solutions, from the development of the Data Encryption Standard (DES) in the 1970s to delivering integrated cryptographic hardware in a server to achieve the US Government’s highest FIPS 140-2 Level 4 rating for secure cryptographic hardware.

The IBM System z10 EC cryptographic functions include the full range of cryptographic operations needed for e-business, e-commerce, and financial institution applications. In addition, custom cryptographic functions can be added to the set of functions that the z10 EC offers.

New integrated clear key encryption security features on z10 EC include support for a higher advanced encryption standard and more secure hashing algorithms. Performing these functions in hardware is designed to contribute to improved performance.

Enhancements to eliminate preplanning in the cryptography area include the new System z10 function Dynamically Add Crypto to a logical partition. Changes to image profiles, to support Crypto Express2 features, are available without an outage to the logical partition. Crypto Express2 features can also be dynamically deleted or moved.

CP Assist for Cryptographic Function (CPACF)
CPACF supports clear-key encryption. The function is activated using a no-charge enablement feature and offers the following on every CPACF that is shared between two CPs or Processor Units (PUs) identified as an Integrated Facility for Linux (IFL):
• Data Encryption Standard (DES)
• Triple Data Encryption Standard (TDES)
• Advanced Encryption Standard (AES) for 128-bit keys
• Secure Hash Algorithm, SHA-1 and SHA-256
• Pseudo Random Number Generation (PRNG)

Enhancements to CP Assist for Cryptographic Function (CPACF)
CPACF has been enhanced to include support of the following on CPs and IFLs:
• Advanced Encryption Standard (AES) for 256-bit keys
• SHA-384 and SHA-512 for message digest

SHA-1 and SHA-512 are shipped enabled and do not require the enablement feature. Support for CPACF is also available using the Integrated Cryptographic Service Facility (ICSF). ICSF is a component of z/OS and is designed to transparently use the available cryptographic functions, whether CPACF or Crypto Express2, to balance the workload and help address the bandwidth requirements of your applications.

The enhancements to CPACF are exclusive to the System z10 and supported by z/OS, z/VM, z/VSE and Linux on System z.

A third generation Cryptographic feature – Crypto Express2
Today, customers can pre-plan the addition of Crypto Express2 features to logical partitions (LPs) by using the Crypto page in the image profile to define the Cryptographic Candidate List, Cryptographic Online List, and Usage and Control Domain Indexes in advance of Crypto hardware installation.

With the change to Dynamically Add Crypto to Logical Partition, changes to image profiles, to support Crypto Express2 features, are available without an outage to the logical partition. Customers can also dynamically delete or move Crypto Express2 features. Pre-planning is no longer required.

This enhancement is exclusive to System z10 and is supported by z/OS.
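The clear-key algorithms CPACF accelerates are the same primitives exposed by ordinary software libraries; on Linux on System z, for example, the kernel can route such operations to the hardware transparently. A minimal illustration of the SHA-256 and SHA-512 digests mentioned above, using Python's standard hashlib rather than any IBM API:

```python
# Illustrative only: standard-library hashing of a message with the
# SHA-256 (base CPACF) and SHA-512 (z10 CPACF enhancement) algorithms.
import hashlib

msg = b"IBM System z10 EC"

digest256 = hashlib.sha256(msg).hexdigest()
digest512 = hashlib.sha512(msg).hexdigest()

print(len(digest256) * 4, "bit digest")  # 256
print(len(digest512) * 4, "bit digest")  # 512
```

The same application code runs unchanged on any platform; hardware assists such as CPACF change where the work is done, not the programming interface.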
The Crypto Express2 feature, with two PCI-X adapters, is configurable and can be defined for secure key encrypted transactions (Coprocessor – the default) or SSL acceleration (Accelerator). The PCIXCC, PCICC, and PCICA features are not supported on z10 EC.

The Integrated Cryptographic Service Facility (ICSF), a component of z/OS, is designed to transparently use the available cryptographic functions, the CP Assist for Cryptographic Function (CPACF) as well as the Crypto Express2 features, to balance the workload and satisfy the requirements of the applications.

The Crypto Express2 feature is designed for Federal Information Processing Standard (FIPS) 140-2 Level 4 Certification. A performance benefit is expected with multitasking applications. A performance benefit may not be realized with single-threaded applications, which can utilize only one of the two coprocessors.

The Crypto Express2 feature supports the following:
• Consolidation and simplification via a single crypto coprocessor feature on System z10, System z9, and z990
• Compute-intensive public key cryptographic functions designed to help reduce CP utilization and increase system throughput
• Card Validation Value (CVV) generation and verification services for 19-digit Personal Account Numbers (PANs)
• Enabling use of less than 512-bit keys for clear key RSA operations
• 2048-bit key RSA management capability
• Functions previously supported by the PCICA and PCIXCC features offered on System z10, including:
– Compute-intensive public key cryptographic functions to help reduce CP usage and increase system throughput
– Hardware acceleration for Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols to support secure On Demand Business applications and transactions
– SSL performance equivalent to the PCICA feature
– The functional enhancements announced in April 2004, namely: PKE MRP support, PKD zero pad support, TDES DUKPT, and EMV2000 User Defined Extension (UDX) Service Offering – programmable to deploy standard functions and algorithms
• Up to eight features per server
– With Crypto Express2, the System z10, System z9, and z990 can have up to sixteen secure key coprocessors
– With Crypto Express2, the System z10, System z9 and z990 servers can utilize up to sixteen cryptographic coprocessors for clear key SSL acceleration
– A mixture of both secure and clear key applications can run on the same Crypto Express2 feature
– Based on the increased throughput, the ability to consolidate both secure key and clear key crypto workloads and I/O slots on the same feature

All logical partitions in all Logical Channel SubSystems (LCSSs) have access to the Crypto Express2 feature, up to 60 LPARs per feature. The Crypto Express2 feature occupies a card slot but does not use CHPIDs.

The Crypto Express2 feature is exclusive to System z10, System z9 and z990.

Configurable Crypto Express2 feature
The Crypto Express2 feature has two PCI-X adapters. Each of the PCI-X adapters can be defined as either a Coprocessor or an Accelerator.

• Crypto Express2 Coprocessor – for secure key encrypted transactions (the default) – is:
– Designed to support security-rich cryptographic functions, use of secure encrypted key values, and User Defined Extensions (UDX)
– Designed for Federal Information Processing Standard (FIPS) 140-2 Level 4 certification
• Crypto Express2 Accelerator – for Secure Sockets Layer (SSL) acceleration – is:
– Designed to support clear key RSA operations
– Designed to offload compute-intensive RSA public-key and private-key cryptographic operations employed in the SSL protocol

Crypto Express2 features can be carried forward from z9 EC to the new System z10, so customers may continue to take advantage of the SSL performance and the configuration capability.

The configurable Crypto Express2 feature is exclusive to the System z10 and System z9, and is supported by z/OS and z/OS.e (on z9 BC only), z/VM, z/VSE, and Linux on System z. z/VSE offers support for clear-key SSL transactions only. Current versions of z/OS, z/OS.e, z/VM and Linux on System z offer support for both clear-key and secure-key operations.

Continued support for TKE workstation and Smart Card Reader

TKE 5.2 workstation to enhance security and convenience
The Trusted Key Entry (TKE) workstation and the TKE 5.2 level of Licensed Internal Code are optional features on the System z10. The TKE 5.2 Licensed Internal Code (LIC) is loaded on the TKE workstation prior to shipment. The TKE workstation offers security-rich local and remote key management, providing authorized persons a method of operational and master key entry, identification, exchange, separation, and update. The TKE workstation supports connectivity to an Ethernet Local Area Network (LAN) operating at 10 or 100 Mbps. Up to three TKE workstations can be ordered.

The TKE workstation is available on the System z10, System z9, z990 and z890.

Smart Card Reader
Support for an optional Smart Card Reader attached to the TKE 5.2 workstation allows for the use of smart cards that contain an embedded microprocessor and associated memory for data storage. Access to and use of confidential data on the smart cards is protected by a user-defined Personal Identification Number (PIN).

TKE 5.2 Licensed Internal Code (LIC) has added the capability to store key parts on DVD-RAMs and continues to support the ability to store key parts on paper or, optionally, on a smart card. TKE 5.2 LIC has limited the use of floppy diskettes to read only. The TKE 5.2 LIC can remotely control host cryptographic coprocessors using either a password-protected authority signature key pair in a binary file or on a smart card.

The optional TKE features are:
• TKE 5.2 LIC (#0857) and TKE workstation (#0839)
• TKE Smart Card Reader (#0887)
• TKE additional smart cards (#0888)

The Smart Card Reader, which can be attached to a TKE workstation with the 5.2 level of LIC, is available on the System z10, System z9 and z990.

Cryptographic support for 19-digit PANs
The Crypto Express2 feature offers Card Validation Value (CVV) generation and verification services for 19-digit PANs. Industry practices for use of CVV are moving to base CVV computations on a 19-digit PAN instead of the 13-digit and 16-digit PANs currently in use and supported by ICSF. ICSF and Crypto Express2 support use of the 19-digit PAN in the CVV generation and verification services (CSNBCSG and CSNBCSV, respectively).
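The CVV computation itself (CSNBCSG/CSNBCSV) runs inside the secure coprocessor and is not reproduced here, but 19-digit PAN handling can be illustrated with the standard Luhn check-digit validation that PANs of any supported length (13, 16, or 19 digits) must pass. Illustrative sketch only; the sample PAN below is synthetic:

```python
# Luhn check-digit validation for a Personal Account Number (PAN).
# This is the public ISO/IEC 7812 check, not the keyed CVV algorithm.

def luhn_valid(pan: str) -> bool:
    total = 0
    # Double every second digit from the right; subtract 9 if > 9.
    for i, c in enumerate(reversed(pan)):
        d = int(c)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

pan = "1234567890123456785"   # synthetic 19-digit example
print(luhn_valid(pan))        # True
```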
Support of CVV generation and verification services for 19-digit PANs, an anti-fraud security feature, is provided by the Crypto Express2 feature on the System z10 EC, z9 EC, z9 BC and z990 servers and is supported by z/OS and by z/VM for z/OS guest exploitation.

Enabling use of less than 512-bit keys for clear key RSA operations
The Crypto Express2 feature supports applications that require clear key RSA operations using keys less than 512 bits, including ICSF callable services and their corresponding verbs: Digital Signature Verify (CSNDDSV), Public Key Encrypt (CSNDPKE), and Public Key Decrypt (CSNDPKD). All other ICSF callable services that require a Crypto Express2 feature continue to require keys of more than 511 bits.

Enabling the lower limit for clear key RSA operations may allow the migration of some additional cryptographic applications to z10 EC, z9 EC, z9 BC, and z990 servers without requiring the applications to be rewritten.

Remote Loading of Initial ATM Keys
Typically, a new ATM has none of the financial institution's keys installed. Remote Key Loading refers to the process of loading Data Encryption Standard (DES) keys to Automated Teller Machines (ATMs) from a central administrative site without the need for personnel to visit each machine to manually load DES keys. Traditionally, this has been done by manually loading each of the two clear text key parts individually and separately into ATMs. Manual entry of keys is one of the most error-prone and labor-intensive activities that occur during an installation, making it expensive for the banks and financial institutions.

Remote Key Loading Benefits
• Provides a mechanism to load initial ATM keys without the need to send technical staff to ATMs.
• Reduces downtime due to key entry errors.
• Reduces service call and key management costs.
• Improves the ability to manage ATM conversions and upgrades.

Integrated Cryptographic Service Facility (ICSF), together with Crypto Express2, supports the basic mechanisms in Remote Key Loading. The implementation offers a secure bridge between the highly secure Common Cryptographic Architecture (CCA) environment and the various formats and encryption schemes offered by the ATM vendors. The following ICSF services are offered for Remote Key Loading:
• Trusted Block Create (CSNDTBC) – This callable service is used to create a trusted block containing a public key and some processing rules.
• Remote Key Export (CSNDRKX) – This callable service uses the trusted block to generate or export DES keys for local use and for distribution to an ATM or other remote device.

Refer to the Application Programmer's Guide, SA22-7522, for additional details.

Improved Key Exchange With Non-CCA Cryptographic Systems
IBM Common Cryptographic Architecture (CCA) employs Control Vectors to control usage of cryptographic keys. Non-CCA systems use other mechanisms, or may use keys that have no associated control information. This enhancement provides the ability to exchange keys between CCA systems and systems that do not use Control Vectors. Additionally, it allows the CCA system owner to define permitted types of key import and export, which can help to prevent uncontrolled key exchange that can open the system to an increased threat of attack.

These enhancements are exclusive to System z10 and System z9 and are supported by z/OS and by z/VM for z/OS guest exploitation.

ISO 16609 CBC Mode T-DES Enhancement
ISO 16609 CBC Mode T-DES MAC supports the requirements for message authentication using symmetric techniques. The Integrated Cryptographic Service Facility (ICSF) will use the following callable services to access the ISO 16609 CBC Mode T-DES MAC enhancement in the cryptographic coprocessor:
• MAC Generate (CSNBMGN)
• MAC Verify (CSNBMVR)
• Digital Signature Verify (CSNDDSV)

ISO 16609 CBC mode T-DES MAC is accessible through ICSF function calls made in the Cryptographic Adapter Segment 3 Common Cryptographic Architecture (CCA) code. This enhancement is exclusive to System z10 and System z9 and supported by z/OS 1.7 or higher.

System z10 Cryptographic migration
• The Crypto Express2 feature is supported on the System z10 and can be carried forward on an upgrade to the System z10.
• Customers must use TKE 5.2 workstations to control the System z10.
• TKE 5.0 and 5.1 workstations (FC 0839) may be used to control z9 EC, z9 BC, and z990 servers.

On Demand Capabilities

Capacity on Demand – Temporary Capacity
Just-in-time deployment of System z10 EC Capacity on Demand (CoD) is a new approach from previous System z and zSeries servers. This new architecture allows:
• Up to four temporary records to be installed on the CEC and active at any given time
• Up to 200 temporary records to be staged on the SE
• Variability in the amount of resources that can be activated per record
• The ability to control and update records independent of each other
• Improved query functions to monitor the state of each record
• The ability to add capabilities to individual records concurrently, eliminating the need for constant ordering of new temporary records for different user scenarios
• Permanent LIC-CC upgrades to be performed while temporary resources are active

These capabilities allow you to access and manage processing capacity on a temporary basis, providing increased flexibility for on demand environments. The CoD offerings are built from a common Licensed Internal Code – Configuration Code (LIC-CC) record structure. These Temporary Entitlement Records (TERs) contain the information necessary to control which type of resource can be accessed and to what extent, how many times, and for how long, and under what condition – test or real workload. Use of this information gives the different offerings their personality.
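The Temporary Entitlement Record described above can be sketched as a simple data structure: which resource types may be activated, to what extent, how many times, for how long, and under what condition. This is an illustrative model only, not the LIC-CC record format; all field names are hypothetical:

```python
# Illustrative model of a Temporary Entitlement Record (TER).
from dataclasses import dataclass

@dataclass
class TemporaryEntitlementRecord:
    offering: str                  # "CBU", "CPE", or "On/Off CoD"
    engine_limits: dict            # e.g. {"CP": 4, "IFL": 2}
    max_activations: int           # e.g. a CBU record defaults to 5 tests
    max_days_per_activation: int   # e.g. 10 for a CBU test activation
    test_only: bool                # test vs. real-workload condition

    activations_used: int = 0

    def can_activate(self) -> bool:
        return self.activations_used < self.max_activations

cbu = TemporaryEntitlementRecord("CBU", {"CP": 4, "IFL": 2},
                                 max_activations=5,
                                 max_days_per_activation=10,
                                 test_only=True)
print(cbu.can_activate())  # True
```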
Three temporary-capacity offerings will be available on business conditions do not justify a permanent upgrade.
February 26, 2008: An On/Off CoD record allows you to temporarily add CP
capacity or any and all specialty engine types (zIIP, zAAP,
Capacity Back Up (CBU) – Temporary access to dormant
SAP, IFL, ICF) up to the following limits:
processing units (PUs), intended to replace capacity lost
within the enterprise due to a disaster. CP capacity or any • The quantity of temporary CP capacity ordered is limited
and all specialty engine types (zIIP, zAAP, SAP, IFL, ICF) by the quantity of purchased CP capacity (permanently
can be added up to what the physical hardware model active plus unassigned).

can contain for up to 10 days for a test activation or 90 • The quantity of temporary IFLs ordered is limited by

days for a true disaster recovery. Each CBU record comes quantity of purchased IFLs (permanently active plus
unassigned).
with a default of five test activations. Additional test activa-
• Temporary use of unassigned CP capacity or unas-
tions may be ordered in groups of five but a record can not
signed IFLs will not incur a hardware charge.
contain more than 15 test activations. Each CBU record
• The quantity of permanent zIIPs plus temporary zIIPs
provides the entitlement to these resources for a fixed
period of time, after which the record is rendered useless. This time period can span from one to five years and is specified through ordering quantities of CBU years.

Capacity for Planned Events (CPE) – Temporary access to dormant PUs, intended to replace capacity lost within the enterprise due to a planned event such as a facility upgrade or system relocation. This is a new offering and is available only on the System z10 EC. CPE is similar to CBU in that it is intended to replace lost capacity; however, it differs in its scope and intent. Where CBU addresses disaster recovery scenarios that can take up to three months to remedy, CPE is intended for short-duration events lasting up to three days, maximum. Each CPE record, once activated, gives you access to all dormant PUs on the machine, which can be configured in any combination of CP capacity or specialty engine types (zIIP, zAAP, SAP, IFL, ICF).

On/Off Capacity on Demand (On/Off CoD) – Temporary access to dormant PUs, intended to augment the existing capacity of a given system. On/Off CoD helps you contain workload spikes that may exceed permanent capacity such that Service Level Agreements cannot be met. The following limits apply to temporary engines:

• The quantity of permanent zIIPs plus temporary zIIPs can not exceed the quantity of purchased (permanent plus unassigned) CPs plus temporary CPs, and the quantity of temporary zIIPs can not exceed the quantity of permanent zIIPs.
• The quantity of permanent zAAPs plus temporary zAAPs can not exceed the quantity of purchased (permanent plus unassigned) CPs plus temporary CPs, and the quantity of temporary zAAPs can not exceed the quantity of permanent zAAPs.
• The quantity of temporary ICFs ordered is limited by the quantity of permanent ICFs, as long as the sum of permanent and temporary ICFs is less than or equal to 16.

Although the System z10 EC will allow up to four temporary records of any type to be installed, only one temporary On/Off CoD record may be active at any given time. An On/Off CoD record may be active while other temporary records are active.

Capacity provisioning – An installed On/Off CoD record is a necessary prerequisite for automated control of temporary capacity through z/OS MVS Capacity Provisioning. z/OS MVS Capacity Provisioning allows you to set up rules defining the circumstances under which additional capacity should be provisioned in order to fulfill a specific

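The temporary-engine ordering rules above can be expressed as a handful of arithmetic checks. The sketch below is illustrative only (the function and message names are our own, not an IBM ordering API); it validates a proposed order against the zIIP, zAAP, and ICF limits as stated.

```python
def validate_temporary_order(perm_cps, unassigned_cps, temp_cps,
                             perm_ziips, temp_ziips,
                             perm_zaaps, temp_zaaps,
                             perm_icfs, temp_icfs):
    """Check a proposed temporary-capacity order against the
    documented z10 EC limits. Illustrative sketch only."""
    errors = []
    # "purchased (permanent plus unassigned) CPs"
    purchased_cps = perm_cps + unassigned_cps
    # zIIPs: permanent + temporary zIIPs <= purchased CPs + temporary CPs,
    # and temporary zIIPs <= permanent zIIPs.
    if perm_ziips + temp_ziips > purchased_cps + temp_cps:
        errors.append("zIIPs exceed purchased CPs plus temporary CPs")
    if temp_ziips > perm_ziips:
        errors.append("temporary zIIPs exceed permanent zIIPs")
    # zAAPs: the same pair of rules.
    if perm_zaaps + temp_zaaps > purchased_cps + temp_cps:
        errors.append("zAAPs exceed purchased CPs plus temporary CPs")
    if temp_zaaps > perm_zaaps:
        errors.append("temporary zAAPs exceed permanent zAAPs")
    # ICFs: temporary ICFs limited by permanent ICFs, and the sum of
    # permanent and temporary ICFs must not exceed 16.
    if temp_icfs > perm_icfs or perm_icfs + temp_icfs > 16:
        errors.append("ICF limits exceeded")
    return errors  # an empty list means the order passes these checks
```

An order for three temporary zIIPs against only two permanent zIIPs, for example, fails the second zIIP rule even when enough CPs are present.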
32
business need. The rules are based on criteria such as: a specific application, the maximum additional capacity that should be activated, and time and workload conditions. This support provides a fast response to capacity changes and ensures sufficient processing power will be available with the least possible delay, even if workloads fluctuate. See z/OS MVS Capacity Provisioning User's Guide (SA33-8299) for more information.

On/Off CoD Test – On/Off CoD allows for a no-charge test. No IBM charges are assessed for the test, including IBM charges associated with temporary hardware capacity, IBM software, or IBM maintenance. This test can be used to validate the processes to download, stage, install, activate, and deactivate On/Off CoD capacity nondisruptively. Each On/Off CoD-enabled server is entitled to only one no-charge test. This test may last up to a maximum duration of 24 hours, commencing upon the activation of any capacity resources contained in the On/Off CoD record. Activation levels of capacity may change during the 24-hour test period. The On/Off CoD test automatically terminates at the end of the 24-hour period. In addition to validating the On/Off CoD function within your environment, you may choose to use this test as a training session for your personnel who are authorized to activate On/Off CoD.

Capacity on Demand – Permanent Capacity

Customer Initiated Upgrade capacity – Technology on demand

Customer Initiated Upgrade (CIU) facility: When your business needs additional capacity quickly, Customer Initiated Upgrade (CIU) is designed to deliver it. CIU is designed to allow you to respond to sudden increased capacity requirements by requesting a System z10 EC PU and/or memory upgrade via the Web, using IBM Resource Link™, and downloading and applying it to your System z10 EC server using your system's Remote Support connection. Further, with the Express option on CIU, an upgrade may be made available for installation as fast as within a few hours after order submission.

Permanent upgrades: Orders (MESs) of all PU types and memory for System z10 EC servers that can be delivered by Licensed Internal Code, Control Code (LIC-CC) are eligible for CIU delivery. CIU upgrades may be performed up to the maximum available processor and memory resources on the installed server, as configured. While capacity upgrades to the server itself are concurrent, your software may not be able to take advantage of the increased capacity without performing an Initial Program Load (IPL).

Plan Ahead and Concurrent Conditioning

Concurrent Conditioning configures a system for hot plugging of I/O based on a future target configuration. Concurrent Conditioning of the z10 EC server I/O is simplified by the fact that all I/O cards plugging into the z10 EC I/O cage are hot-pluggable. I/O cages, however, cannot be installed concurrently to a z10 EC server, which means that the only I/O to be conditioned is the I/O cage itself. The question of whether or not to concurrently condition a cage is a very important consideration, especially with the rapid change in the IT environment as well as the technology.

The Plan Ahead process can easily identify the customer configuration that is required to meet future needs. The result of Concurrent Conditioning is the capability to enable a flexible IT infrastructure that can accommodate unpredictable growth in a low-risk, nondisruptive way. Depending on the required Concurrent Conditioning, there should be minimal cost associated with dormant z10 EC capacity. This creates an attractive option for businesses to quickly respond to changing environments, bringing new applications online or growing existing applications without disrupting users.

33
Reliability, Availability, and Security

The System z10 EC is designed to deliver the industry-leading reliability, availability and security our customers have come to expect from System z servers. System z10 EC RAS is designed to reduce all sources of outages: unscheduled, scheduled and planned. Planned outages are further reduced by eliminating pre-planning requirements. These features are designed to reduce the need for a Power-on-Reset (POR) and help eliminate the need to deactivate/activate/IPL a logical partition.

Availability Functions

With the z10 EC, significant steps have been taken in the area of server availability, with a focus on reducing pre-planning requirements. Pre-planning requirements are minimized by delivering and reserving 16 GB for HSA so that the maximum configuration capabilities can be exploited, and by the ability to seamlessly perform such changes as the creation of LPARs, the inclusion of logical subsystems, changing logical processor definitions in an LPAR, and the introduction of cryptography into an LPAR. Features that carry forward from previous generation processors include the ability to dynamically enable I/O and the dynamic swapping of processor types.

Enhanced Book Availability


With proper planning, z10 EC is designed to allow a
single book, in a multi-book server, to be non-disrup-
tively removed from the server and re-installed during an
upgrade or repair action. To minimize the effect on current
workloads and applications, you should ensure that you
have sufficient inactive physical resources on the remain-
ing books to complete a book removal.

For customers configuring for maximum availability, we recommend purchasing models with one additional book.
To ensure you have the appropriate level of memory, you
may want to consider the selection of the Flexible Memory
Option features to provide additional resources when
completing an Enhanced Book Availability action or when
considering plan ahead options for the future. Enhanced
Book Availability may also provide benefits should you
choose not to configure for maximum availability. In these
cases, you should have sufficient inactive resources on
the remaining books to contain critical workloads while
completing a book replacement. Contact your IBM rep-
resentative to help you determine and plan the proper
configuration to support your workloads when using non-
disruptive book maintenance.

34
Enhanced Book Availability is an extension of the support for Concurrent Book Add (CBA) delivered on z990. CBA makes it possible to concurrently upgrade a server by integrating a second, third, or fourth book into the server without necessarily affecting application processing.

The following scenarios, prior to the availability of EBA, would require a disruptive customer outage. With EBA, these upgrade and repair procedures can be performed concurrently, without interfering with customer operations.

Concurrent Physical Memory Upgrade
Allows one or more physical memory cards on a single book to be added, or an existing card to be upgraded, increasing the amount of physical memory in the system.

Concurrent Physical Memory Replacement
Allows one or more defective memory cards on a single book to be replaced concurrent with the operation of the system.

Concurrent Defective Book Replacement
Allows the concurrent repair of a defective book when that book is operating degraded due to errors such as multiple defective processors.

Enhanced Book Availability is exclusive to z10 EC and z9 EC.

Flexible Memory Option
Flexible memory was first introduced on the z9 EC as part of the design changes and offerings to support enhanced book availability. Flexible memory provides the additional resources to maintain a constant level of memory when replacing a book. On z10 EC, the additional resources required for the flexible memory configurations are provided through the purchase of preplanned memory features along with the purchase of your memory entitlement. In most cases, this implementation provides a lower-cost solution compared to z9 EC. Flexible memory configurations are available on Models E26, E40, E56, and E64 only, and range from 32 GB to 1136 GB, model dependent.

Redundant I/O Interconnect
The z10 EC with Redundant I/O Interconnect is designed to allow you to replace a book, or respond to a book failure, and retain connectivity to resources. In the event of a failure or a customer-initiated action such as the replacement of an HCA2-C fanout card or book, the z10 EC is designed to provide access to your I/O devices through another InfiniBand Multiplexer (IFB-MP) to the affected I/O domains. This is exclusive to System z10 EC and z9 EC.

Enhanced Driver Maintenance
One of the greatest contributors to downtime during planned outages is Licensed Internal Code (LIC) updates. When properly configured, the z10 EC is designed to permit select planned LIC updates. A new query function has been added to validate LIC EDM requirements in advance. Enhanced programmatic internal controls have been added to help eliminate manual analysis by the service team of certain exception conditions. On the System z9, the PR/SM code had a restriction of only one 'From' EDM level. With the z10 EC, the PR/SM code has been enhanced to allow multiple EDM 'From' sync points. Automatic apply of EDM licensed internal change requirements is now limited to EDM and the licensed internal code changes update process. Previously, these requirements were also being applied during actions like System Information and Alternate Support Element mirroring.

35
Dynamic Oscillator Switchover
The z10 EC has two oscillator cards, a primary and a backup. In most cases, should a failure occur on the primary oscillator card, the backup can detect it, switch over, and provide the clock signal to the system transparently, with no system outage. Previously, in the event of a failure of the active oscillator, a system outage would occur; the subsequent system Power On Reset (POR) would select the backup, and the system would resume operation. Dynamic Oscillator Switchover is exclusive to System z10 EC and System z9.

Concurrent Memory Upgrade
This function allows adding memory concurrently, up to the maximum amount physically installed. In addition, the Enhanced Book Availability function also enables a memory upgrade to an installed z10 EC book in a multi-book server.

Concurrent Maintenance
Concurrent Service for I/O features: All the features that plug into the I/O Cage are able to be added and replaced concurrent with system operation. This virtually eliminates any need to schedule an outage for service to upgrade the I/O subsystem on this cage.

Upgrade for Coupling Links: The z10 EC has concurrent maintenance for the ISC-3 daughter card. Also, Coupling Links can be added concurrently. This eliminates a need for scheduled downtime in the demanding sysplex environment.

Cryptographic feature: The Crypto Express2 feature plugs into the I/O cage and can be added or replaced concurrently with system operation.

Redundant Cage Controllers: The Power and Service Control Network features redundant Cage Controllers for Logic and Power control. This design enables nondisruptive service to the controllers and virtually eliminates customer scheduled outages.

Auto-Switchover for Support Element (SE): The z10 EC has two Support Elements. In the event of failure on the Primary SE, the switchover to the backup is handled automatically. There is no need for any intervention by the Customer or Service Representative.

Transparent Sparing
The z10 EC offers two PUs reserved as spares per server. In the case of processor failure, these spares are used for transparent sparing. On the z10 EC, sparing happens on a core granularity rather than chip granularity as on z990 and z9 (for which "chip" equaled "2 cores").

Service Enhancements
z10 EC service enhancements designed to avoid scheduled outages include:
• Concurrent firmware fixes
• Concurrent driver upgrades
• Concurrent parts replacement
• Concurrent hardware upgrades
• DIMM FRU indicators
• Single processor core checkstop
• Single processor core sparing
• Point-to-Point SMP Fabric (not a ring)
• FCP end-to-end checking
• Hot swap of ICB-4 and InfiniBand hub cards
• Redundant 100 Mb Ethernet service network with VLAN

36
Environmental Enhancements

Power and cooling discussions have entered the budget planning of every IT environment. As energy prices have risen and utilities have restricted the amount of power usage, it is important to review the role of the servers to balance IT spending.

Workload consolidation can help to balance IT budget spending. The z10 EC is designed to reduce energy usage by greater than 80% and save floor space by greater than 85% when used to consolidate x86 servers***. With increased capacity, the z10 EC virtualization capabilities can help to support hundreds of virtual servers in a single 2.83 square meter footprint.

Power Monitoring
The "mainframe gas gauge" feature introduced on the System z9 servers provides power and thermal information via the System Activity Display (SAD) on the Hardware Management Console and will be available on the z10 EC, giving a point-in-time reference of the information. The current total power consumption in watts and BTU/hour, as well as the air input temperature, will be displayed.

Power Estimation Tool
Only the System z10 EC and System z9 servers provide a tool, available on IBM Resource Link, which gives the user an estimate of the anticipated power consumption of a particular machine model and its associated configuration. A user inputs the machine model, memory, and I/O configuration, and the tool outputs an estimate of the power requirements needed for this system.

IBM Systems Director Active Energy Manager
IBM Systems Director Active Energy Manager (AEM) is a building block which enables customers to manage the actual power consumption and resulting thermal loads that IBM servers place in the data center. On the z10 EC, power monitoring information can be fed into IBM Systems Director AEM for Linux on System z, a plug-in feature of IBM Director. AEM for Linux on System z allows tracking of trends for the z10 EC as well as multiple server platforms. With this trend analysis, a data center administrator can properly size power inputs and more accurately plan data center consolidation or modification projects.

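The SAD power display reports consumption in both watts and BTU/hour; the two figures are related by the standard conversion of roughly 3.412 BTU/hour per watt. The helper below is a sketch of our own showing the arithmetic, not code from the HMC.

```python
# 1 watt = 3.412 BTU/hour (standard power-to-heat conversion factor)
BTU_PER_HOUR_PER_WATT = 3.412

def watts_to_btu_per_hour(watts):
    """Convert a power reading in watts to BTU/hour, the two units
    shown side by side on the System Activity Display."""
    return watts * BTU_PER_HOUR_PER_WATT
```

For example, a 10 kW reading corresponds to roughly 34,120 BTU/hr of heat load that the data center cooling must remove.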
37
Parallel Sysplex Cluster Technology

Parallel Sysplex clustering is designed to bring the power of parallel processing to business-critical System z10 EC, System z9 and z990 applications. A Parallel Sysplex cluster consists of up to 32 z/OS images coupled to one or more Coupling Facilities (CFs or ICFs) using high-speed specialized links for communication. The Coupling Facilities, at the heart of the Parallel Sysplex cluster, enable high-speed, read/write data sharing and resource sharing among all the z/OS images in a cluster. All images are also connected to a Sysplex Timer®, or synchronized by implementing the Server Time Protocol (STP), so that all events can be properly sequenced in time.

Parallel Sysplex Resource Sharing enables multiple system resources to be managed as a single logical resource shared among all of the images. Some examples of resource sharing include JES2 Checkpoint, GRS "star," and Enhanced Catalog Sharing; all of which provide simplified systems management, increased performance and/or scalability.

Although there is significant value in a single-footprint and multi-footprint environment with resource sharing, those customers looking for high availability must move on to a database data sharing configuration. With the Parallel Sysplex environment, combined with the Workload Manager and CICS TS, DB2 or IMS™, incoming work can be dynamically routed to the z/OS image most capable of handling the work. This dynamic workload balancing, along with the capability to have read/write access to data from anywhere in the Parallel Sysplex cluster, provides scalability and availability. When configured properly, a Parallel Sysplex cluster is designed with no single point of failure and can provide customers with near-continuous application availability over planned and unplanned outages.

Coupling Facility Control Code (CFCC) Level 15 is available on System z10 EC, System z9 EC and z9 BC.

With the introduction of the z10 EC, we have the concept of n-2 on the hardware as well as the software. The z10 EC participates in a Sysplex with System z9, z990 and z890 only, and currently supports z/OS 1.7 and higher.

For detailed information on IBM's Parallel Sysplex technology, visit our Parallel Sysplex home page at http://www-03.ibm.com/systems/z/pso/.

Coupling Facility Configuration Alternatives

IBM offers multiple options for configuring a functioning Coupling Facility:

• Standalone Coupling Facility: The standalone CF provides the most "robust" CF capability, as the CPC is wholly dedicated to running the CFCC microcode: all of the processors, links and memory are for CF use only. A natural benefit of this characteristic is that the standalone CF is always failure-isolated from exploiting z/OS software and from the server that z/OS is running on, for environments without System-Managed CF Structure Duplexing. While there is no unique standalone coupling facility model offered with the z10 EC, customers can achieve the same physically isolated environment as on prior mainframe families by ordering a z10 EC, z9 EC, z9 BC, and z990 with PUs characterized as Internal Coupling Facilities (ICFs). There are no software charges associated with such a configuration.

38
• Internal Coupling Facility (ICF): Customers considering clustering technology can get started with Parallel Sysplex technology at a lower cost by using an ICF instead of purchasing a standalone Coupling Facility. An ICF feature is a processor that can only run Coupling Facility Control Code (CFCC) in a partition. Since CF LPARs on ICFs are restricted to running only CFCC, there are no IBM software charges associated with ICFs. ICFs are ideal for Intelligent Resource Director and resource sharing environments as well as for data sharing environments where System-Managed CF Structure Duplexing is exploited.

System-Managed CF Structure Duplexing

System-Managed Coupling Facility (CF) Structure Duplexing provides a general-purpose, hardware-assisted, easy-to-exploit mechanism for duplexing CF structure data. This provides a robust recovery mechanism for failures such as loss of a single structure or CF, or loss of connectivity to a single CF, through rapid failover to the backup instance of the duplexed structure pair.

[Figure: A robust failure recovery capability. Note: An example of two systems in a Parallel Sysplex cluster with CF Duplexing — z/OS and ICF images on two servers (z10 EC/z9 EC/z9 BC/z990/z890).]

Parallel Sysplex Coupling Connectivity

The Coupling Facilities communicate with z/OS images in the Parallel Sysplex environment over specialized high-speed links. As processor performance increases, it is important to also use faster links so that link performance does not become constrained. The performance, availability and distance requirements of a Parallel Sysplex environment are the key factors that will identify the appropriate connectivity option for a given configuration. When connecting between System z10 EC, System z9 and z990 servers, the links must be configured to operate in Peer Mode. This allows for higher data transfer rates to and from the Coupling Facilities. The peer link acts simultaneously as both a CF Sender and CF Receiver link, reducing the number of links required. Larger and more numerous data buffers and improved protocols may also improve long distance performance.

[Figure: z10 EC Parallel Sysplex coupling connectivity — PSIFB at 6 GBps up to 150 meters to another z10 EC; PSIFB at 3 GBps up to 150 meters to a z9 EC or z9 BC Dedicated CF only; ICB-4 (MBA fanout, new ICB-4 cable) at 2 GBps up to 10 meters to z10 EC, z9 EC, z9 BC, z990, z890; ISC-3 at 2 Gbps up to 100 km, driven through an HCA2-C fanout and IFB-MP in the I/O cage, to z10 EC, z9 EC, z9 BC, z990, z890.]

The IBM System z10 EC introduces InfiniBand coupling link technology designed to provide increased bandwidth at greater cable distances. At introduction, InfiniBand coupling links complement, and do not replace, the current coupling links (ICB-4, ISC-3), which continue to work in current System z and zSeries server environments.

Other advantages of Parallel Sysplex using InfiniBand (PSIFB):

• InfiniBand coupling links provide a new ability to define up to 16 CHPIDs on a single PSIFB port, allowing physical coupling links to be shared by multiple sysplexes. This also provides additional subchannels for Coupling Facility communication, improving scalability and reducing contention in heavily utilized system configurations. It also allows for one CHPID to be directed to one CF, and another CHPID directed to another CF on the same target server, using the same port.

39
• Like other coupling links, external InfiniBand coupling links are also valid to pass time synchronization signals for Server Time Protocol (STP). Therefore the same coupling links can be used to exchange timekeeping information and Coupling Facility messages in a Parallel Sysplex environment.
• The IBM System z10 EC also takes advantage of InfiniBand as a higher-bandwidth replacement for the Self-Timed Interconnect (STI) I/O interface features found in prior System z servers.

The IBM System z10 EC will support up to 32 PSIFB links, as compared to 16 PSIFB links on System z9 servers. For either z10 EC or z9, the combined total of PSIFB and ICB-4 links must be less than or equal to 32.

InfiniBand coupling links are CHPID type CIB.

Type    Description          Use                      Link data rate    Distance                  z10 Max links  z10 Max CHPIDs
PSIFB*  12x IB-DDR           z10 to z10 /             6 GBps /          150 meters (492 ft)***    32*            64 CHPIDs
                             z10 to z9 CF             3 GBps**
IC      Internal Coupling    Internal                 Internal speeds   N/A                       32             64 CHPIDs
        Channel              communication
ICB-4   Copper connection    z10 EC, z9 EC, z9 BC,    2 GBps            10 meters*** (33 feet)    16             64 CHPIDs
        between OS and CF    z990, z890
ISC-3   Fiber connection     z10 EC, z9 EC, z9 BC,    2 Gbps            10 km unrepeated          48             64 CHPIDs
        between OS and CF    z990, z890                                 (6.2 miles), 100 km repeated

• The maximum number of Coupling Links combined cannot exceed 64 per server (PSIFB, IC, ICB-4, ISC-3). There is a maximum of 64 Coupling CHPIDs, including CIB, per server.
• For each MBA fanout installed for ICB-4s, the number of possible customer HCA fanouts is reduced by one.

* Each link supports definition of multiple CIB CHPIDs, up to 16 per fanout
** z10 EC negotiates to 3 GBps (12x IB-SDR) when connected to a System z9 Dedicated CF
*** 3 meters (10 feet) reserved for internal routing and strain relief

Coupling Link Connectivity

The z10 EC supports the following Coupling Link features:

• Parallel Sysplex InfiniBand (PSIFB), when available, will connect a z10 EC to a z10 EC at 6 GBps and a z10 EC to a z9 Dedicated CF at 3 GBps. This is point-to-point connectivity supporting up to 150 meters (492 ft).
• Integrated Cluster Bus-4 (ICB-4), in Peer mode only. ICB-4 connects a z10 EC to a z9 EC, z9 BC, z990 or z890. The maximum distance between the two servers is 7 meters (maximum cable length is 10 meters). The link bandwidth is 2 GBps. The maximum number of ICB-4 links is 16 per z10 EC. ICB-4 supports transmission of STP timekeeping information. ICB-4 is not supported on z10 EC Model E64.
• Inter-System Channel-3 (ISC-3), in Peer mode only. ISC-3 links can be used to connect to other System z servers. They are fiber links that support a maximum distance of 10 km, 20 km with RPQ 8P2197, and 100 km with Dense Wave Division Multiplexing (DWDM). ISC-3s operate in single mode only. Link bandwidth is 200 MBps for distances up to 10 km, and 100 MBps when RPQ 8P2197 is installed. Each port operates at 2 Gbps. Ports are ordered in increments of one. The maximum number of ISC-3 links per z10 EC is 48. ISC-3 supports transmission of STP timekeeping information.
• Internal Channel (IC), in Peer mode. The Internal Coupling channel emulates the Coupling Links, providing connectivity between images within a single server. No hardware is required; however, a minimum of two CHPID numbers must be defined in the IOCDS. The maximum number of IC links is 32. IC links provide the fastest Parallel Sysplex connectivity.

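The per-type and combined link limits above reduce to a few comparisons. The following sketch, with names of our own choosing rather than any IBM configuration tool, checks a proposed z10 EC coupling-link count against those documented limits.

```python
# Per-feature maxima from the coupling link table above.
MAX_PER_TYPE = {"psifb": 32, "ic": 32, "icb4": 16, "isc3": 48}

def check_coupling_config(psifb=0, ic=0, icb4=0, isc3=0):
    """Check a proposed z10 EC coupling-link configuration against the
    documented limits. Illustrative sketch, not an IBM tool."""
    counts = {"psifb": psifb, "ic": ic, "icb4": icb4, "isc3": isc3}
    problems = []
    # Each link type has its own maximum.
    for kind, n in counts.items():
        if n > MAX_PER_TYPE[kind]:
            problems.append(f"too many {kind} links ({n} > {MAX_PER_TYPE[kind]})")
    # PSIFB and ICB-4 links together must not exceed 32.
    if psifb + icb4 > 32:
        problems.append("PSIFB plus ICB-4 links exceed 32")
    # All coupling links combined must not exceed 64 per server.
    if sum(counts.values()) > 64:
        problems.append("total coupling links exceed 64 per server")
    return problems  # empty list means the configuration fits the limits
```

A configuration of 16 PSIFB, 16 ICB-4, and 32 ISC-3 links, for instance, sits exactly at both the 32-link PSIFB/ICB-4 limit and the 64-link overall limit.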
40
Server Time Protocol (STP)
Server Time Protocol (STP) is designed to provide the capability for multiple servers and Coupling Facilities to maintain time synchronization with each other, without requiring an IBM Sysplex Timer.

Server Time Protocol is designed to help:
• Reduce cost
• Simplify your infrastructure
• Improve systems management
• Improve support for Geographically Dispersed Parallel Sysplex™ (GDPS®)
• Improve time synchronization
• Accommodate concurrent migration
• Coexist with a Sysplex Timer based timing network

The Server Time Protocol (STP) feature is designed to be the supported method for maintaining time synchronization between IBM System z10, System z9, z990 and z890 servers and Coupling Facilities (CFs). To enable these servers and CFs for STP, the STP feature (Licensed Internal Code) must be installed and enabled.

STP supports the ability to:
• Initialize the time either manually or by using an External Time Source (ETS). The ETS can be a dial-out time service or a connection to a Network Time Protocol (NTP) server. Accessing an ETS allows the time of the STP network to be set to an international time standard such as Coordinated Universal Time (UTC).
• Initialize the Time Zone offset, Daylight Savings Time (DST) offset and Leap seconds offset.
• Schedule periodic dial-outs to a time service to maintain accurate time. If an NTP server is used as the ETS, no scheduling is required because STP will periodically access the NTP server to maintain accurate time.
• Adjust time by up to +/- 60 seconds. This improves upon the Sysplex Timer's capability of adjusting time by up to +/- 4.999 seconds.

Prior to the introduction of STP, a Sysplex Timer was used to synchronize the time of attached servers in an External Time Reference (ETR) network. STP can help provide functional and economic benefits when compared to the Sysplex Timer. The possible benefits provided by STP are:
• Help eliminate infrastructure requirements, such as energy consumption and floor space, needed to support the Sysplex Timers
• Help eliminate maintenance costs associated with the Sysplex Timers
• Help reduce the fiber optic infrastructure requirements in a multi-site configuration. Dedicated links may not be required to transmit timing information as they are with Sysplex Timers; STP can use existing Coupling links.
• Support a multi-site timing network of up to 100 km without requiring an intermediate site. Previously, an intermediate site was recommended to locate one of the Sysplex Timers when the multi-site sysplex distance exceeded 40 km (25 miles).
• Allow more stringent synchronization between servers and CFs using short communication links, compared to servers and CFs using long distance communication links
• Help improve systems management by providing automatic adjustment of the Daylight Saving Time offset

The STP design introduces a new concept called a Coordinated Timing Network (CTN). A CTN is a collection of servers and Coupling Facilities that are time synchronized to a time value called Coordinated Server Time. The CTN concept was introduced to help meet two key goals of existing IBM System z environments: concurrent migration from an existing ETR network to a timing network using STP, and the ability of servers and CFs that cannot support STP to be synchronized in the same network as servers that support STP (z10 EC, z9 EC, z9 BC, z990, and z890).

41
NTP Client support for STP
If you have specific requirements to provide accurate time relative to some external time standard for data processing applications, you need to consider using the external time source (ETS) function of STP. The ETS function is only available when an STP-only CTN is configured. One of the ways to configure an ETS for STP is to obtain accurate time from an NTP server. Simple Network Time Protocol (SNTP) client support has been added to the STP code on the System z10 and System z9 Support Element (SE) to interface with NTP servers. NTP client support can help meet the requirements of customers who need to provide the same time across heterogeneous platforms in an enterprise. Dialing out provides time accuracy for the System z10 and System z9 platforms only, whereas attaching to an NTP server is designed for time accuracy as well as the same time across heterogeneous platforms.

[Figure: Coordinated Timing Network example — a z10 EC (PTS/CTS, Stratum 1) uses SNTP over the HMC Ethernet switch and corporate network to access a Stratum 1 NTP server; a z9 BC (BTS, Stratum 2) and a z990 (Arbiter, Stratum 2) participate in the CTN; non-System z time synchronized servers and a remote HMC (browser) sit on the corporate network.]

Even though the z990 and z890 do not support configuration of NTP as an ETS, they can participate in an STP-only CTN that has a System z10 or System z9 configured to use NTP as an ETS.

For more details, visit the STP Web site at: www-03.ibm.com/systems/z/pso/stp.html.

Message Time Ordering (Sysplex Timer Connectivity to Coupling Facilities)
As processor and Coupling Facility link technologies have improved, the requirement for time synchronization tolerance between systems in a Parallel Sysplex environment has become ever more rigorous. In order to enable any exchange of timestamped information between systems in a sysplex involving the Coupling Facility to observe the correct time ordering, time stamps are now included in the message-transfer protocol between the systems and the Coupling Facility. Therefore, when a Coupling Facility is configured on any System z10 or System z9, the Coupling Facility will require connectivity to the same 9037 Sysplex Timer or Server Time Protocol (STP) configured Coordinated Timing Network (CTN) that the systems in its Parallel Sysplex cluster are using for time synchronization. If the ICF is on the same server as a member of its Parallel Sysplex environment, no additional connectivity is required, since the server already has connectivity to the Sysplex Timer.

However, when an ICF is configured on any z10 EC which does not host any systems in the same Parallel Sysplex cluster, it is necessary to attach the server to the 9037 Sysplex Timer or implement STP.

Parallel Sysplex Professional Services
IBM provides extensive services to assist customers in migrating their environments and applications to benefit from Parallel Sysplex clustering. A basic set of IBM services is designed to help address planning and early implementation requirements. These services can help you reduce the time and costs of planning a Parallel Sysplex environment and moving it into production.

IBM Global Services has a variety of IT and GDPS Services. See http://www-03.ibm.com/systems/z/pso/services.html.

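An SNTP client like the one described above ultimately converts the NTP on-wire timestamp, which counts seconds from January 1, 1900 UTC per the SNTP specification (RFC 4330), into conventional calendar time. The fragment below illustrates just that conversion; it is a self-contained sketch, not Support Element code.

```python
from datetime import datetime, timedelta, timezone

# NTP timestamps count seconds from the NTP epoch, 1900-01-01 00:00:00 UTC.
NTP_EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)

def ntp_seconds_to_utc(ntp_seconds):
    """Convert an NTP timestamp (seconds since the 1900 epoch)
    to a timezone-aware UTC datetime."""
    return NTP_EPOCH + timedelta(seconds=ntp_seconds)
```

The Unix epoch (1970-01-01) falls 2,208,988,800 seconds into the NTP era, so `ntp_seconds_to_utc(2208988800)` yields 1970-01-01 00:00:00 UTC; a client applies the same arithmetic to the timestamp fields returned by an NTP server.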
42
GDPS
GDPS is a multi-site or single-site end-to-end application availability solution that provides the capability to manage remote copy configuration and storage subsystems (including IBM TotalStorage®), to automate Parallel Sysplex operation tasks, and to perform failure recovery from a single point of control.

GDPS helps automate recovery procedures for planned and unplanned outages to provide near-continuous availability and disaster recovery capability.

For additional information on GDPS, visit: http://www-03.ibm.com/systems/z/gdps/.

Fiber Quick Connect for FICON LX Environments

Fiber Quick Connect (FQC), an optional feature on z10 EC, is now being offered for all FICON LX (single mode fiber) channels, in addition to the current support for ESCON. FQC is designed to significantly reduce the amount of time required for on-site installation and setup of fiber optic cabling. FQC facilitates adds, moves, and changes of ESCON and FICON LX fiber optic cables in the data center, and may reduce fiber connection time by up to 80%.

FQC is for factory installation of IBM Facilities Cabling Services – Fiber Transport System (FTS) fiber harnesses for connection to channels in the I/O cage. FTS fiber harnesses enable connection to FTS direct-attach fiber trunk cables from IBM Global Technology Services.

Note: FQC supports all of the ESCON channels and all of the FICON LX channels in all of the I/O cages of the server.

43
System z10 EC Configuration Details

Maximum of 1024 CHPIDs; 3 I/O cages (28 slots each) =
84 I/O slots.

All features that require I/O slots, and ICB-4 features, are
included in the following table:

Feature          Min #     Max #     Maximum          Increments       Purchase
                 features  features  Connections      per Feature      Increments
ESCON, 16-port   0 (1)     69        1024 channels    16 channels      4 channels
                                                      (1 reserved
                                                      as a spare)
FICON Express4   0 (1)     84        336 channels     4 channels       4 channels
FICON Express2*  0 (1)     84        336 channels     4 channels       4 channels
FICON Express*   0 (1)     60        120 channels     2 channels       2 channels
ICB-4            0 (1)     8         16 links (2,3)   2 links          1 link
ISC-3            0 (1)     12        48 links (2)     4 links          1 link
HCA2-O           0 (1)     16        32 links (3)     2 links          2 links
OSA-Express3     0         24        48 ports         2 ports for      2 ports
                                                      10 GbE
OSA-Express2     0         24        48 ports         2 or 1           2 ports/
                                                      (10 GbE has 1)   1 port
Crypto Express2  0 (4)     8         16 PCI-X         2 PCI-X          2 PCI-X
                                     adapters         adapters         adapters

1. Minimum of one I/O feature (ESCON, FICON) or one Coupling Link
   (PSIFB, ICB-4, ISC-3) required.
2. Maximum number of Coupling Links combined (IFBs, ICB-4s, and active
   ISC-3 links) cannot exceed 64 per server.
3. ICB-4 and 12x IB-DDR are not included in the maximum feature count
   for I/O slots but are included in the CHPID count.
4. Initial order of Crypto Express2 is 4 PCI-X adapters (two features). Each
   PCI-X adapter can be configured as a coprocessor or an accelerator.
* Available only when carried forward on an upgrade from z990 or z9 EC.

Processor Unit Features

Model  Books/  CPs    IFLs/   zAAPs/  ICFs   Standard  Standard
       PUs            uIFLs   zIIPs          SAPs      Spares
E12    1/17    0-12   0-12/   0-6/    0-12   3         2
                      0-11    0-6
E26    2/34    0-26   0-26/   0-13/   0-16   6         2
                      0-25    0-13
E40    3/51    0-40   0-40/   0-20/   0-16   9         2
                      0-39    0-20
E56    4/68    0-56   0-56/   0-28/   0-16   10        2
                      0-55    0-28
E64    4/77    0-64   0-64/   0-32/   0-16   11        2
                      0-63    0-32

A minimum of one CP, IFL, or ICF must be purchased on every model.
One zAAP and one zIIP may be purchased for each CP purchased.

Standard Memory

z10 EC Model  Minimum  Maximum
E12           16 GB    352 GB
E26           16 GB    752 GB
E40           16 GB    1136 GB
E56           16 GB    1520 GB
E64           16 GB    1520 GB

Memory cards include: 8 GB, 16 GB, 32 GB, 48 GB and 64 GB. (Fixed HSA
not included.)

Channels

z10 EC Model        E12   E26   E40   E56   E64
ESCON Min           0     0     0     0     0
ESCON Max           960   1024  1024  1024  1024
FICON Express4 Min  0     0     0     0     0
FICON Express2 Min  0     0     0     0     0
FICON Express Min   0     0     0     0     0
FICON Express4 Max  256   336   336   336   336
FICON Express2 Max  256   336   336   336   336
FICON Express Max   120   120   120   120   120

A minimum of one I/O feature (ESCON, FICON) or one Coupling Link is required.
* Available only when carried forward on an upgrade from z9 EC or z990.
44
Coupling Links

Links      PSIFB   ICB-4        ISC-3   IC      Max Links
           0-32*   0-16*        0-48    0-32    Total External + Internal
                   (except E64)                 links = 64

* Maximum of 32 IFB + ICB-4 links on System z10 EC. ICB-4 not supported
on Model E64.

Cryptographic Features

Crypto Express2 Feature*
Minimum  0
Maximum  8

* Each feature has 2 PCI-X adapters; each adapter can be configured as a
coprocessor or an accelerator.

z10 EC Frame and I/O Configuration Content: Planning for I/O

The following diagrams show the capability and flexibility
built into the I/O subsystem. All machines are shipped with
two frames, the A-Frame and the Z-Frame, and can have
between one and three I/O cages. Each I/O cage has 28
I/O slots.

[Diagram: Z-Frame and A-Frame, CEC with a single I/O cage]

I/O Feature Type    Features  Maximum
ESCON               24        360 channels
FICON Express2/4    24        96 channels
FICON Express       24        48 channels
OSA-Express2        24        48 ports
OSA-Express3 LR     24        48 ports
Crypto Express2     8         16 adapters

[Diagram: Z-Frame and A-Frame, CEC with two I/O cages (1st and 2nd)]

I/O Feature Type    Features  Maximum
ESCON               48        720 channels
FICON Express2/4    48        192 channels
FICON Express       48        96 channels
OSA-Express2        24        48 ports
OSA-Express3 LR     24        48 ports
Crypto Express2     8         16 adapters

[Diagram: Z-Frame and A-Frame, CEC with three I/O cages (1st, 2nd, and 3rd)]

I/O Feature Type    Features  Maximum
ESCON               69        1024 channels
FICON Express2/4    84        336 channels
FICON Express       60        120 channels
OSA-Express2        24        48 ports
OSA-Express3 LR     24        48 ports
Crypto Express2     8         16 adapters

OSA-Express3 and OSA-Express2 Features

Features      Min  Max  Maximum      Increments      Purchase
                        Connections  per Feature     Increments
OSA-Express3  0    24   96           2 ports for     2 ports
                                     10 GbE
OSA-Express2  0    24   48           2 or 1          2 ports/
                                     (10 GbE has 1)  1 port

General Information:
• ESCON configured in 4-port increments. Up to a maximum 69
  cards, 1024 channels.
• OSA-Express2 can be Gigabit Ethernet (GbE), 1000BASE-T
  Ethernet or 10 GbE.
• OSA-Express can be Gigabit Ethernet (GbE), 1000BASE-T
  Ethernet or Fast Ethernet.
• If ICB-3 is required on the system, it will use up a single I/O slot
  for every 2 ICB-3 to accommodate the STI-3 card.

Note: In the first and second I/O cage, the last domain in the I/O
cage is normally used for ISC-3 and ICB-3 links. When the first 6
domains in an I/O cage are full, additional I/O cards will be installed
in the next I/O cage. When all the first 6 domains in all I/O cages
are full and no Coupling link or PSC cards are required, the last
domain in the I/O cage will be used for other I/O cards, making a
total of 28 per cage.
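The coupling-link rules above (per-type ranges, the 32-link cap on IFB plus ICB-4 combined, the 64-link ceiling on external plus internal links, and the lack of ICB-4 support on Model E64) can be expressed as one validation check. The sketch below is purely illustrative — the function name and interface are invented here, not part of any IBM configuration tool:

```python
# Hypothetical configurator check based on the z10 EC coupling-link
# limits quoted above: PSIFB 0-32, ICB-4 0-16 (none on E64),
# ISC-3 0-48, IC 0-32; IFB + ICB-4 <= 32; all links combined <= 64.

def coupling_links_valid(model, psifb=0, icb4=0, isc3=0, ic=0):
    per_type_ok = (0 <= psifb <= 32 and 0 <= icb4 <= 16
                   and 0 <= isc3 <= 48 and 0 <= ic <= 32)
    if model == "E64" and icb4 > 0:
        return False              # ICB-4 is not supported on Model E64
    if psifb + icb4 > 32:         # max 32 IFB + ICB-4 links per server
        return False
    # Total external + internal links may not exceed 64
    return per_type_ok and (psifb + icb4 + isc3 + ic) <= 64

print(coupling_links_valid("E26", psifb=16, icb4=8, isc3=24, ic=8))  # True
print(coupling_links_valid("E64", icb4=2))                           # False
```

The second call fails only because of the E64 restriction; the same link mix is valid on any other model.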

45
System z10 EC Physical Characteristics

System z10 EC Environmentals

Utility power
Model  1 I/O Cage  2 I/O Cages  3 I/O Cages
E12    9.70 kW     13.26 kW     13.50 kW
E26    13.77 kW    17.51 kW     21.17 kW
E40    16.92 kW    20.66 kW     24.40 kW
E56    19.55 kW    23.29 kW     27.00 kW
E64    19.55 kW    23.29 kW     27.50 kW

Heat output
Model  1 I/O Cage     2 I/O Cages    3 I/O Cages
E12    33.1 kBTU/hr   46.0 kBTU/hr   46.0 kBTU/hr*
E26    47.7 kBTU/hr   61.0 kBTU/hr   73.7 kBTU/hr
E40    58.8 kBTU/hr   72.0 kBTU/hr   84.9 kBTU/hr
E56    67.9 kBTU/hr   81.2 kBTU/hr   93.8 kBTU/hr
E64    67.9 kBTU/hr   81.2 kBTU/hr   93.8 kBTU/hr

* Note: Model E12 has sufficient Host Channel Adapter capacity for 58 I/O
cards only.

z10 EC and z9 EC Dimension Comparison

                     System z10 EC         System z9 EC
# of Frames          2 Frames              2 Frames
                     (IBF contained        (IBF contained
                     w/in 2 frames)        w/in 2 frames)
Height (w/ covers)   201.5 cm / 79.3 in    194.1 cm / 76.4 in
Width (w/ covers)    156.8 cm / 61.7 in    156.8 cm / 61.7 in
Depth (w/ covers)    180.3 cm / 71.0 in    157.7 cm / 62.1 in
Height Reduction     180.9 cm / 72.1 in    178.5 cm / 70.3 in
Width Reduction      None                  None
Machine Area         2.83 sq. meters /     2.49 sq. meters /
                     30.44 sq. feet        26.78 sq. feet
Service Clearance    5.57 sq. meters /     5.45 sq. meters /
                     60.00 sq. feet        58.69 sq. feet
                     (IBF contained        (IBF contained
                     w/in the frame)       w/in the frame)
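For data-center planning, the heat-output column tracks the utility-power column through the standard conversion 1 kW ≈ 3.412 kBTU/hr; the published figures follow this conversion to within a few percent (IBM's values include rounding and model-specific adjustments). A minimal sketch of the conversion, not an IBM planning tool:

```python
# Convert input power (kW) to approximate heat load (kBTU/hr).
# 1 kW = 3412.142 BTU/hr, i.e. about 3.412 kBTU/hr.
KBTU_PER_KW = 3.412142

def heat_load_kbtu_per_hr(power_kw: float) -> float:
    """Approximate heat dissipated, in kBTU/hr, for a given input power."""
    return round(power_kw * KBTU_PER_KW, 1)

# Model E12 with one I/O cage draws 9.70 kW:
print(heat_load_kbtu_per_hr(9.70))   # 33.1 — the table's published value
```

The same calculation for the 27.50 kW E64 three-cage configuration yields 93.8 kBTU/hr, again matching the table.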

46
Coupling Facility – CF Level of Support

CF Level  Function                                    z10 EC  z9 EC/z9 BC  z890/z990
15        Increasing the allowable tasks in the CF    X       X
          from 48 to 112
14        CFCC Dispatcher Enhancements                        X            X
13        DB2 Castout Performance                             X            X
12        z990 Compatibility                                  X            X
          64-bit CFCC Addressability                          X            X
          Message Time Ordering                               X            X
          DB2 Performance                                     X            X
          SM Duplexing Support for zSeries                    X            X
11        z990 Compatibility                                  X            X
          SM Duplexing Support for 9672 G5/G6/R06
10        z900 GA2 Level
9         Intelligent Resource Director                       X            X
          IC3 / ICB3 / ISC3 Peer Mode                         X            X
          MQSeries® Shared Queues                             X            X
          WLM Multi-System Enclaves                           X            X
8         Dynamic ICF Expansion into shared ICF Pool          X            X
          Systems-Managed Rebuild                             X            X
7         Shared ICF partitions on server models              X            X
          DB2 Delete Name Optimization                        X            X

Note: zSeries 900/800 and prior generation servers are not supported with
System z10 for Coupling Facility or Parallel Sysplex levels.

47
Publications

New ITSO Redbooks

IBM System z10 Technical Introduction            SG24-7515
IBM System z10 Technical Guide                   SG24-7516
IBM System z10 Capacity on Demand                SG24-7504
Getting Started with InfiniBand on System z10
  and System z9                                  SG24-7539

The following publications are available in the Library section of
Resource Link:

IBM System z10 System Overview                   SA22-1084
IBM System z10 Parts Catalog                     GC28-6869
IBM System z10 Installation Manual -
  Physical Planning (IMPP)                       GC28-6865
IBM System z10 PR/SM Planning Guide              SB10-7153
IBM System z10 Installation Manual               GC28-6864
IBM System z10 Service Guide                     GC28-6866
IBM System z10 Safety Inspection Guide           GC28-6870
System z Safety Notices                          G229-9054
Application Programming Interfaces for Java      API-JAVA
Application Programming Interfaces               SB10-7030
Capacity on Demand User’s Guide                  SC28-6871
CHPID Mapping Tool User’s Guide                  GC28-6825
Common Information Model (CIM)
  Management Interfaces                          SB10-7154
Coupling Facility Channel I/O Interface
  Physical Layer                                 SA23-0395
ESCON and FICON CTC Reference                    SB10-7034
ESCON I/O Interface Physical Layer               SA23-0394
FICON I/O Interface Physical Layer               SA24-7172
Hardware Management Console
  Operations Guide (V2.10.0)                     SC28-6867
IOCP User’s Guide                                SB10-7037
Maintenance Information for Fiber Optic Links    SY27-2597
OSA-Express Customer’s Guide                     SA22-7935
OSA-ICC User’s Guide                             SA22-7990
Planning for Fiber Optic Links                   GA23-0367
SCSI IPL - Machine Loader Messages               SC28-6839
Service Guide for HMCs and SEs                   GC28-6861
Service Guide for Trusted Key Entry
  Workstations                                   GC28-6862
Standalone IOCP User’s Guide                     SB10-7152
Support Element Operations Guide
  (Version 2.10.0)                               SC28-6868
System z10 Functional Matrix                     ZSW01335

48
Endnotes

* All statements regarding IBM future direction and intent
are subject to change or withdrawal without notice and
represent goals and objectives only.

** This is a comparison of the z10 EC 64-way and the z9
EC S54 and is based on LSPR mixed workload average
running z/OS 1.8.

*** Comparison is versus x86 Blade servers without
virtualization, reflecting a current-day consolidation.
Reductions will vary by the number and age of the x86
servers being consolidated.

© Copyright IBM Corporation 2008
IBM Corporation
New Orchard Rd.
Armonk, NY 10504
U.S.A.
Produced in the United States of America
02/08
All Rights Reserved

References in this publication to IBM products or services do not imply that
IBM intends to make them available in every country in which IBM operates.
Consult your local IBM business contact for information on the products,
features, and services available in your area.

IBM, IBM eServer, the IBM logo, the e-business logo, APPN, CICS, DB2,
ECKD, ESCON, FICON, Geographically Dispersed Parallel Sysplex, GDPS,
HiperSockets, IMS, Lotus, MQSeries, MVS, OS/390, Parallel Sysplex,
PR/SM, Processor Resource/Systems Manager, RACF, Rational, Redbooks,
Resource Link, REXX, RMF, Sysplex Timer, System z, System z9, System
z10, TotalStorage, WebSphere, z9, z10, z/Architecture, z/OS, z/VM, z/VSE,
and zSeries are trademarks or registered trademarks of the International
Business Machines Corporation in the United States and other countries.

InfiniBand is a trademark and service mark of the InfiniBand Trade
Association.

Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Sun Microsystems, Inc. in the United States or other countries.

Linux is a registered trademark of Linus Torvalds in the United States, other
countries, or both.

UNIX is a registered trademark of The Open Group in the United States and
other countries.

Microsoft, Windows and Windows NT are registered trademarks of Microsoft
Corporation in the United States, other countries, or both.

Intel is a trademark of the Intel Corporation in the United States and other
countries.

Other trademarks and registered trademarks are the properties of their
respective companies.

IBM hardware products are manufactured from new parts, or new and used
parts. Regardless, our warranty terms apply.

Performance is in Internal Throughput Rate (ITR) ratio based on
measurements and projections using standard IBM benchmarks in a controlled
environment. The actual throughput that any user will experience will vary
depending upon considerations such as the amount of multiprogramming
in the user’s job stream, the I/O configuration, the storage configuration,
and the workload processed. Therefore, no assurance can be given that
an individual user will achieve throughput improvements equivalent to the
performance ratios stated here.

All performance information was determined in a controlled environment.
Actual results may vary. Performance information is provided “AS IS” and
no warranties or guarantees are expressed or implied by IBM.

Photographs shown are engineering prototypes. Changes may be
incorporated in production models.

This equipment is subject to all applicable FCC rules and will comply with
them upon delivery.

Information concerning non-IBM products was obtained from the suppliers
of those products. Questions concerning those products should be
directed to those suppliers.

All customer examples described are presented as illustrations of how
these customers have used IBM products and the results they may have
achieved. Actual environmental costs and performance characteristics may
vary by customer.

Prices subject to change without notice. Contact your IBM representative
or Business Partner for the most current pricing in your geography.

ZSO03018-USEN-00

49
