
IBM z13 Overview and Related Tidbits
March 17, 2015

Tim Raley
IBM zISV Lab

Original Charts by Harv Emery
IBM Washington Systems Center

Copyright IBM Corporation 2015


Trademarks
The following are trademarks of the International Business Machines Corporation in the United States, other countries, or both.

Not all common law marks used by IBM are listed on this page. Failure of a mark to appear does not mean that IBM does not use the mark nor does it mean that the product is not
actively marketed or is not significant within its relevant market.
Those trademarks followed by ® are registered trademarks of IBM in the United States; all others are trademarks or common law marks of IBM in the United States.

For a more complete list of IBM Trademarks, see www.ibm.com/legal/copytrade.shtml:

*BladeCenter, CICS, DataPower, DB2, e-business (logo), ESCON, eServer, FICON, GDPS, IBM, IBM (logo), IMS, MVS,
OS/390, POWER6, POWER6+, POWER7, Power Architecture, PowerVM, PureFlex, PureSystems, S/390, ServerProven,
Sysplex Timer, System p, System x, System z, System z9, System z10, Tivoli, WebSphere, X-Architecture, z Systems, z9,
z10, z13, z/Architecture, z/OS, z/VM, z/VSE, zEnterprise, zSeries

The following are trademarks or registered trademarks of other companies.

Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries.
Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom.
Oracle, Java and all Java-based trademarks are trademarks of Oracle Corporation and/or its affiliates in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of
Intel Corporation or its subsidiaries in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office.
IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce.
* All other products may be trademarks or registered trademarks of their respective companies.

Notes:
Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will
experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed.
Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here.
IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.
All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual
environmental costs and performance characteristics will vary depending on individual customer configurations and conditions.
This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without
notice. Consult your local IBM business contact for information on the product or services available in your area.
All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance,
compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.

Page 2 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
Important References

References

• IBM z13 Technical Introduction
  http://publib-b.boulder.ibm.com/redpieces/abstracts/sg248250.html?Open

• IBM z13 Technical Guide
  http://publib-b.boulder.ibm.com/redpieces/abstracts/sg248251.html?Open

• IBM z13 Configuration Setup
  http://publib-b.boulder.ibm.com/redpieces/abstracts/sg248260.html?Open

• z Systems Simultaneous Multithreading Revolution
  http://publib-b.boulder.ibm.com/abstracts/redp5144.html?Open
Statements of Direction

Statements of Direction
 Removal of support for Classic Style User Interface on the Hardware Management
Console and Support Element: The IBM z13 will be the last z Systems server to support
Classic Style User Interface. In the future, user interface enhancements will be focused on
the Tree Style User Interface.

 The IBM z13 will be the last z Systems server to support FICON Express8 channels: IBM
z13 will be the last high-end server to support FICON Express8. Enterprises should begin
migrating from FICON Express8 channel features (#3325, #3326) to FICON Express16S
channel features (#0418, #0419). FICON Express8 will not be supported on future high-
end z Systems servers as carry forward on an upgrade.

 The IBM z13 will be the last z Systems server to offer ordering of FICON Express8S
channel features. Enterprises that have 2 Gb device connectivity requirements must carry
forward these channels.

 The IBM z13 will be the last generation of z Systems hardware servers to support
configuring OSN CHPID types. OSN CHPIDs are used to communicate between an
operating system instance running in one logical partition and the IBM Communication
Controller for Linux on z Systems (CCL) product in another logical partition on the same
CPC. See announcement letter #914-227 dated 12/02/2014 for details regarding
withdrawal from marketing for the CCL product.

All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on
these Statements of General Direction is at the relying party's sole risk and will not create liability or obligation for IBM.

Statements of Direction
 Enhanced RACF password encryption algorithm for z/VM: In a future deliverable an
enhanced RACF/VM password encryption algorithm is planned. This support will be
designed to provide improved cryptographic strength using AES-based encryption in
RACF/VM password algorithm processing. This planned design is intended to provide better
protection for encrypted RACF password data in the event that a copy of the RACF database
becomes inadvertently accessible.

 IBM intends that a future release of IBM CICS Transaction Server for z/OS will support 64-
bit SDK for z/OS, Java Technology Edition, Version 8 (Java 8). This support will enable the
use of new facilities delivered by IBM z13 which are exploited by Java 8, including Single
Instruction Multiple Data (SIMD) instructions for vector operations and simultaneous
multithreading (SMT).

 z/VM support for Single Instruction Multiple Data (SIMD): In a future deliverable IBM intends
to deliver support to enable z/VM guests to exploit the Vector Facility for z/Architecture
(SIMD).

 Removal of support for Expanded Storage (XSTORE): z/VM V6.3 is the last z/VM release
that will support Expanded Storage (XSTORE) for either host or guest usage. The IBM z13
server family will be the last z Systems server to support Expanded Storage (XSTORE).


Statements of Direction
 The IBM z13 will be the last z Systems server to support running an operating system in
ESA/390 architecture mode; all future systems will only support operating systems running
in z/Architecture mode. This applies to operating systems running native on PR/SM as well
as operating systems running as second level guests. IBM operating systems that run in
ESA/390 mode are either no longer in service or only currently available with extended
service contracts, and they will not be usable on systems beyond IBM z13. However, all 24-
bit and 31-bit problem-state application programs originally written to run on the ESA/390
architecture will be unaffected by this change.

 Stabilization of z/VM V6.2 support: The IBM z13 server family is planned to be the last z
Systems server supported by z/VM V6.2 and the last z Systems server that will be
supported where z/VM V6.2 is running as a guest (second level). This is in conjunction with
the statement of direction that the IBM z13 server family will be the last to support ESA/390
architecture mode, which z/VM V6.2 requires. z/VM V6.2 will continue to be supported until
December 31, 2016, as announced in announcement letter #914-012.

 Product Delivery of z/VM on DVD/Electronic only: z/VM V6.3 will be the last release of z/VM
that will be available on tape. Subsequent releases will be available on DVD or
electronically.

Statements of Direction
 Removal of support for the Hardware Management Console Common Information Model
(CIM) Management Interface: IBM z13 will be the last z Systems server to support the
Hardware Management Console Common Information Model (CIM) Management Interface. The
Hardware Management Console Simple Network Management Protocol (SNMP) and Web
Services Application Programming Interfaces (APIs) will continue to be supported.

 IBM intends to provide support for the Read Diagnostic Parameters Extended Link Service
command for Fibre Channel SANs as defined in the T11.org FC-LS-3 draft standard.
Support for the Read Diagnostic Parameters Extended Link Service command is intended
to improve SAN reliability and fault isolation.

 Removal of an option for the way shared logical processors are managed under PR/SM
LPAR: The IBM z13 will be the last high-end server to support selection of the option to "Do
not end the timeslice if a partition enters a wait state" when the option to set a processor
run time value has been previously selected in the CPC RESET profile. The CPC RESET
profile applies to all shared logical partitions on the machine, and is not selectable by
logical partition.


Statements of Direction
 IBM plans to accept for review certification requests from cryptography providers by the
end of 2015, and intends to support the use of cryptography algorithms and equipment
from providers meeting IBM's certification requirements in conjunction with z/OS and z
Systems processors in specific countries. This is expected to make it easier for
customers to meet the cryptography requirements of local governments.
 KVM offering for IBM z Systems: In addition to the continued investment in z/VM, IBM
intends to support a Kernel-based Virtual Machine (KVM) offering for z Systems that will
host Linux on z Systems guest virtual machines. The KVM offering will be software that
can be installed on z Systems processors like an operating system and can co-exist with
z/VM virtualization environments, z/OS, Linux on z Systems, z/VSE and z/TPF. The KVM
offering will be optimized for z Systems architecture and will provide standard Linux and
KVM interfaces for operational control of the environment, as well as providing the
required technical enablement for OpenStack for virtualization management, allowing
enterprises to easily integrate Linux servers into their existing infrastructure and cloud
offerings.
 In the first half of 2015, IBM intends to deliver a GDPS/Peer to Peer Remote Copy
(GDPS/PPRC) multiplatform resiliency capability for customers who do not run the z/OS
operating system in their environment. This solution is intended to provide IBM z
Systems customers who run z/VM and their associated guests, for instance, Linux on z
Systems, with similar high availability and disaster recovery benefits to those who run on
z/OS. This solution will be applicable for any IBM z Systems announced after and
including the zBC12 and zEC12.
IBM z13 Launch

IBM z13 platform positioning

Platform Core Capabilities:
• Transaction Processing
• Data Serving
• Mixed Workloads
• Operational Efficiency
• Trusted and Secure Computing
• Reliable, Available, Resilient
• Virtually Limitless Scale

• The world's premier transaction and data engine, now enabled for the mobile generation
• The integrated transaction and analytics system for right-time insights at the point of impact
• The world's most efficient and trusted cloud system that transforms the economics of IT
IBM z13: Advanced system design optimized for digital business

[Chart: generations from z9 EC through z13 compared on four measures]
• System I/O bandwidth: 172.8, 288, and 384 GB/sec* on earlier generations, up to 832 GB/sec* on z13
• Memory: from 512 GB on earlier generations up to 10 TB on z13
• PCI (Processor Capacity Index, IBM MIPS) for a 1-way: 600, 902, 1202, 1514, and 1695 on z13
• Customer processors: z9 EC 54-way, z10 EC 64-way, z196 80-way, zEC12 101-way, z13 141-way

* No server can fully exploit its maximum I/O bandwidth
IBM z13 and zBX Model 004

IBM z13 (2964):
• Available March 9, 2015
• 5 models: NE1, NC9, N96, N63, N30
• Up to 141 customer-configurable engines; sub-capacity offerings for up to 30 CPs
• PU (engine) characterization: CP, IFL, ICF, zIIP, SAP, IFP (no zAAPs); SIMD instructions; SMT for IFL and zIIP
• On Demand capabilities (CoD): CIU, CBU, On/Off CoD, CPE
• Memory up to 10 TB; up to 10 TB per LPAR (if no FICON Express8); 96 GB fixed HSA
• Channels: PCIe Gen3 16 GBps channel buses; six CSSs, up to 85 LPARs; 4 subchannel sets per CSS; FICON Express16S or 8S (8 carry forward); OSA-Express5S (4S carry forward); HiperSockets up to 32
• Flash Express; zEnterprise Data Compression; RDMA over CE (RoCE) with SR-IOV support; Crypto Express5S
• Parallel Sysplex clustering: PCIe Coupling, Internal Coupling, and InfiniBand Coupling
• IBM zAware: z/OS and Linux on z Systems
• Operating systems: z/OS, z/VM, z/VSE, z/TPF, Linux on z Systems

IBM zBX Model 4 (2458-004):
• Available March 9, 2015
• Upgrade ONLY: a stand-alone Ensemble node converted from an installed zBX Model 2 or 3; doesn't require an owning CPC
• Management: Unified Resource Manager
• zBX racks (up to 4) with: dual 1U Support Elements and dual INMN and IEDN TOR switches in the 1st rack; HMC LAN attached (no CPC BPH attachment); 2 or 4 PDUs per rack; up to 8 BladeCenter H chassis for 14 blades each; 10 GbE and 8 Gbps FC connectivity; Advanced Management Modules; redundant connectivity, power, and cooling
• Up to 112 single-wide IBM blades: IBM BladeCenter PS701 Express, IBM BladeCenter HX5 7873, and IBM WebSphere DataPower Integration Appliance XI50z for zEnterprise (M/T 2462-4BX) with Firmware 7.0
• Operating systems: AIX 5.3 and higher, Linux on System x, Microsoft Windows Server on System x
• Hypervisors: KVM Hypervisor on System x, PowerVM Enterprise Edition
IBM z13 Key Planning and Support Dates
 January 14, 2015 Announcement Day
Available to IBMers:
First day order submission
Capacity Planning Tool (GA Level) for z13 and zBX Model 4
zPCR, zTPM, zCP3000, zBNA, zSoftCap, zSCON
Note: Customer zPCR Version C8.7A January 30, 2015
SAPR Guide and SA Confirmation Checklists for z13 and zBX Model 4

Available to Clients and IBMers:


Resource Link Support for z13 and zBX Model 4:
Essential planning publications, Tools, Driver Exception Letter, Education, ...
ITSO Redbooks (Draft Versions):
IBM z13 Technical Introduction, SG24-8250
IBM z13 Technical Guide, SG24-8251
IBM z13 Configuration Setup, SG24-8260
IBM z Systems Connectivity Handbook, SG24-5444
IBM z Systems Functional Matrix, REDP-5157
Real-time Fraud Detection Analytics on z Systems, SG24-8066
PSP Buckets and PTFs for z13 Features and Functions available at GA

 February 27, 2015


Available to Clients and IBMers: CFSizer Tool

 March 9, 2015 General Availability


Available to Clients and IBMers: Remaining product publications

IBM z13 Availability Dates (1 of 2)
 March 9, 2015 General Availability
Features and functions for the z13 with GA Driver 22
z13 Models N30, N63, N96, NC9, and NE1
z196 and zEC12 air-cooled EC upgrades to z13 air-cooled or z13 water-cooled
z196 and zEC12 water-cooled EC upgrades to z13 water-cooled
z196 with zBX Model 002 upgrades to z13 and zBX Model 004 standalone
zEC12 with zBX Model 003 upgrades to z13 and zBX Model 004 standalone
zBX Model 002 and zBX Model 003 upgrades to zBX Model 004 (#0512) standalone
Field installed features and conversions on z13 that are delivered solely through a modification to the
machine's Licensed Internal Code (LIC)
Limited options to increase or decrease IBM BladeCenter HX5 blade server or IBM BladeCenter PS701
blade server entitlements on zBX upgrades to Model 004 standalone

 March 13, 2015


z/VM V6.3 exploitation support for Simultaneous multithreading (SMT)

 April 14, 2015


TKE 8.0 LIC (#0877) on zEC12 and zBC12
TKE Workstation (#0847) on zEC12 and zBC12
TKE Smart Card Reader (#0891) on zEC12 and zBC12
TKE additional smart cards (#0892) on zEC12 and zBC12
4767 TKE Crypto Adapter (#0894) on zEC12 and zBC12
Fill and Drain Kit (#3380) for zEC12
Fill and Drain adapter kit (#3379) for zEC12
Universal Lift Tool/Ladder (#3105) for zEC12 and zBC12
Universal Lift Tool upgrade kit (#3103) for zEC12 and zBC12
IBM z13 Availability Dates (2 of 2)
 May 30, 2015
Limited MES features for zBX Model 004 standalone

 June 26, 2015


Field install of MES hardware features for z13 Models N30, N63, N96, NC9, and NE1
z/VM V6.3 support for Multi-VSwitch Link Aggregation
Support for 256 Coupling CHPIDs
HMC STP Panel Enhancements: Initialize Time, Set Date and Time, Time Zone, View-Only Mode
Fibre Channel Protocol (FCP) channel configuration discovery and debug
Improved High Performance FICON for z Systems (zHPF) I/O Execution at Distance
IBM zAware support for Linux on z Systems

 September 25, 2015


FICON Dynamic Routing
Forward Error Correction (FEC) for FICON Express16S
Storage Area Network (SAN) Fabric I/O Priority

IBM z13 Hardware Package

z13 Model NE1 or NC9 Radiator (Air) Cooled: Under the Covers (Front View)

[Figure labels:]
• Space for optional Integrated Battery Features (IBFs)
• Two 1U Support Element (SE) system units
• Power components
• Last (5th) PCIe I/O drawer
• Processor drawers (1st bottom to 4th top) with Flexible Support Processors (FSPs) and I/O fanouts
• Space for the first four I/O drawers; the top two can be 8-slot for carry-forward FICON Express8, and all can be PCIe I/O drawers
• N+2 pumps and blowers for the radiator air cooling unit
• 2 SE displays with keyboards
z13 CPC Drawer (Top View)

• Two physical nodes, left and right (Node 0 and Node 1)
• Each logical node:
  - Three PU chips
  - One SC chip (480 MB L4 cache)
  - Three memory controllers: one per CP chip
  - Five DDR3 DIMM slots per memory controller: 15 total per node (one bank of 5 never used in Node 1)
• Each drawer:
  - Six PU chips with water cooling for 39 active PUs (42 in Model NE1)
  - Two SC chips (960 MB L4 cache) with heat sinks for air cooling
  - Populated DIMM slots: 20 or 25 DIMMs to support up to 2,560 GB of addressable memory (3,200 GB RAIM)
  - Two Flexible Support Processors
  - Ten fanout slots for PCIe I/O drawer fanouts or PCIe coupling fanouts
  - Four fanout slots (rear) for IFB I/O drawer fanouts or PSIFB coupling link fanouts
IBM z13 Model Structure and Performance

z13 System Offering Overview: Machine Type 2964

Models (customer-usable engines), with concurrent upgrade between them:
• N30 (30-way), N63 (63-way), N96 (96-way), NC9 (129-way), NE1 (141-way)
• Upgrade paths: z196 (with zBX Model 2) and zEC12 (with zBX Model 3) to z13

Processors:
• 39 PUs per drawer (42 in NE1)
• Sub-capacity available up to 30 CPs
• 2 standard spare PUs per system

Memory:
• System minimum = 64 GB with separate 96 GB HSA
• Maximum: ~10 TB / ~2.5 TB per drawer
• RAIM memory design
• Purchase increments 32 to 512 GB

I/O:
• Up to 14 fanouts per drawer
• Up to 10 PCIe Gen3 fanouts: 1-port 16 GBps I/O or 2-port 8 GBps PCIe coupling
• Up to 4 IFB HCA fanouts: 2-port 6 GBps I/O, 2-port 12x PSIFB, or 4-port 1x PSIFB

On upgrade from zEC12 or z196:
• Detach zBX Model 3 or 2 and upgrade it to zBX Model 4 (option: move zBX Model 3)
• Feature-convert installed zAAPs to zIIPs (default) or another processor type
• For installed On Demand records, change temporary zAAPs to zIIPs; stage the record
z13 Continues the CMOS Mainframe Heritage Begun in 1994

[Chart: uniprocessor single-thread PCI improvements and GHz increases, 2000 to 2015]

• z900 (2000): 180 nm SOI, 16 cores**, 770 MHz - full 64-bit z/Architecture
• z990 (2003): 130 nm SOI, 32 cores**, 1.2 GHz - superscalar, modular SMP scaling
• z9 EC (2005): 90 nm SOI, 54 cores**, 1.7 GHz - system-level scaling
• z10 EC (2008): 65 nm SOI, 64 cores**, 4.4 GHz (+159% GHz), 902 PCI* (+50%) - high-frequency core, 3-level cache
• z196 (2010): 45 nm SOI, 80 cores**, 5.2 GHz (+18% GHz), 1202 PCI* (+33%) - OOO core, eDRAM cache, RAIM memory, zBX integration
• zEC12 (2012): 32 nm SOI, 101 cores**, 5.5 GHz (+6% GHz), 1514 PCI* (+26%) - OOO and eDRAM cache improvements, PCIe Flash, architecture extensions for scaling
• z13 (2015): 22 nm SOI, 141 cores**, 5.0 GHz (-9% GHz), 1695 PCI* (+12%) - SMT and SIMD, up to 10 TB of memory

* Capacity and performance ratios are based on measurements and projections using standard IBM benchmarks in a controlled environment; actual throughput that any user will experience will vary. PCI (MIPS) tables are NOT adequate for making comparisons of z Systems processors. Use IBM Capacity Planning Tools!
** Number of PU cores for customer use
z13 Full and Sub-Capacity CP Offerings

CP capacity* relative to the full-capacity uniprocessor (measured with z/OS V2.1):
• 701 = 100% = 1695 PCI (IBM MIPS)
• 601 = 63% = 1068 PCI
• 501 = 44% = 746 PCI
• 401 = 15% = 250 PCI

• Up to 30 sub-capacity CPs may be ordered on ANY z13 model (N30 through NE1). If 31 or more CPs are ordered, all must be full 7xx capacity.
• 232 CP capacity settings, including 400 for no CPs
• The 4xx, 5xx, and 6xx settings are MSU sub-capacity offerings
• Specialty engines run at full capacity; SMT is optional for IFL and zIIP
• SMT average capacity* benefit: 25% for zIIP, 20% for IFL
• z13 NE1 maximum CP capacity is 1.4 times the zEC12 HA1
• IFL PVU rating remains at 120, the same as zEC12, z196 and z10 EC; no increase for SMT
• Entitlement to purchase two zIIPs for each CP purchased

*Capacity and performance ratios are based on measurements and projections using standard IBM benchmarks in a controlled environment. Actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload.


z13 z/Architecture Extensions
 Two-way simultaneous multithreading (SMT) operation
Up to two active execution threads per core can dynamically share the caches, TLBs and
execution resources of each IFL and zIIP core. Changes made to support SMT are designed to
improve both core capacity in SMT mode and single thread performance.
PR/SM dispatches online logical processors to physical cores; but, an operating system with
SMT support can be configured to dispatch work to a thread on an IFL or zIIP core in single
thread or SMT mode so that HiperDispatch cache optimization is considered. (Zero, one or two
threads can be active in SMT mode). Enhanced CPU Measurement Facility monitoring support
will measure thread usage, capacity and performance.
 Core micro-architecture radically altered to increase parallelism
New branch prediction and instruction fetch front end to support SMT and to improve branch
prediction throughput.
Wider instruction decode, dispatch and completion bandwidth:
Increased to six instructions per cycle compared to three on zEC12
Larger instruction issue bandwidth: Increased to up to 10 instructions issued per cycle (2
branch, 4 FXU, 2 LSU, 2 BFU/DFU/SIMD) compared to 7 on zEC12
Greater integer execution bandwidth: Four FXU execution units
Greater floating point execution bandwidth: Two BFUs and two DFUs; improved fixed point and
floating point divide
 Single Instruction Multiple Data (SIMD) instruction set and execution: Business
Analytics Vector Processing
Data types: Integer: byte to quad-word; String: 8, 16, 32 bit; binary floating point
New instructions (139) include string operations, vector integer and vector floating point
operations: two 64-bit, four 32-bit, eight 16-bit and sixteen 8-bit operations.
Floating Point Instructions operate on newly architected vector registers (32 new 128-bit
registers). Existing FPRs overlay these vector registers.

Simultaneous Multithreading (SMT)

• Simultaneous multithreading allows instructions from one or two threads to execute on a zIIP or IFL processor core.
• SMT helps to address memory latency, resulting in an overall capacity* (throughput) improvement per core.
• The capacity improvement varies by workload. For AVERAGE workloads, the estimated capacity* of a z13:
  - zIIP is 40% greater than a zEC12 zIIP
  - IFL is 32% greater than a zEC12 IFL
  - zIIP is 72% greater than a z196 zIIP
  - IFL is 65% greater than a z196 IFL
• SMT exploitation: z/VM V6.3 + PTFs for IFLs, and z/OS V2.1 + PTFs in an LPAR for zIIPs
• SMT can be turned on or off on an LPAR-by-LPAR basis by operating system parameters. z/OS can also do this dynamically with operator commands.
• Notes:
  1. SMT is designed to deliver better overall capacity (throughput) for many workloads. Thread performance (the instruction execution rate for an individual thread) may be faster running in single-thread mode.
  2. Because SMT is not available for CPs, LSPR ratings do not include it.

[Illustration: one highway lane at 80 vs. two lanes at 50. Which road is faster? Which approach is designed for the highest volume** of traffic?]

*Capacity and performance ratios are based on measurements and projections using standard IBM benchmarks in a controlled environment. Actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload.
**Two lanes at 50 carry 25% more volume if traffic density per lane is equal.
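The slide notes that SMT is controlled by operating system parameters. As a sketch only, with parameter names to be verified against the z/OS MVS Initialization and Tuning Reference for your release, z/OS SMT exploitation for zIIPs is commonly described with a LOADxx and IEAOPTxx setup like this:

```
/* LOADxx member: present processors to z/OS at core granularity */
/* (set at IPL; required before SMT can be used)                 */
PROCVIEW CORE

/* IEAOPTxx member: run two active threads per zIIP core.        */
/* MT_ZIIP_MODE=1 reverts to single-thread mode and can be       */
/* switched dynamically with the SET OPT=xx operator command.    */
MT_ZIIP_MODE=2
```

This matches the slide's point that the LOADxx choice is static while the thread mode can be changed dynamically by operator command.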

SIMD (Single Instruction Multiple Data) Instructions
Increased parallelism to enable analytics processing

• Fewer instructions helps to improve execution efficiency
• Process elements in parallel, enabling more iterations
• Supports analytics, compression, cryptography, video/imaging processing

Value: enable new applications, offload CPU, simplify coding

Scalar (single instruction, single data): the instruction is performed for every data element, one at a time - A1+B1 is summed and stored as C1, then A2+B2 as C2, then A3+B3 as C3.

SIMD (single instruction, multiple data): the instruction is performed on every element at once - a single Sum and Store produces C1, C2, and C3 together.

IBM z13 Processors

z13 8-Core Processor Chip Detail

• Up to eight active cores (PUs) per chip
  - 5.0 GHz (vs. 5.5 GHz on zEC12)
  - L1 cache per core: 96 KB I-cache, 128 KB D-cache
  - L2 cache per core: 2 MB + 2 MB eDRAM split private L2 cache
• Single Instruction Multiple Data (SIMD)
• Single-thread or 2-way simultaneous multithreading (SMT) operation
• Improved instruction execution bandwidth:
  - Greatly improved branch prediction and instruction fetch to support SMT
  - Instruction decode, dispatch, and completion increased to 6 instructions per cycle
  - Issue of up to 10 instructions per cycle
  - Integer and floating point execution units
• On-chip 64 MB eDRAM L3 cache, shared by all cores
• I/O buses: one GX++ I/O bus, two PCIe I/O buses
• Memory Controller (MCU): interface to the controller on memory DIMMs; supports the RAIM design
• 14S0 22 nm SOI technology: 17 layers of metal, 3.99 billion transistors, 13.7 miles of copper wire
• Chip area: 678.8 mm2 (28.4 x 23.9 mm), 17,773 power pins, 1,603 signal I/Os
z13 Storage Control (SC) Chip Detail

 CMOS 14S0 22nm SOI Technology


15 Layers of metal
7.1 Billion transistors
12.4 Miles of copper wire
 Chip Area
28.4 x 23.9 mm
678 mm2
11,950 power pins
1,707 Signal Connectors
 eDRAM Shared L4 Cache
480 MB per SC chip (non-inclusive)
224 MB L3 NIC Directory
2 SCs = 960 MB L4 per z13 drawer
 Interconnects (L4 to L4)
3 to CPs in node
1 to SC (node to node) in drawer
3 to SC nodes in remote drawers
 6 Clock domains

z13 SCM Vs zEC12 MCM Comparison
z13 Single Chip Modules (SCMs):
• Processor Unit (PU) SCM
  – 68.5mm x 68.5mm fully assembled
  – PU chip area 678 mm2
  – Eight-core chip with 6, 7 or 8 active cores
• Storage Control (SC) SCM
  – 68.5mm x 68.5mm fully assembled
  – SC chip area 678 mm2
  – 480 MB non-inclusive L4 cache per SCM
  – Non-Data Integrated Coherent (NIC) Directory for L3
• Processor Drawer – Two Nodes
  – Six PU SCMs for 39 PUs (42 PUs in Model NE1)
  – Two SC SCMs (960 MB L4)
  – N30: One Drawer, N63: Two Drawers, N96: Three Drawers, NC9 or NE1: Four Drawers

zEC12 Multi Chip Module (MCM):
• Technology
  – 96mm x 96mm with 102 glass ceramic layers
  – 7,356 LGA connections to 8 chip sites
• Six 6-core Processor (PU) chips
  – Each with 4, 5 or 6 active cores
  – 27 active processors per MCM (30 in Model HA1)
  – PU chip size 23.7 mm x 25.2 mm
• Two Storage Control (SC) chips per MCM
  – 192 MB inclusive L4 cache per SC, 384 MB per MCM
  – SC chip size 26.72 mm x 19.67 mm
• One MCM per book, up to 4 books per System

(Photos: PU chip and SC chip without heat sink or thermal cap; zEC12 MCM chip-site layout with PU 0–5, SC 0–1, and voltage sites V00/V01/V10/V11.)
z13 Logical Drawer Structure and Interconnect
(Diagram: front view of a CPC drawer with two nodes, Node 0 and Node 1, each containing three PU chips with attached memory and one SC chip; buses connect to the other drawers.)
Physical node: (Two per drawer)
 Chips
Three PU chips
One SC chip (480 MB L4 cache + 224 MB NIC Directory)
 RAIM Memory
Three Memory Controllers: One per CP Chip
Five DDR3 DIMM slots per Controller: 15 total per logical node
Populated DIMM slots: 20 or 25 per drawer
 SC and CP Chip Interconnects
X-bus: SC and CPs to each other (same node)
S-bus: SC to SC chip in the same drawer
A-bus: SC to SC chips in the remote drawers

zEC12 Book (Left) to z13 Node (Right) Cache Comparison

(Diagram compares the zEC12 book cache hierarchy with the z13 node cache hierarchy, which features a 480 MB non-inclusive, 30-way set-associative L4.)
z13 Node L4 Cache Design with Non-Data Inclusive Coherent (NIC)
Directory, Intra-Node Snoop Interface and Inter-Node Snoop Interface

zEC12 Inclusive L4 Design:
• 192 MB + 192 MB per Book
• 24-way L4 Cache
• Holds L3-owned and previously owned lines
• 6 L3s

z13 Non-Inclusive L4 Design:
• 480 MB L4 with 224 MB NIC Directory (two nodes per drawer)
• 30-way L4 Cache plus 14-way NIC
• L4 Cache holds previously owned and some L3-owned lines; NIC Directory tracks L3-owned lines
• 3 L3s

L3 locally owned lines can be accessed over the X-bus (L3 to L3) using the Intra-Node Snoop Interface without being included in L4. Inter-node snoop traffic to L4 can still be handled effectively.

z13 Processor Features (zIIP to CP 2:1 ratio)

Model | Drawers/PUs | CPs   | IFLs / uIFLs  | zIIPs | ICFs  | Std SAPs | Optional SAPs | Std. Spares | IFP
N30   | 1/39        | 0-30  | 0-30 / 0-29   | 0-20  | 0-30  | 6        | 0-4           | 2           | 1
N63   | 2/78        | 0-63  | 0-63 / 0-62   | 0-42  | 0-63  | 12       | 0-8           | 2           | 1
N96   | 3/117       | 0-96  | 0-96 / 0-95   | 0-64  | 0-96  | 18       | 0-12          | 2           | 1
NC9   | 4/156       | 0-129 | 0-129 / 0-128 | 0-86  | 0-129 | 24       | 0-16          | 2           | 1
NE1   | 4/168       | 0-141 | 0-141 / 0-140 | 0-94  | 0-141 | 24       | 0-16          | 2           | 1

z13 Models N30 to NC9 use drawers with 39 cores. The Model NE1 has 4 drawers with 42 cores.
The maximum number of logical ICFs or logical CPs supported in a CF logical partition is 16
The integrated firmware processor (IFP) is used for PCIe I/O support functions
Concurrent Drawer Add (CDA) is available to upgrade in steps from model N30 to model NC9
1. At least one CP, IFL, or ICF must be purchased in every machine
2. Two zIIPs may be purchased for each CP purchased if PUs are available. This remains true for sub-capacity CPs and for banked CPs.
3. On an upgrade from z196 or zEC12, installed zAAPs are converted to zIIPs by default. (Option: Convert to another engine type)
4. uIFL stands for Unassigned IFL
5. The IFP is conceptually an additional, special purpose SAP
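The purchase rules in the notes above can be expressed as a small validation sketch. This is an illustrative check, not an IBM configuration tool; the model limits are the customer-PU maximums from the table, and SAP/IFP/spare accounting is deliberately simplified:

```python
# Illustrative sketch of the z13 ordering rules stated above (not an IBM
# tool). Limits are the maximum customer-configurable PUs per model.
MODEL_MAX_CUST_PUS = {"N30": 30, "N63": 63, "N96": 96, "NC9": 129, "NE1": 141}

def valid_order(model, cps, ifls, ziips, icfs, extra_saps=0):
    """Return True if a characterization request follows the stated rules."""
    limit = MODEL_MAX_CUST_PUS[model]
    if cps + ifls + ziips + icfs + extra_saps > limit:
        return False          # more engines than customer PUs available
    if cps + ifls + icfs < 1:
        return False          # rule 1: at least one CP, IFL, or ICF
    if ziips > 2 * cps:
        return False          # rule 2: at most two zIIPs per purchased CP
    return True
```

For example, `valid_order("N30", 1, 0, 2, 0)` passes (one CP with two zIIPs), while a third zIIP on a single CP would be rejected.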
Processor Unit (Core) Locations: Customer, SAP, IFP and Spare
Model | Total Cust PUs | 1st Drawer (Cust PUs/SAPs/IFP/Spare) | 2nd Drawer | 3rd Drawer | 4th Drawer
NE1   | 141 | 34/6/1/1 | 35/6/0/1 | 36/6/0/0 | 36/6/0/0
NC9   | 129 | 31/6/1/1 | 32/6/0/1 | 33/6/0/0 | 33/6/0/0
N96   | 96  | 31/6/1/1 | 32/6/0/1 | 33/6/0/0 | —
N63   | 63  | 31/6/1/1 | 32/6/0/1 | —        | —
N30   | 30  | 30/6/1/2 | —        | —        | —

 PUs can be configured as CPs, IFLs, Unassigned IFLs, zIIPs, ICFs or Additional SAPs
 zAAPs discontinued as per SOD
 zIIP to CP purchase ratio is 2:1
 Additional SAPs + Permanent SAPs may not exceed 32
 Any unconfigured PU can act as an additional Spare PU

 Upgrades available between any models


 Achieved via concurrent drawer add from model N30 to model NC9
 Achieved via combination of drawer add and drawer replacement to model NE1

IBM z13 Memory

z13 5-Channel RAIM Memory Controller Overview
(RAIM = Redundant Array of Independent Memory)
Layers of Memory Recovery:
• ECC
  – Powerful 90B/64B Reed-Solomon code
• DRAM Failure
  – Marking technology; no half sparing needed
  – 2 DRAMs can be marked
  – Call for replacement on third DRAM
• Lane Failure
  – CRC with Retry
  – Data lane sparing
  – CLK RAIM with lane sparing
• DIMM Failure (discrete components, VTT Reg.)
  – CRC with Retry
  – Data lane sparing
  – CLK RAIM with lane sparing
• DIMM Controller ASIC Failure
  – RAIM Recovery
• Channel Failure
  – RAIM Recovery

(Diagram: memory controller MCU0 driving five DIMM channels, Ch0–Ch4, each with CRC, differential clock, and a controller ASIC on the DIMM; the fifth channel provides the RAIM redundancy.)

z13: Each memory channel supports only one DIMM
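Channel-level RAIM recovery can be illustrated with a simplified analogy. Note the assumption here: real z13 RAIM uses the 90B/64B Reed-Solomon code named above, not plain XOR; XOR parity across five channels is used below only to show how a failed channel's data can be rebuilt from the surviving four:

```python
# Simplified RAIM analogy (assumption: XOR parity stands in for the actual
# Reed-Solomon code). Four data channels plus one redundancy channel; any
# single failed channel can be reconstructed from the other four.

def xor_channels(channels):
    """XOR corresponding bytes of several equal-length channels together."""
    out = bytearray(len(channels[0]))
    for ch in channels:
        for i, b in enumerate(ch):
            out[i] ^= b
    return bytes(out)

data = [b"\x11\x22", b"\x33\x44", b"\x55\x66", b"\x77\x88"]
parity = xor_channels(data)          # the fifth (redundancy) channel

# Channel 2 fails: rebuild its contents from the survivors plus parity.
surviving = [data[0], data[1], data[3], parity]
rebuilt = xor_channels(surviving)
assert rebuilt == data[2]
```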

z13 Purchasable Addressable Memory Ranges

Model | Standard Memory (GB) | Flexible Memory (GB)
N30   | 64 - 2464            | NA
N63   | 64 - 5024            | 64 - 2464
N96   | 64 - 7584            | 64 - 5024
NC9   | 64 - 10144           | 64 - 7584
NE1   | 64 - 10144           | 64 - 7584

• Purchased Memory – Memory available for assignment to LPARs
• Hardware System Area (HSA) – Standard 96 GB of addressable memory for system use outside customer memory
• Standard Memory – Provides minimum physical memory required to hold customer purchased memory plus 96 GB HSA
• Flexible Memory – Provides additional physical memory needed to support activation of base customer memory and HSA on a multiple-drawer z13 with one drawer out of service
• Plan Ahead Memory – Provides additional physical memory needed for a concurrent upgrade (LIC CC change only) to a preplanned target customer memory

z13 Standard and Flexible Addressable Memory Offerings
Memory Increment (GB) | Offered Memory Sizes (GB) | Memory Maximum Notes (GB)
32  | 64, 96, 128, 160, 192 |
64  | 256, 320, 384, 448 |
96  | 544, 640, 736, 832, 928 |
128 | 1056, 1184, 1312, 1440 |
256 | 1696, 1952, 2208, 2464, 2720, 2976, 3232, 3488, 3744, 4000, 4256, 4512, 4768, 5024, 5280, 5536, 5792, 6048 | 2464 = N30 Standard, N63 Flexible; 5024 = N63 Standard, N96 Flexible
512 | 6560, 7072, 7584, 8096, 8608, 9120, 9632, 10144 | 7584 = N96 Standard, NC9 and NE1 Flexible; 10144 = NC9 and NE1 Standard
z13 Memory DIMMs and Plugging
 z13 Memory Plugging
Six memory controllers per drawer, one per PU chip, three per node
Each memory controller supports five DIMM slots
Four or five memory controllers per drawer will be populated (20 or 25 DIMMs)
Different memory controllers may have different size DIMMs
 Maximum Client Memory Available
Remember RAIM: 20% of DIMM memory is used only for error recovery
Minimum memory per drawer: 320 GB RAIM = 256 GB addressable
Maximum memory per drawer: 3200 GB RAIM = 2560 GB addressable
To determine maximum possible customer memory from the DIMM configuration:
Calculate addressable memory, subtract 96 GB, and round down if necessary
to an offered memory size

DIMM Size | z13 Feature (5 DIMMs) | RAIM and Addressable Size
16 GB     | #1610 | 80 GB RAIM, 64 GB Addressable Memory
32 GB     | #1611 | 160 GB RAIM, 128 GB Addressable Memory
64 GB     | #1612 | 320 GB RAIM, 256 GB Addressable Memory
128 GB    | #1613 | 640 GB RAIM, 512 GB Addressable Memory

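The sizing rule above (addressable = 80% of RAIM, subtract 96 GB HSA, round down to an offered size) can be sketched directly. Assumptions: the offered-size list below covers only the sizes up to the N30 range from the preceding slide, and integer GB arithmetic is used:

```python
# Sketch of the memory sizing arithmetic described above. RAIM has five
# channels, four of which hold data, so addressable = RAIM * 4/5; the
# 96 GB HSA is then subtracted and the result rounded down to an offered
# customer memory size (list covers the N30 range only).

OFFERED_GB = [64, 96, 128, 160, 192, 256, 320, 384, 448, 544, 640, 736,
              832, 928, 1056, 1184, 1312, 1440, 1696, 1952, 2208, 2464, 2720]

def max_customer_memory(raim_gb):
    addressable = raim_gb * 4 // 5      # 20% of RAIM is redundancy
    available = addressable - 96        # reserve the 96 GB HSA
    return max((s for s in OFFERED_GB if s <= available), default=0)

# Minimum drawer: 320 GB RAIM -> 256 GB addressable -> 160 GB offered
# Maximum drawer: 3200 GB RAIM -> 2560 GB addressable -> 2464 GB offered
print(max_customer_memory(320), max_customer_memory(3200))
```

The maximum-drawer result, 2464 GB, matches the N30 standard memory maximum quoted earlier.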
IBM z13
I/O Subsystem
Introduction

z Systems I/O Subsystem Internal Bus Interconnect Speeds (GBps)

• PCIe Gen3 – z13: 16 GBps
• PCIe Gen2 – zEC12/zBC12/z196/z114: 8 GBps
• InfiniBand – z10/z196/z114/zEC12/zBC12: 6 GBps
• STI – z9: 2.7 GBps
• STI – z990/z890: 2 GBps
STI: Self-Timed Interconnect
z13 Processor Drawer Connectivity for I/O and Coupling
• Ten PCIe fanout slots per drawer (40 maximum on a four-drawer system)
  – PCIe Gen3 one-port 16 GBps PCIe fanout: connects to a switch card for an 8-slot PCIe I/O domain (plugs in pairs)
  – ICA (PCIe-O SR) two-port 8 GBps PCIe Gen3 fanout: 150 meter fiber optic coupling link
• Four IFB HCA (GX++) fanout slots per drawer (16 maximum on a four-drawer system)
  – HCA2-C two-port 6 GBps I/O drawer fanout (plugs in pairs; carry forward, one pair only)
  – HCA3-O two-port 12x IFB Coupling Link fanout (carry forward or new build)
  – HCA3-O LR four-port 1x IFB Coupling Link fanout (carry forward or new build)
• FSP = Flexible Support Processor

CPC Drawer I/O Fanout and Flexible Support Processor Locations
(Drawer rear view, 4U: FSPs in LG01 and LG16; PCIe fanout slots LG02–LG06 and LG11–LG15; IFB fanout slots LG07–LG10; SMP connectors J01–J06 below for A-bus cables.)
• PCIe Fanout Slots (ten), slots LG02–LG06 and LG11–LG15, can support:
  – Up to 10 one-port PCIe 16 GBps I/O fanouts to support up to 10 domains in 32-slot PCIe I/O drawers
    Note: A zEC12 book with eight two-port 8 GBps PCIe fanouts supports up to 16 domains in 32-slot PCIe I/O drawers, but a z13 CPC drawer supports double the bandwidth to each domain
  – Up to 10 ICA (PCIe-O SR) two-port coupling fanouts to support up to 20 8 GBps coupling links
• IFB Fanout Slots (four), LG07–LG10, can support:
  – Up to four HCA3-O 12x InfiniBand coupling fanouts: eight 12x 6 GBps links (two per fanout)
  – Up to four HCA3-O LR 1x InfiniBand coupling fanouts: sixteen 1x 5 Gbps links (four per fanout)
    Note: A zEC12 book with eight two-port HCA3-O 12x InfiniBand coupling fanouts can support 16 12x links; a zEC12 book with eight four-port HCA3-O LR 1x InfiniBand coupling fanouts can support 32 1x links
  – Up to two two-port HCA2-C 6 GBps I/O fanouts (two 8-slot I/O drawers) with two slots left
• Slots LG01 and LG16 always have Flexible Support Processors (FSPs)
• SMP-J01 to J06 connectors are for A-bus cables to nodes in other CPC drawers

PCIe 32 I/O slot drawer
• Supports only PCIe I/O cards
  – z13: Up to five drawers
  – zEC12: Up to five drawers
• Supports 32 PCIe I/O cards, 16 front and 16 rear, vertical orientation, in four 8-card domains (0 to 3)
• Requires four 16 GBps PCIe switch cards, each connected to a 16 GBps PCIe I/O interconnect, to activate all four domains
• To support Redundant I/O Interconnect (RII) between front-to-back domain pairs 0-1 and 2-3, the two interconnects to each pair will be from two different PCIe fanouts (all four domains in one of these drawers can be activated with two fanouts)
• Concurrent field install and repair
• Requires 7 EIA units of space (12.25 inches / 311 mm)

Supported
I/O Features

z13 New Build I/O and MES Features Supported
Note - Plan Ahead for I/O drawers is not offered on z13
New Build Features
• Features – PCIe I/O drawer (32 I/O slots)
  – FICON Express16S (SX and LX, 2 SFPs, 2 CHPIDs)
  – FICON Express8S (SX and LX, 2 SFPs, 2 CHPIDs)
  – OSA-Express5S
    • 10 GbE LR and SR (1 SFP, 1 CHPID)
    • GbE SX, LX, and 1000BASE-T (2 SFPs, 1 CHPID)
  – 10 GbE RoCE Express (2 supported SR ports)
  – zEDC Express
  – Crypto Express5S
  – Flash Express (Technology Refresh)

 PCIe Coupling Link Feature (Fanout)


ICA PCIe-O SR two 8GBps PCIe Gen3 Coupling Link
 InfiniBand Coupling Features (Fanouts)
HCA3-O two 12x 6GBps InfiniBand DDR Coupling Links
HCA3-O LR four 1x 5Gbps InfiniBand DDR or SDR Coupling Links

z13 Carry Forward I/O Features Supported
Note Plan Ahead for I/O drawers is not offered on z13
Carry Forward Features
• Features – PCIe I/O drawer (32 I/O slots)
  – FICON Express8S (SX and LX, 2 SFPs, 2 CHPIDs)
  – OSA-Express5S (All)
  – OSA-Express4S (All)
  – 10 GbE RoCE Express (Both ports supported on z13)
  – zEDC Express
  – Flash Express
  – Not Supported: Crypto Express4S
• Features – I/O drawer (8 I/O slots, no MES adds)
  – FICON Express8 (SX and LX, 4 SFPs, 4 CHPIDs)
    SoD: IBM plans not to support FICON Express8 on the next high end z Systems server.
  – Not Supported: ESCON, FICON Express4, OSA-Express3, ISC-3, and Crypto Express3

 InfiniBand Coupling Features (Fanouts)


HCA3-O two 12x 6GBps InfiniBand DDR Coupling Links
HCA3-O LR four 1x 5Gbps InfiniBand DDR or SDR Coupling Links
NOT Supported: HCA2-O 12x, HCA2-O LR 1x InfiniBand Coupling Links

z13 Carry Forward (Field Upgrade) Rules for I/O Features
(All PCIe I/O Features Can be Carried Forward)

FICON Express8 Features Carried Forward | 8-slot I/O Drawers Required (CF or Add) | Maximum PCIe Drawers/Slots
0          | 0 | 5/160
1 to 8     | 1 | 4/128
9 to 16    | 2 | 3/96
17 or more | Not Supported

Empty slots in a carried forward drawer can NOT be filled by MES.


SoD: IBM plans not to support FICON Express8 on the next high end z Systems server.

Note: Large I/O configurations may require two or more CPC drawers.

z13 CPC Drawer and I/O Drawer Locations
• Drawer locations are based on the front view of the machine: Frame A (right), Frame Z (left), and the EIA unit location of the lower left corner of the drawer
• Locations are reported in eConfig AO Data reports along with PCHIDs for I/O definition
• CPC drawers are populated from bottom to top:
  – Drawer 1: A15A – N30, N63, N96, NC9 and NE1
  – Drawer 2: A19A – N63, N96, NC9 and NE1
  – Drawer 3: A23A – N96, NC9 and NE1
  – Drawer 4: A27A – NC9 and NE1
• Old technology 8-slot I/O drawers (if present) populate top down in Frame Z:
  – Drawer 1: Z22B, Drawer 2: Z15B
• PCIe 32-slot I/O drawers populate in remaining locations:
  – PCIe I/O Drawer 1: Z22B, Z15B or Z08B
  – PCIe I/O Drawer 2: Z15B, Z08B, or Z01B
  – PCIe I/O Drawer 3: Z08B, Z01B, or A32A
  – PCIe I/O Drawer 4: Z01B
  – PCIe I/O Drawer 5: A32A
(Frame diagram: Frame A also holds the SE servers, IBFs, BPA, HUB, radiator, and CPC drawers; Frame Z holds I/O drawer locations Z22B, Z15B, Z08B and Z01B, with location A32A in Frame A.)

z13 I/O Drawer Layout
• An I/O drawer slot is a physical location in the A or Z frame for an I/O drawer or PCIe I/O drawer to be inserted
• A PCIe I/O drawer uses 1 I/O frame slot = 7u
  – 32 two-port I/O slots = 64 ports each
  – 5 drawers maximum = 160 slots, 320 ports total
• An 8-slot I/O drawer uses 0.7 frame slot = 5u
  – 8 four-port I/O slots = 32 ports total
  – 2 drawers maximum (carry forward ONLY), in I/O frame slots 1 and 2 only
• The 8-slot I/O drawers (if present) populate top down in the Z frame:
  – Drawer 1: Z22B, Drawer 2: Z15B
• PCIe 32-slot I/O drawers populate in remaining locations, starting in the Z frame:
  – PCIe I/O Drawer 1: Z22B, Z15B or Z08B
  – PCIe I/O Drawer 2: Z15B, Z08B, or Z01B
  – PCIe I/O Drawer 3: Z08B, Z01B, or A32A
  – PCIe I/O Drawer 4: Z01B
  – PCIe I/O Drawer 5: A32A
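The slot-and-port arithmetic in the drawer descriptions above can be verified in a few lines (Python used here purely as a calculator):

```python
# Port arithmetic from the drawer descriptions above.
pcie_slots_per_drawer, pcie_ports_per_slot = 32, 2
old_slots_per_drawer, old_ports_per_slot = 8, 4

pcie_max_slots = 5 * pcie_slots_per_drawer            # 160 slots maximum
pcie_max_ports = pcie_max_slots * pcie_ports_per_slot # 320 ports total
old_ports = old_slots_per_drawer * old_ports_per_slot # 32 ports per drawer

print(pcie_max_slots, pcie_max_ports, old_ports)
```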
Channel Subsystems
Subchannel Sets
and Partitions

Logical channel subsystems (CSS), subchannel sets (SS), Function
Definitions, and Logical Partitions on z13
 Six Logical Channel Subsystems (CSS) each with four subchannel sets (SS) and up to 256 channels
Maximum channel count includes channels spanned to more than one CSS
Total physical channels depend on I/O features configured
Up to 63.75k base IODEVICEs in SS 0 and 64 k alias IODEVICEs each in SS 1 to SS 3 per CSS
 FUNCTION definition support for virtualized RoCE and zEDC independent of CSS
• Up to 85 Logical Partitions: 15 each in CSS 0 – 4, 10 in CSS 5 (Partitions B – F reserved)
Only channels and IODEVICEs defined in its CSS can be assigned to an LPAR
Any defined FUNCTION can be assigned to any LPAR

z13:
• Function definitions for up to 16 RoCE features (31 LPARs each) and 8 zEDC features (15 LPARs each), independent of CSS
• CSS 0 through CSS 4: up to 15 logical partitions each; CSS 5: up to 10 logical partitions
• Each CSS has four subchannel sets – SS 0: 63.75k, SS 1: 64k, SS 2: 64k, SS 3: 64k – and up to 256 channels
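The device-number arithmetic behind the subchannel-set figures above is straightforward to spell out (63.75k base devices in SS 0, plus 64k aliases in each of SS 1 through SS 3, per CSS, across six CSSs):

```python
# Subchannel-set capacity arithmetic from the figures above.
base_ss0 = 65536 - 256        # 63.75k base devices in SS 0 (63.75 * 1024)
alias_per_ss = 65536          # 64k alias devices in each of SS 1 - SS 3

per_css = base_ss0 + 3 * alias_per_ss
total = 6 * per_css           # six logical channel subsystems on z13

print(per_css, total)
```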

Cryptography

Crypto Express5S (New Build)
 Native PCIe card (FC #0890)
Resides in the PCIe I/O drawer
Requires CPACF Enablement (FC #3863)
 New Crypto Module
Designed to more than double Crypto Express4S
performance (Added L2 Cache, New Crypto ASIC
and processor upgrade)
Designed to support up to 85 domains for logical
partitions or z/VM guests
• Designed to Meet Physical Security Standards
  – FIPS 140-2 Level 4
  – ANSI X9.97
  – Payment Card Industry (PCI) HSM
  – Deutsche Kreditwirtschaft (DK)
(Concept picture: Crypto Express5S, FC #0890)
• New Functions, Standards and Compliance
  – Drivers: NIST via FIPS standards and implementation guidance requirements, emerging banking standards, and strengthening of cryptographic standards for attack resistance
  – VISA Format Preserving Encryption (VFPE) for credit card numbers
  – Enhanced public key Elliptic Curve Cryptography (ECC) for users such as Chrome, Firefox, and Apple's iMessage
New Trusted Key Entry Workstation
Workstation and LIC FC #0847 with new crypto module and TKE LIC 8.0 is required
Required: EP11 (PKCS #11) Mode, Recommended: Common Cryptographic Architecture (CCA) Mode
Additional Smart Cards (FC #0892) Support for stronger encryption than previous cards

z13 Compression and Cryptography Accelerator
• Coprocessor dedicated to each core (was shared by two cores on z196)
  – Independent compression engine
  – Independent cryptographic engine
  – Available to any processor type (CP, zIIP, IFL)
  – Owning processor is busy when its coprocessor is busy
  – Instructions available to any processor type
• Data compression/expansion engine
  – Static dictionary compression and expansion
• CP Assist for Cryptographic Function
  – Supported by z/OS, z/VM, z/VSE, z/TPF, and Linux on z Systems
  – DES, TDES – Clear and Protected Key (TDES: up to double the throughput of zEC12 CPACF)
  – AES128, 192, 256 – Clear and Protected Key (AES: up to double the throughput of zEC12 CPACF)
  – SHA-1 (160 bit) – Clear Key
  – SHA-256, -384, -512 – Clear Key (SHA: up to 3.5 times the throughput of zEC12 CPACF)
  – PRNG, DRNG – Clear Key
(Diagram: core with second-level cache, compression engine with 16K dictionary, and crypto cipher/hash engines.)
• CPACF FC 3863 (no charge, export control) is required to enable some functions and to support Crypto Express5S
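The clear-key hash functions listed above are standard algorithms, so their outputs can be demonstrated with Python's `hashlib`. Note the assumption: `hashlib` here is a software illustration of the same algorithms only; it does not itself demonstrate CPACF hardware acceleration (on Linux on z Systems, crypto libraries can route such operations to CPACF):

```python
# Software illustration of the CPACF clear-key hash algorithms
# (SHA-1/-256/-512). hashlib computes the same algorithms in software;
# hardware acceleration is not exercised here.
import hashlib

msg = b"z13 CPACF illustration"
digests = {
    "SHA-1":   hashlib.sha1(msg).hexdigest(),
    "SHA-256": hashlib.sha256(msg).hexdigest(),
    "SHA-512": hashlib.sha512(msg).hexdigest(),
}
for name, hexd in digests.items():
    print(name, len(hexd) * 4, "bit")   # digest width in bits
```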

FICON

FICON Express16S SX and 10KM
• For FICON, zHPF, and FCP environments
  – CHPID types: FC and FCP
  – 2 PCHIDs/CHPIDs
• Auto-negotiates to 4, 8, or 16 Gbps
  – 2 Gbps connectivity NOT supported
  – FICON Express8S will remain available to order for 2 Gbps connectivity
• Increased bandwidth compared to FICON Express8S
• 10KM LX – 9 micron single mode fiber
  – Unrepeated distance: 10 kilometers (6.2 miles)
  – Receiving device must also be LX
• SX – 50 or 62.5 micron multimode fiber (OM2/OM3)
  – Distance variable with link data rate and fiber type
  – Receiving device must also be SX
• 2 channels of LX or SX (no mix): LX/LX or SX/SX
• Small form factor pluggable (SFP) optics
  – Concurrent repair/replace action for each SFP
(Card: two 4/8/16 Gbps SFP+ ports, each with an HBA ASIC and IBM ASIC behind a PCIe switch. FC 0418 10KM LX, FC 0419 SX.)
New FICON Function on z13
 FICON Express16S - 16 Gbps Link Speeds
Designed with the DS8870 to provide substantially improved DB2 transactional latency and up to 32%
reduction in elapsed time for I/O bound batch jobs.
• 32K devices per FICON channel, on all FICON channel types
Up to 85 Logical Partitions: More flexibility for server consolidation
 Fourth subchannel set for each LCSS
Designed to eliminate single points of failure for storage after a disk failure by facilitating the
exploitation of IBM DS8870 multi-target Metro Mirror storage replication with IBM Geographically
Dispersed Parallel Sysplex (IBM GDPS) and IBM Tivoli Storage Productivity Center for Replication
HyperSwap
 Preserve Virtual WWPNs for NPIV configured FCP channels
Designed to simplify migration to a new-build z13
 Improved zHPF Performance at Extended Distance GA June 26, 2015
Can reduce the impact of distance on I/O response times by 50% for large data writes, providing
significant response time improvements for multi-site IBM Parallel Sysplex environments
 Forward Error Correction (FEC) on FICON Express16S GA September 25, 2015
Designed to work with supporting storage capabilities of the Fibre Channel link protocol to enable
operation at higher speeds, over longer distances, with reduced power and higher throughput, while
retaining traditional FICON reliability and robustness
 FICON Dynamic Routing (EBR/OxID compatibility) GA September 25, 2015
Designed to enable exploitation of SAN dynamic routing polices in the fabric to lower cost and improve
performance for supporting I/O devices
 Mainframe SAN Fabric Priority GA September 25, 2015
Mainframe SAN Fabric Priority, with exploiting storage products, extends the z/OS Work Load
Manager (WLM) to the SAN infrastructure providing improved resilience and autonomic capabilities
while enhancing the value of FICON Dynamic Routing

zHPF and FICON Performance* on z Systems

*This performance data was measured in a controlled environment running an I/O driver program under z/OS. The actual throughput or performance that any user will experience will
vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed.

FCP Performance* on z Systems
I/Os per second (4k block size, read/writes/mix, channel 100% utilized):
• FE4 (4 Gbps, z10): 60,000
• FE8 (8 Gbps, z196, z10): 84,000
• FE8S (8 Gbps, z196, z114, zEC12, zBC12): 92,000
• FE16S (16 Gbps, z13 GA1): 110,000 – a 20% increase

MegaBytes per second, full-duplex (large sequential read/write mix):
• FE4 (4 Gbps, z10): 520
• FE8 (8 Gbps, z196, z10): 770
• FE8S (8 Gbps, zEC12, zBC12): 1600
• FE16S (16 Gbps, z13 GA1): 2600 – a 63% increase
*This performance data was measured in a controlled environment running an I/O driver program under z/OS. The actual throughput or performance that any user will experience will
vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed.

Networking

OSA-Express5S 1000BASE-T Ethernet Feature - PCIe I/O Drawer
• PCIe form factor card supported by PCIe I/O drawer (FC #0417)
  – One two-port PCHID/CHPID per card
  – Half the density of the OSA-Express3 version
• Two small form factor pluggable (SFP+) transceivers (D1 top, D2 bottom)
• Auto-negotiates to 100 Mbps or 1 Gbps, full duplex only
• RJ-45 connector on Cat 5 or better copper cable
• Operates at line speed
 CHPID TYPE Support:

Mode TYPE Description


OSA-ICC OSC TN3270E, non-SNA DFT, OS system console operations
QDIO OSD TCP/IP traffic when Layer 3, Protocol-independent when Layer 2
Non-QDIO OSE TCP/IP and/or SNA/APPN/HPR traffic
Unified Resource Manager OSM Connectivity to intranode management network (INMN)
OSA for NCP (LP-to-LP) OSN NCPs running under IBM Communication Controller for Linux (CCL)

Note: OSA-Express5S features are designed to have the same performance and to
require the same software support as equivalent OSA-Express4S features.

OSA-Express5S fiber optic PCIe I/O drawer
• 10 Gigabit Ethernet (10 GbE)
  – CHPID types: OSD, OSX
  – Single mode (LR) or multimode (SR) fiber
  – One LR or SR SFP+ (D1 top), 1 PCHID/CHPID
  – Small form factor pluggable (SFP+) transceiver
  – LC duplex connector
  – FC #0415 10 GbE LR, FC #0416 10 GbE SR
• Gigabit Ethernet (GbE)
  – CHPID types: OSD
  – Single mode (LX) or multimode (SX) fiber
  – Two LX or SX SFP+ (D1 top, D2 bottom), 1 PCHID/CHPID
  – Small form factor pluggable (SFP+) transceivers
  – LC duplex connector
  – FC #0413 GbE LX, FC #0414 GbE SX
Note: OSA-Express5S features are designed to have the same performance as equivalent OSA-Express4S features.

OSA-Express5S and 4S 10 GbE Performance* (laboratory)
(Laboratory measurements, MBps payload throughput)
• Inbound streams, 1492-byte MTU: OSA-E3 615 → OSA-E5S or 4S 1120 (80% increase)
• Mixed streams, 1492-byte MTU: OSA-E3 1180 → OSA-E5S or 4S 1680 (40% increase)
• Inbound streams, 8000-byte MTU: OSA-E3 680 → OSA-E5S or 4S 1180 (70% increase)
• Mixed streams, 8000-byte MTU: OSA-E3 1240 → OSA-E5S or 4S 2080 (70% increase)
Notes:
• 1 megabyte per second (MBps) is 1,048,576 bytes per second
• MBps represents payload throughput (does not count packet and frame headers)
• MTU = Maximum Transmission Unit
*This performance data was measured in a controlled environment running an I/O driver program under z/OS. The actual throughput or performance that any user will experience will
vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed.
New Networking Function on z13

 10 GbE RoCE Express Virtualization Support


Designed to enable both ports on a RoCE Express feature and to allow sharing of each RoCE
Express feature by up to 31 logical partitions

 Static VCHID Support for HiperSockets Channels


Designed to facilitate resource management by providing a consistent identifier for HiperSockets
channels

• OSA OSD Channel Multi VSWITCH Link Aggregation (LAG) Support – GA June 26, 2015
Designed to improve z/VM V6.3 virtual networking capabilities and to permit
sharing of supporting OSD channels among multiple z/VM V6.3 images

10 GbE RoCE
Express

Optimize server to server networking transparently
• HiperSockets-like capability across systems
• Shared Memory Communications – RDMA (SMC-R): exploits RDMA over Converged Ethernet (RoCE) with qualities of service support for dynamic failover to redundant hardware
• Typical client use cases:
  – Help to reduce both latency and CPU resource consumption over traditional TCP/IP for communications across z/OS systems
  – Any z/OS TCP sockets based workload can seamlessly use SMC-R without requiring any application changes
• Measured benefits (zEC12 and zBC12 with 10GbE RoCE Express):
  – Up to 50% CPU savings for FTP file transfers across z/OS systems versus standard TCP/IP **
  – Up to 48% reduction in response time and 10% CPU savings for a sample CICS workload exploiting IPIC using SMC-R versus TCP/IP ***
  – Up to 40% reduction in overall transaction response time for WAS workload accessing z/OS DB2 ****
  – Up to 3X increase in WebSphere MQ messages delivered across z/OS systems ****
• Exploitation: z/OS V2.1; z/VM 6.3 support of SMC-R for guests*
* All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
** Based on internal IBM benchmarks in a controlled environment using z/OS V2R1 Communications Server FTP client and FTP server, transferring a 1.2GB binary file using SMC-R (10GbE RoCE Express feature) vs standard TCP/IP (10GbE
OSA Express4 feature). The actual CPU savings any user will experience may vary.
*** Based on internal IBM benchmarks using a modeled CICS workload driving a CICS transaction that performs 5 DPL (Distributed Program Link) calls to a CICS region on a remote z/OS system via CICS IP interconnectivity (IPIC), using 32K
input/output containers. Response times and CPU savings measured on z/OS system initiating the DPL calls. The actual response times and CPU savings any user will experience will vary.
**** Based on projections and measurements completed in a controlled environment. Results may vary by customer based on individual workload, configuration and software levels.

z13 - 10GbE RoCE Express Feature
• Designed to support high performance system interconnect
  – Shared Memory Communications over RDMA (SMC-R) architecture exploits RDMA over Converged Ethernet (RoCE)
  – Shares memory between peers
  – Read/write access to the same memory buffers without application changes
  – Designed to greatly increase transaction rates with low latency and reduced CPU cost
 Configuration (FC 0411, 10GbE RoCE Express)
- z13: both 10 GbE SFP+ ports enabled
- z13: support for up to 31 logical partitions
- A switched connection requires an enterprise-class 10 GbE switch with SR optics, Global Pause enabled, and Priority Flow Control (PFC) disabled
- Point-to-point connection is supported
- Either connection supported to z13, zEC12 and zBC12
- Not defined as a CHPID and does not consume a CHPID number
- Up to 16 features supported
- Link distance up to 300 meters over OM3 50 micron multimode fiber (OM3 fiber recommended)
 Exploitation and Compatibility
- z/OS V2.1
- IBM SDK for z/OS Java Technology Edition, Version 7.1 (February 24, 2014)
- z/VM V6.3 support for z/OS V2.1 guest exploitation (June 27, 2014)
- Linux on z Systems: IBM is working with Linux distribution partners to include support in future releases*

*Note: All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these
Statements of General Direction is at the relying party's sole risk and will not create liability or obligation for IBM.

Page 70 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
z13: 10GbE RoCE Express Sample Configuration
On z13 each 10GbE RoCE Express feature (FC 0411) can support up to 31 logical partitions (two or more features for each server recommended)

[Diagram: two z13 CECs, each with an IFP and a pair of RoCE Express features. LPAR A, LPAR B and LPAR C (z/OS V2.1), plus LPAR D and LPAR E, on the first z13 connect through two 10 GbE switches to LPAR 1 (z/VM V6.3 + z/OS V2.1), LPAR 2 and LPAR 3 (z/OS V2.1), plus LPAR 4 and LPAR 5, on the second z13. OSA/OSD connections run through the same switches.]

 This configuration allows redundant SMC-R connectivity among LPAR A, LPAR C, LPAR 1, LPAR 2, and LPAR 3
 LPAR to LPAR OSD connections are required to establish the SMC-R communications
- 1 GbE OSD connections can be used instead of 10 GbE
- OSD connections can flow through the same 10 GbE switches or different switches
 z13 exclusive: simultaneous use of both 10 GbE ports on 10GbE RoCE Express features
Page 71 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
I/O Feature
Summary

Page 72 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
z13 I/O Features, Channels, Ports, Domains, and Functions
Feature | Offered as of | Max # features | Channels, Ports, Domains, Functions | Increments per feature | Purchase increments

FICON (maximum of 160 features (320 channels) total, only if all are FICON Express16S or 8S features):
FICON Express16S (1) | NB (2) | 160 | 320 channels maximum (4) | 2 channels/feature | 2 channels
FICON Express8S (3) | NB | 160 | 320 channels maximum (4) | 2 channels/feature | 2 channels
FICON Express8 | CF (2) | 16 | 64 channels maximum | 4 channels/feature | CF only

Networking (no more than 48 networking features total, counting features of all types; one channel per feature):
OSA-Express5S | NB | 48 | 96 ports maximum | 2 ports (10 GbE: 1) | 1 feature
OSA-Express4S | CF | 48 | 96 ports maximum | 2 ports (10 GbE: 1) | CF only

Crypto (no more than 16 crypto features):
Crypto Express5S (1) | NB | 16 | 85 domains/adapter | 1 PCIe adapter | 2, then 3-16

Special purpose (these features provide native PCIe FUNCTIONs or Storage Class Memory (SCM)):
10GbE RoCE Express | NB | 16 | 31 FUNCTIONs/adapter | 2 ports/adapter | 1 feature
Flash Express (1) (FC 0403) | NB | 8 (4 pairs) | 1.4 TB SCM per pair | 1 PCIe adapter | 2 (1 pair)
Flash Express (FC 0402) | CF | 8 (4 pairs) | 1.4 TB SCM per pair | 1 PCIe adapter | CF pairs only
zEDC Express | NB | 8 | 15 FUNCTIONs/adapter | 1 PCIe adapter | 1 feature

Notes:
1. Bold blue text indicates new features for z13
2. NB = New Build (and Carry Forward if previously offered); CF = Carry Forward only
3. FICON Express8S is offered on new build to support point-to-point 2 Gbps attachment
4. Any 8-slot drawer limits maximum memory in any LPAR to 1 TB; one 8-slot drawer limits maximum FICON channels to 288, two 8-slot
drawers limit maximum FICON channels to 256. (These numbers are REDUCED by 4 for each empty slot in an 8-slot drawer.)
Page 73 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
Native PCIe
Technology

Page 74 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
PCIe I/O Features: Introducing Native (AKA Direct Attach) PCIe
Flash Express, zEDC Express and 10GbE RoCE Express

 Traditional z Systems I/O PCIe Features
- One z Systems ASIC per channel/PCHID
- Definition and LPAR assignment:
  - HCD/IOCP CHPID definition, or
  - Firmware definition outside HCD/IOCP is possible for some. For example, Crypto Express5S is not defined as a CHPID
- Virtualization and support by Channel Subsystem LIC on System Assist Processors (SAPs)
- Traditional z Systems I/O PCIe features: FICON Express16S and 8S, OSA-Express5S and 4S, Crypto Express5S

 Native PCIe Features
- z Systems ASIC role moved to the new z Systems I/O Controller (zIOC) in the PCIe I/O fanout or the processor
- Definition and LPAR assignment:
  - HCD/IOCP FUNCTION definition, similar to CHPID definition but with different rules, or
  - Firmware definition outside HCD/IOCP is possible for some. For example, Flash Express is not defined with FUNCTIONs
- Virtualization and support by the zIOC and Redundancy Group LIC running on the Integrated Firmware Processor (IFP) (Note: NOT applicable to Flash Express)
- Native PCIe features: zEDC Express, 10GbE RoCE Express, and Flash Express

Page 75 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
Native PCIe FUNCTION definition, assignment and mapping
 Conceptually similar to channel (CHPID) or I/O device definition, with different rules

 FUNCTION definition in HCD or HCM to create IOCP input
- Uniquely identified by a hexadecimal FUNCTION Identifier (FID) in the range 000-FFF
- NOT assigned to a Channel Subsystem, so ANY LPAR can be assigned any FUNCTION
- Has a PARTITION parameter that dedicates it to ONE LPAR or allows reconfiguration among a group of LPARs. (A FUNCTION can NOT be defined as shared.)
- If the intended PCIe hardware supports multiple partitions, has a decimal Virtual Function Identifier (VF=) in the range 1-n, where n is the maximum number of partitions the PCIe feature supports. Examples: a RoCE feature supports up to 31 partitions; a zEDC Express feature supports up to 15
- May have other parameters specific to the PCIe feature. For example, 10GbE RoCE Express requires a Physical Network Identifier (PNETID=)

 FUNCTION mapping to hardware
- Assign a Physical Channel Identifier (PCHID=) to identify the hardware feature in a specific PCIe I/O drawer and slot to be used for the defined FUNCTION
- Methods:
  - Manually, using the configurator (eConfig) AO Data report
  - With assistance, using the CHPID Mapping Tool with eConfig Configuration Report File (CFR) input
- Note: Unlike CHPIDs, multiple FUNCTIONs can be mapped to the SAME PCHID. This is conceptually similar to mapping multiple InfiniBand coupling CHPIDs to the same adapter and port.
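Putting those parameters together, a FUNCTION statement in the IOCP input might look like the sketch below. The FID, VF, partition names, PNETID and PCHID values are invented for illustration only; consult the z13 IOCP documentation for the exact statement syntax:

```
FUNCTION FID=061,VF=1,PART=((LP01),(LP02,LP03)),PNETID=PNETA,PCHID=140
```

Here LP01 is the partition the FUNCTION is initially assigned to, LP02 and LP03 are candidates for reconfiguration, and a second FUNCTION with a different FID and VF number could be mapped to the same PCHID.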

Page 76 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
IBM zEnterprise Data Compression
(zEDC)

Page 77 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
New hardware data compression accelerator can reduce CPU and storage
Every day 2.5 quintillion bytes of data are created

Compress your data 4X* (efficient system data compression):
- Efficiently compress active data by providing a low CPU, high performance, dedicated compression accelerator
- Industry standard compliant compression for cross platform data distribution **

Typical Client Use Cases:
- Significant disk savings with trivial CPU cost for large BSAM/QSAM sequential files
- More efficiently store audit data in application logs
- Reduce the amount of data needed for data migration and backup/restore **
- Transparent acceleration of Java compressed applications **

 Up to 118X reduction in CPU and up to 24X throughput improvement when zlib uses zEDC **

Requires: zEDC Express (Data Ready); z/OS V2.1 zEDC; z/VM 6.3 support for guests***
* The amount of data sent to an SMF logstream can be reduced by up to 75% using zEDC compression reducing logger overhead
** These results are based on projections and measurements completed in a controlled environment. Results may vary by customer based on specific workload, configuration and software levels
*** All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
Page 78 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
zEDC Express feature
 Designed to support high performance data serving by providing:
A tenfold increase in data compression rates with much lower CP consumption than using
software compression, including software compression that exploits the z Systems Compression
Call instruction (z Systems hardware data compression)
A reduction in storage capacity required (creation of storage white space) that in turn reduces
the cost of storage acquisition, deployment, operation, and management

 Configuration:
One compression accelerator per PCIe I/O feature card
Supports concurrent requests from up to 15 LPARs
Sustained aggregate 1 GBps compression rate
when given large block inputs
Up to 8 features supported by zBC12 or zEC12
Minimum two feature configuration recommended

 Exploitation and Compatibility
- First offered on zEC12 GA2 and zBC12; zEDC Express is FC 0420
- z/OS support:
  - z/OS V2.1: hardware exploitation for SMF log data in September 2013; for IBM SDK for z/OS Java Technology Edition Version 7 Release 1 (5655-W43 and 5655-W44) with APAR OA43869 for zip and zlib compression; for BSAM and QSAM in 1Q2014 in PTFs for APAR OA42195; and for DFSMSdss and DFSMShsm SOD* for 3Q2014
  - z/OS V1.13 and V1.12: software support for decompression only, no hardware compression/decompression acceleration support

 z/VM V6.3 support for z/OS V2.1 guest: June 27, 2014
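The industry-standard claim is easy to demonstrate off-platform: zEDC produces and consumes standard DEFLATE (zlib) streams, so data compressed by the accelerator can be decompressed by any stock zlib and vice versa. The sketch below is plain Python and shows only the format round-trip, not hardware acceleration (that requires the zEDC-enabled zlib on z/OS):

```python
import zlib

# zEDC-compatible format: ordinary zlib DEFLATE. Repetitive data such as
# SMF log records compresses well, which is where the slide's CPU and
# throughput numbers come from.
original = b"SMF log data " * 1000
compressed = zlib.compress(original, level=6)
restored = zlib.decompress(compressed)

ratio = len(original) / len(compressed)   # compression ratio achieved
```

Any platform's zlib could perform the `decompress` step on a zEDC-produced stream, which is the cross-platform data distribution point above.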

*Note: All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these
Statements of General Direction is at the relying party's sole risk and will not create liability or obligation for IBM.

Page 79 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
Flash Express

Page 80 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
Why Flash Express on z13?
 Provides Storage Class Memory
- Implemented via NAND Flash SSDs (Solid State Drives) mounted in PCIe Flash Express features
- Protected by strong AES encryption done on the features
- Not defined as I/O devices or with PCIe FUNCTIONs
- Assigned to partitions similarly to Main Memory, but not in the partition Image Profile. Reconfigurable.
- Accessed using the new z Systems architected EADM (Extended Asynchronous Data Mover) Facility
- Designed to enable extremely responsive paging of 4K pages to improve z/OS availability
- Enables pageable large (1 MB) pages

 Flash Express Exploitation
- z/OS V2.1, V1.13 + PTFs and RSM Enablement Offering
- With z/OS Java SDK 7 SR3: CICS TS V5.1, WAS Liberty Profile V8.5, DB2 V11, IMS 12 and higher; SOD: Traditional WAS 8.0.0x*
- CFCC Level 19 with WebSphere MQ for z/OS Version 7: MQ Shared Queue overflow support (March 31, 2014)
- Linux on z Systems: SLES 11 SP3 and RHEL 6.4

Measured benefits:
 10x faster response time and 37% increase in throughput compared to disk for morning transition
 28% improvement in DB2 throughput leveraging Flash Express with Pageable Large Pages (PLP)
 19% reduction in total dump time for a 36 GB standalone dump
 ~25% reduction in SVC dump elapsed time

*Note: All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance
on these Statements of General Direction is at the relying party's sole risk and will not create liability or obligation for IBM.

Page 81 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
Flash Express PCIe Adapter Card (Technology Refresh)
 Four 400 GByte (G = 10^9) SSDs support 1.4 TBytes (T = 2^40) of Storage Class Memory (AES encrypted)
 Cable connections form a RAID 10 array across a pair of Flash Express cards

Page 82 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
Parallel Sysplex and
Server Time Protocol

Page 83 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
z13 Parallel Sysplex Enhancements
(Introduced with CFCC 20)
 Support for up to 141 ICF processors
The maximum number of logical processors in a CF LPAR remains at 16

 Coupling Links Support


PCIe-O SR 8 GBps 150 m
Up to 16 features (Up to 10 per drawer) = 32 ports
HCA3-O LR 1x 5 Gbps long distance links
Up to 16 features (4 per drawer) = 64 ports
HCA3-O 12x 150 m
Up to 16 features (Up to 4 per drawer) = 32 ports
Internal Coupling (Up to 32 ICP CHPIDs, 16 ICP-ICP Links)
Coupling CHPID definitions
Up to 256 (Increased from 128) June 26, 2015
The maximum defined to one CF partition remains at 128

 PCIe-O SR 8 GBps 150 m links (2 ports per feature)


Up to 4 Coupling CHPID TYPE=CS5 definitions per port, 8 per feature
Cable/point to point maximum distance options:
150 Meters with 12-pair OM4 50/125 micron fiber (Recommended)
100 Meters with 12-pair OM3 50/125 micron fiber
(Note: InfiniBand 12x links use 12-pair OM3 cabling with different connectors)
Estimated Performance Approximately Equivalent to InfiniBand 12x

 Improved Scalability and Support for Large CF Structures

Page 84 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
z13 Parallel Sysplex Coupling Connectivity
[Diagram: coupling connectivity between z13 and z196/z114, zEC12/zBC12, and other z13 servers, all of which support 12x IFB, 12x IFB3 and 1x IFB links.]

 z13 HCA3-O LR (1x IFB, 5 Gbps, 10/100 km) connects to HCA3-O LR or HCA2-O LR on z196/z114 and zEC12/zBC12
 z13 HCA3-O (12x IFB, 6 GBps, up to 150 m) connects to HCA3-O or HCA2-O on z196/z114 and zEC12/zBC12
 Integrated Coupling Adapter (ICA SR): 8 GBps, up to 150 m, z13 to z13 connectivity ONLY
 z13 to z13: HCA3-O LR (1x IFB, 5 Gbps, 10/100 km) and HCA3-O (12x IFB, 6 GBps, up to 150 m) are also supported
 HCA2-O and HCA2-O LR are NOT supported on z13 or future high end z Systems servers as per SOD. ISC-3 is not supported on z13 even if an I/O Drawer is carried forward for FICON Express8.
 z10, z9 EC, z9 BC, z890, z990: not supported in the same Parallel Sysplex or STP CTN with z13
 Note: The link data rates, in GBps or Gbps, do not represent link performance. The actual performance is dependent upon many factors including latency through the adapters, cable lengths, and the workload type.

Page 85 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
z13 Coupling Link Details at GA
Feature | Offered as of | Max # features | Maximum connections | Increments per feature | Purchase increments
PCIe-O ICA SR (GA1) | NB | 16 | 32 links (1) | 2 links | 2 links
HCA3-O LR (1x) | NB/CF | 16 | 64 links (2) | 4 links | 4 links
HCA3-O (12x) | NB/CF | 16 | 32 links | 2 links | 2 links
ICP (Standard) | NB/CF | N/A | 32 ICP CHPIDs, 16 ICP-ICP links | - | -

Notes: 1. Same physical number of links as 12X PSIFB on zEC12. 2. Same physical number of links as 1X PSIFB on zEC12.
NB = New Build, Migration Offering, z Systems Exchange Program; CF = Carry Forward.

Link physical details:

Short distance:
 HCA3-O fanout (12x IFB): 2 ports, IFB protocol, 6 GBps, 50 micron multimode fiber (2000 MHz-km OM3 @ 850 nm), SX light source, 24-fiber cable assembly, MTP connector (split Tx and Rx), maximum 150 meters, no repeated distance
 PCIe-O SR for Coupling (fanout in CPC drawer): 2 ports, PCIe Gen3 protocol, 8 GBps, SX light source, 24-fiber cable assembly, MTP connector (new):
 - 50 micron multimode fiber (4700 MHz-km OM4 @ 850 nm): maximum 150 meters, no repeated distance
 - 50 micron multimode fiber (2000 MHz-km OM3 @ 850 nm): maximum 100 meters, no repeated distance
Long distance:
 HCA3-O LR fanout (1x IFB): 4 ports, IFB protocol, 5 Gbps, 9 micron single mode fiber (1310 nm), LX light source, 1 fiber pair, LC Duplex connector, maximum 10 km (20 km RPQ), repeated distance 100 km
Page 86 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
24x PCIe Gen3 Cable OM3/OM4 50/125 µm MM Cabling
 24x PCIe Gen3 Cable required for new IBM Integrated Coupling Adapter (ICA SR)

 IBM qualified cables (part numbers on the next chart) can be ordered from Anixter or IBM Global Technology
- Cable distributor: Anixter, ibmcabling@anixter.com or 877-747-2830
- Cable suppliers:
  - Computer Crafts: http://www.computer-crafts.com/
  - TE Connectivity: http://www.te.com/
  - Fujikura: RBFiber@fujikura.com
 Fiber core: 50/125 µm MM
 Connector: single 24-fiber MPO
 Light source: SX laser
 Fiber bandwidth @ wavelength (OM4 recommended):
- 4700 MHz-km @ 850 nm (OM4) for 150 m maximum length (strongly recommended)
- 2000 MHz-km @ 850 nm (OM3) for 100 m maximum length
 For more information, refer to
IBM z Systems Planning for Fiber Optic Links (FICON/FCP, Coupling Links, and Open System
Adapters), GA23-1407, available in the Library section of Resource Link at
http://www.ibm.com/servers/resourcelink/svc03100.nsf?OpenDatabase

Page 87 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
24x PCIe Gen3 Cable Lengths OM3/OM4 50/125 µm MM Cabling
 IBM P/Ns for OM3/OM4 24-fiber cable assembly lengths (for ICA SR). All items are single 24-fiber cable assemblies with MPO-MPO connectors.

Fiber Optics MPO / 24 OM4 (E1):
IBM Cable P/N | Cable Length (m) | Cable Type
00JA687 | 8.0 m | OM4
00LU282 | 10.0 m | OM4
00LU283 | 13.0 m | OM4
00JA688 | 15.0 m | OM4
00JA689 | 20.0 m | OM4
00LU284 | 40.0 m | OM4
00LU285 | 80.0 m | OM4
00LU286 | 120.0 m | OM4
00LU287 | 150.0 m | OM4
00LU288 | Custom length < 150.0 m | OM4

Fiber Optics MPO / 24 OM3 (E1):
IBM Cable P/N | Cable Length (m) | Cable Type
00JJ548 | 8.0 m | OM3
00LU290 | 10.0 m | OM3
00LU291 | 13.0 m | OM3
00JJ549 | 15.0 m | OM3
00JJ550 | 20.0 m | OM3
00LU292 | 40.0 m | OM3
00LU293 | 80.0 m | OM3
00LU294 | 100.0 m | OM3
00LU295 | Custom length < 100.0 m | OM3
Page 88 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
Installation Planning
for z13

Page 89 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
z13 New Fill and Drain Tool (FDT) and Lift/Tool Ladder
New FDT: FC 3380
- Or order upgrade kit FC 3379 if a zEC12 FDT (FC 3378) will remain on site

New Universal Lift Tool/Ladder: FC 3105
- Or order upgrade kit FC 3103 if a zEC12 Universal Lift Tool/Ladder (FC 3359) will remain on site

System Fill Procedure
- Driven through Repair & Verify on the SE
- 15-20 minute procedure
- Initial setup includes:
  - Starting R&V
  - Gathering FDT, adapter kit, and BTA water solution
  - Plugging FDT into bulk power port on system

Approximate FDT unit dimensions:
 35 inches from floor to top of handle
 30 inches long
 22 inches wide

Page 90 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
z13: HMC Feature Code #0094, Display and Keyboard

HMC 1U System Unit:

HMC Display and Keyboard: IBM 1U 18.5-inch Standard Console

Note: The System unit and tray must be mounted in a customer rack in two adjacent 1U locations in
the ergonomic zone between 21U and 26U. Three C13 power receptacles are required, two
for the System Unit and one for the Display and Keyboard.

Page 91 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
z13 Hardware Management Console
 HMC System Unit and LIC Support
New Build: HMC FC 0092 deskside or FC 0094 rack mounted HMC (0-10 orderable per z13)
Carry Forward: HMCs FC 0091 or FC 0092 can be upgraded to control z13
zEnterprise Ensemble Primary and Alternate HMCs required to support z13
An identical pair is required (Two FC 0094, two FC 0092 or two FC 0091)
At Driver 22 HMC LIC Application level 2.13.0
No-charge ECAs TBD orderable by IBM service will be available to upgrade HMC FC 0092 or
FC 0091 features of another z Systems server to HMC Driver 22 LIC to support z13
 HMC Display Support for HMC FC 0092
22 inch flat panel FC 6096 (No change from zEC12)
 New Backup Options
Critical z13 HMC data: USB Storage and FTP/Secure FTP
Critical z13 SE data: SE/Alternate SE Hard Drive and FTP/Secure FTP
Older machine HMC and SE: USB storage only. New optional 32 GB USB stick offered if needed
 HMC application in Driver 22 will support z990 (N-4) and later only
 HMC 1000BASE-T LAN Switches No longer offered
FC 0070 10/100/1000BASE-T switches (Carry Forward Only)
Recommended Alternative: Compatible customer provided 1000BASE-T switches
 See the z13 Library on Resource Link for the latest publications
Installation Manual for Physical Planning for HMC FC 0091, 0092 and 0094 feature physical characteristics
Integrating the HMC Broadband RSF into your Enterprise
Hardware Management Console Operations Guide and Support Element Operations Guide

Page 92 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
z13 Physical Planning
 Extend / Maintain zEC12 Datacenter
Characteristics
2 frame base system (CEC, I/O, service system and
PP&C)
No significant increase in weight
Maintain floor tile cutouts for raised floor system
(same as z10 EC, z196, and zEC12)

 Better control of energy usage and improved


efficiency in your data center
Support for ASHRAE Class A2 datacenter
(up to 35°C and 80% relative humidity)
Upgraded radiator (air) cooling compared to
zEC12, with N+2 pumps and blowers
Upgraded water cooling compared to zEC12:
support for 24°C water (was 20°C on zEC12, 15°C on z196)
Same number of power cords (2 or 4) as equivalent
zEC12 configuration
Maintain 27.5 kW box max input power (same as z10
EC, z196, and zEC12)
Maintain DC input power capability, overhead I/O
cabling option, and overhead power options

Page 93 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
z13 Installation - Raised Floor options

 On a raised floor, either radiator (air) or water cooling is supported
 Line cords and I/O cables can be routed under the floor through raised-floor tailgates, or overhead
 Overhead Power requires the Overhead I/O feature as a co-requisite; I/O cables can then be routed up or down
 There is NO overhead support for cooling water supply and return
 Top Exit Power option: when selected for a raised floor, the Top Exit I/O feature is a co-requisite. With this configuration, I/O can be routed up through the I/O chimneys and also through the bottom of the frame using the raised floor tailgates.

Page 94 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
z13 Installation Non-Raised Floor option

[Diagram: non-raised floor z13 with new overhead ("top exit") line cords and overhead I/O cables]

 If z13 is NOT installed on a raised floor, overhead I/O, overhead power, and radiator (air) cooling options are required.

Water cooling is NOT supported. NO cables may exit at floor level.

Page 95 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
z13 New Installation Planning Consideration
 Rear Cover Adjustable Airflow
- The IBM z13 has a new rear door design that includes reversible rear door panels that can be installed two different ways to allow exhaust airflow to be directed upward or downward.
- This design addresses issues experienced by a few datacenters due to fixed downward exhaust airflow on older z Systems servers.
- Action: Advise IBM prior to the install of the desired airflow direction.

 Locking Doors
- In response to client requirements, IBM z13 has doors that include standard key locks compliant with industry standards. There are four locks, each provided with two keys. Locking the doors or leaving them unlocked is a client option.
- Action: Advise IBM of whether or not the doors are to be locked. It is a client responsibility to maintain custody of the keys and, if the doors are to be locked, to establish key control procedures, to ensure that the doors are unlocked promptly whenever required (24x7) for IBM service, and to ensure they are locked again after service is complete.

Page 96 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
z13 Requirements for Participation in a zEnterprise Ensemble
 Ensemble and Quality of Service (QoS) Features
Ensemble Feature: FC 0025 (always required for ensemble participation)
QoS selection: FC 0019 (Manage level), or both FC 0019 and FC 0020 (Automate level)
Note: 1. All nodes in the same ensemble MUST have the same QoS feature level
2. Priced Ensemble Blade and IFL Manage/Automate Features no longer exist on z13

 Intra-Node Management network (INMN) connectivity (Always required)


Two OSA-Express 1000BASE-T features to support two required OSM CHPIDs
(Two OSA-Express5S FC 0417 or OSA-Express4S FC 0408 CF only)
Two TYPE=OSM CHPIDs on the above, each cabled to a z13 internal System Control Hub (SCH)

 Intra-Ensemble Data Network (IEDN) connectivity with OSX (Optional)


(Recommended for zBX connectivity, but OSD can be used)
One or more pairs of OSA-Express 10GbE features to support pairs of OSX CHPIDs
(OSA-Express5S 10 GbE LR FC 0415 or OSA-Express4S 10 GbE LR FC 0406 CF only)
(OSA-Express5S 10 GbE SR FC 0416 or OSA-Express4S 10 GbE SR FC 0407 CF only)
Ordered to match LR or SR SFP optics features ordered for zBX
Cabled to the matching optics in the IEDN TOR switches in zBX

 Ensemble Primary and Alternate HMCs at Driver Level 22


Identical hardware for both: Two HMC FCs 0091 or 0092 (deskside) or 0094 (rack mount)
Note: At this driver level, the Ensemble HMCs will also support nodes including zEC12, zBC12, z196,
and z114 with or without managed zBX Model 3 or Model 2

Page 97 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
Software Support

Page 98 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
Operating System Support for z13
Currency is key to operating system support and exploitation of future servers
The following releases of operating systems will be supported on z13
(Please refer to PSP buckets for any required maintenance):

Operating System Supported levels


z/OS  z/OS V2.1 with PTFs (Exploitation)
 z/OS V1.13 with PTFs (Limited Exploitation)
 z/OS V1.12 with PTFs (Service support ended 9/30/2014)
 Note: TSS Service Extension for z/OS V1.12 defect support (offered 10/1/14 - 9/30/17)
will be required for z13 compatibility at GA
Linux on z Systems  SUSE SLES 12 and 11 (Later releases: GA support TBD by SUSE.)
 Red Hat RHEL 7 and 6 (Later releases: GA support TBD by Red Hat.)
z/VM  z/VM V6.3 with PTFs Exploitation support
 z/VM V6.2 with PTFs Compatibility plus Crypto Express5S support
 Note: z/VM 5.4 NOT Compatible even though still in service until 12/31/2016
z/VSE  z/VSE V5.2 with PTFs - Compatibility plus Crypto Express5S (up to 85 LPARs)
 z/VSE V5.1 with PTFs Compatibility (End of service 6/30/2016)
z/TPF  z/TPF V1.1 with PTFs Compatibility

Page 99 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
On Demand

Page 100 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
z13 Basics of Capacity on Demand
Upgrade to z13, installed On Demand records: zAAPs are converted to zIIPs and the record is migrated.
Upgrade to z13, staged On Demand records: records with zAAPs are rejected; others are migrated.

Capacity on Demand
 Permanent Upgrade (CIU)
 Temporary Upgrade
- Replacement Capacity
  - Capacity Backup (CBU)
  - Capacity for Planned Event (CPE)
- Billable Capacity (On/Off CoD)
  - Pre-paid
    - Using pre-paid unassigned capacity up to the limit of the HWM: no expiration. Capacity: MSU %, # engines
    - On/Off CoD with tokens: no expiration. Capacity tokens: MSU days, engine days. Capacity: MSU %, # engines
  - Post-paid
    - On/Off CoD: 180 days expiration. Capacity: MSU %, # engines
    - On/Off CoD with tokens: 180 days expiration. Capacity tokens: MSU days, engine days. Capacity: MSU %, # engines
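The token-based variants meter usage against a prepaid pool expressed in MSU days or engine days. As an illustration of the bookkeeping idea only (the pool size and usage figures are invented, and this is not IBM's actual metering or billing algorithm), MSU-day token draw-down can be sketched as:

```python
def drain_msu_day_tokens(token_pool, daily_extra_msus):
    """Decrement a prepaid On/Off CoD token pool (in MSU days) by each
    day's extra MSU usage; return (exhaustion_day, tokens_left), where
    exhaustion_day is None if the pool outlasts the usage list.
    Illustrative bookkeeping only."""
    remaining = token_pool
    for day, msus in enumerate(daily_extra_msus, start=1):
        remaining -= msus
        if remaining <= 0:
            return day, max(remaining, 0)
    return None, remaining

# A 1000 MSU-day pool drained at 300 extra MSUs/day is exhausted on day 4.
day, left = drain_msu_day_tokens(1000, [300, 300, 300, 300])
```

The same arithmetic applies to engine-day tokens, with engines substituted for MSUs.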
Page 101 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
Food for Thought

Page 102 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
PR/SM Partition Logical Processor and Memory Assignment

 System z9 EC to zEnterprise EC12


Memory Allocation Goal: Stripe across all the available books in the machine
Advantage: Exploit fast book interconnection; spread the memory controller work;
smooth performance variability
Processor Allocation Goal: Assign all logical processors to one book; packed into
chips of that book. Cooperate with operating system use of HiperDispatch
Advantage: Optimal shared cache usage
 z13
Memory Allocation Goal: Assign all memory in one drawer striped across the two
nodes.
Advantage: Lower latency memory access in drawer; smooth performance
variability across nodes in the drawer
Processor Allocation Goal: Assign all logical processors to one drawer; packed
into chips of that drawer. Cooperate with operating system use of HiperDispatch
Reality: Easy for any given partition. Complex optimization for multiple logical
partitions because some need to be split among drawers.

Page 103 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
z13 Processor and Memory Assignment and Optimization
 Default processor assignments by POR, MES adds, and On Demand activation:
Assign IFLs and ICFs to cores on chips in high drawers working down
Assign CPs and zIIPs to cores on chips in low drawers working up
Objective: Keep Linux Only, IBM zAware and Coupling Facility using IFLs and ICFs away from ESA/390 partitions
running z/OS on CPs and zIIPs and in different drawers if possible.
 PR/SM makes optimum available memory and logical processor assignment at activation
Logical Processors specified in the Image Profile, are assigned a core if Dedicated or a home drawer, node and chip
if Shared. Later, if it becomes a HiperDispatch Vertical High, a Shared Logical Processor is assigned a specific core.
Ideally assign all memory in one drawer with the processors if everything fits
With memory striped across drawers with processors if memory or processors must be split
 PR/SM optimizes resource assignment when triggered
Triggers: Available resources changes: partition activation or deactivation or significant processor entitlement changes,
dynamic memory increases or processor increases or decreases (e.g. by CBU) or MES change.
Examines partitions in priority order by the size of their processor entitlement (dedicated processor count or shared
processor pool allocation by weight) to determine priority for optimization
Changes logical processor home drawer/node/chip assignment
Moves processors to different chips, nodes, drawers (LPAR Dynamic PU Reassignment)
Relocates partition memory to active memory in a different drawer or drawers using the newly optimized Dynamic
Memory Relocation (DMR), also exploited by Enhanced Drawer Availability (EDA).
If available but inactive memory hardware is present (e.g. hardware driven by Flexible or Plan Ahead) in a drawer where
more active memory would help: activate it, reassign active partition memory to it, and deactivate the source memory
hardware, again using DMR.
(PR/SM can use all memory hardware but concurrently enables no more memory than the client has paid to use.)
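The stated allocation goal, keep each partition's logical processors in one drawer when they fit and split only when forced, can be pictured with a toy model. All names, sizes and the best-fit heuristic below are invented for illustration; real PR/SM also weighs nodes, chips, memory placement and HiperDispatch:

```python
def assign_home_drawers(partitions, drawers):
    """Toy model of the drawer-assignment goal: examine partitions in
    descending order of processor entitlement, give each one a single
    home drawer if its processors fit, and split across drawers only
    when no single drawer has room. Not PR/SM's actual algorithm."""
    free = dict(drawers)                    # drawer -> free cores
    plan = {}
    for name, need in sorted(partitions.items(), key=lambda p: -p[1]):
        fits = [d for d, n in free.items() if n >= need]
        if fits:
            # Best fit: the drawer that would be left with the least slack.
            best = min(fits, key=lambda d: free[d] - need)
            free[best] -= need
            plan[name] = [best]
        else:
            # Forced split: take cores from drawers, largest free first.
            plan[name] = []
            for d in sorted(free, key=lambda d: -free[d]):
                take = min(need, free[d])
                if take:
                    free[d] -= take
                    need -= take
                    plan[name].append(d)
                if need == 0:
                    break
    return plan

# Three partitions, two 24-core drawers: the largest is placed first.
plan = assign_home_drawers({"LP1": 12, "LP2": 20, "LP3": 10},
                           {"D0": 24, "D1": 24})
```

Processing partitions in entitlement order mirrors the priority rule above; the interesting complexity in the real machine is exactly the case where `plan[name]` ends up with more than one drawer.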

Page 104 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
SOD KVM

Page 105 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
Standardized virtualization for z Systems
SOD at announcement for KVM optimized for z Systems

 Expanded audience for Linux on z Systems
- KVM on z Systems will co-exist with z/VM
- Attracting new clients with in-house KVM skills
- Simplified startup with standard KVM interfaces
 Support of modernized open source KVM hypervisor for Linux
- Provisioning, mobility, memory over-commit
- Standard management and operational controls
- Simplicity and familiarity for Intel Linux users
 Optimized for z Systems scalability, performance, security and resiliency
- Standard software distribution from IBM
 Flexible integration to cloud offerings
- Standard use of storage and networking drivers (including SCSI disk)
- No proprietary agent management
- Off-the-shelf OpenStack and cloud drivers
- Standard enterprise monitoring and automation (i.e. GDPS)

[Diagram: a z Systems host with PR/SM running z/VM and KVM hypervisors side by side, each hosting Linux on z Systems guests, alongside z/OS partitions, all sharing z CPU, memory and I/O under Support Element control. "A new hypervisor choice for Linux on the mainframe."]
All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on
these Statements of General Direction is at the relying party's sole risk and will not create liability or obligation for IBM.
Page 106 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
Page 107 IBM z13 Overview for DFW System z User Group 2015 March 2015 IBM Corporation
