
SAP AG
IBU Utilities

Cookbook: IS Migration Performance Analysis and Improvement
Version 1.6 (Released)

Analysis and Improvement of IS-U Migration Performance

Copyright

© Copyright 2005 SAP AG. All rights reserved.

No part of this brochure may be reproduced or transmitted in any form or for any purpose without the express permission of SAP AG. The information contained herein may be changed without prior notice.

SAP AG further does not warrant the accuracy or completeness of the information, text, graphics, links, or other items contained within these materials. SAP AG shall not be liable for any special, indirect, incidental, or consequential damages, including without limitation, lost revenues or lost profits, which may result from the use of these materials. The information in this documentation is subject to change without notice and does not represent a commitment on the part of SAP AG for the future.

Some software products marketed by SAP AG and its distributors contain proprietary software components of other software vendors.

Microsoft®, WINDOWS®, NT®, EXCEL® and SQL-Server® are registered trademarks of Microsoft Corporation.

IBM®, DB2®, OS/2®, DB2/6000®, Parallel Sysplex®, MVS/ESA®, RS/6000®, AIX®, S/390®, AS/400®, OS/390®, and OS/400® are registered trademarks of IBM Corporation. OSF/Motif® is a registered trademark of Open Software Foundation. ORACLE® is a registered trademark of ORACLE Corporation, California, USA. INFORMIX®-OnLine for SAP is a registered trademark of Informix Software Incorporated. UNIX® and X/Open® are registered trademarks of SCO Santa Cruz Operation. ADABAS® is a registered trademark of Software AG. SAP®, R/2®, R/3®, RIVA®, ABAP/4®, SAP ArchiveLink®, SAPaccess®, SAPmail®, SAPoffice®, SAP-EDI®, R/3 Retail®, SAP EarlyWatch®, SAP Business Workflow®, ALE/WEB®, Team SAP®, BAPI®, Management Cockpit are registered or unregistered trademarks of SAP AG.

Performance analysis and runtime improvement of IS-U migration

About This Document

Topics

- Data migration (data conversion), especially for component IS-U/CCS
- Performance analysis of migration load programs
- Measures to improve the performance of migration

Authors

Thomas Sälinger (SAP AG)
Friedrich Keller (SAP AG)

History of Versions

Version 1.0 - (not released in English)

Version 1.1 (15.05.2001)
- First version

Version 1.2 (22.10.2001)
- Amount of buffered numbers
- New chapter: local update processing
- Switching off buffering of table EHAUISU
- New: notes 414348 and 416395
- New: note 381380
- Table buffering TE431, TE431T, TE493, TE551
- Reference to note 170989 when using DBMS Oracle
- New: note 306155
- New chapter regarding migration object PROPERTY
- New chapter regarding migration objects NOTE_CON/NOTE_DLC
- New function module as of release 4.62

Version 1.3 (14.02.2002)
- New: note 481199
- New: note 489911
- New: notes 481626, 489684, number range buffer ADRNR/ADRV
- New: notes 445399, 481199, 481005, 492467, 494498
- Hint/help report: access to EQUZ~H (note 456066)
- New chapter regarding migration object FACTS
- New chapter regarding migration object SECURITY
- New chapter regarding migration object BCONTACT
- New chapter regarding migration object PART_REL
- New chapter regarding migration object DEVGRP
- The modifications are now available as notes

Version 1.4 (14.01.2003)
- 7.7: Note 416395 is obsolete; new: note 570865
- 7.9: New: note 522412
- 7.18: New: structure of data objects in import file
- 7.19: New: notes 499935 and 581816
- 7.21: New: structure of data objects in import file
- 8: New paragraph: migration control parameter

Version 1.5 (23.08.2004)
Notes with a release date before 2003 have been removed:
- 7.7: PARTNER notes 335103, 414348, 570865
- 7.9: INST_MGMT notes 492276, 494498, 366323, 381380, 481005, 492467, 522412
- 7.10: MOVE_IN notes 353172, 306155
- 7.12: PAYMENT note 371331
- 7.13: BBP_MULT notes 371331, 386380
- 7.15: PROPERTY note 426007
- 7.18: FACTS note 488403; paragraphs BCONTACT and PART_REL removed
- 7.20: DEVGRP note 492276
- 7.21: METERREAD notes 535655, 538856
Further changes:
- 7.1: New chapter relevant for all migration objects
- 7.3: CONNOBJ, new note 730548
- 7.6: INSTLN, new note 733062
- 7.7: PARTNER, new notes 720223, 735229, 752926, 760709
- 7.8: ACCOUNT, new note 759574
- 7.9: INST_MGMT, new notes 753811 and 675576
- 7.15: PROPERTY, new note 629891
- 7.19: SECURITY, new note 756525
- 7.21: METERREAD, new notes 587154, 602338, 677592, 722028
- 7.22: New chapter regarding migration object CONSUMPT
- 7.23: New chapter regarding migration object POD
- 8.13: New migration control parameter POD_BILLPER_OFF

Version 1.6 (03.01.2006)
- 7.3: CONNOBJ, new notes 612815 and 805551
- 7.6: INSTLN, new notes 771560 and 771843
- 7.7: PARTNER, new notes 612815, 713101, 771952 and 805551
- 7.10: MOVE_IN, new note 771606
- 7.11: DOCUMENT, new notes 687967 and 775917
- 7.12: PAYMENT, new note 775917
- 7.18: FACTS, new notes 771560, 771845, 772261, 775917, 776162
- 7.9/7.21: INST_MGMT and METERREAD, buffer database tables
- 7.24: New chapter regarding migration object DEVICEMOD
- 7.25: New chapter regarding migration object DEVINFOREC


Contents:

1 Introduction
2 Data Migration Using the IS Migration Workbench
  2.1 The IS Migration Workbench
  2.2 Technical Basics of Data Migration
  2.3 Starting and Scheduling Import Runs
3 Monitoring Import Runs
  3.1 Migration Statistics
  3.2 Error Log
  3.3 Job Log
  3.4 The Job Overview
  3.5 The Process Overview
  3.6 The Lock Table Overview
  3.7 The Performance Trace
  3.8 Other Monitoring Functions
4 Basic Comments on Measuring Performance
  4.1 Throughput
  4.2 Runtime
  4.3 Possible Improvements
5 Basic Techniques for Performance Optimization
  5.1 Parallel Execution
    5.1.1 Parallel Execution – Configuration of Work Processes/Distribution of Load
    5.1.2 Parallel Execution – Import Files
    5.1.3 Parallel Execution – Access to Number Ranges
  5.2 Commit Buffering
  5.3 Switching-Off Functions
    5.3.1 Switching-Off Functions – Change Documents
    5.3.2 Switching-Off Functions – Statistics (Stock/Transaction Statistics, PMIS)
    5.3.3 Switching-Off Functions – Workflow Events
  5.4 Table Buffering
  5.5 Switching-on local update task processing
6 Influences of Hardware / Operating System / DBMS
  6.1 General Factors
  6.2 Optimum System and DBMS Settings
  6.3 Cost-Based Optimizer
7 Hints for Particular Migration Objects
  7.1 All Migration Objects
  7.2 DEVICE
  7.3 CONNOBJ
  7.4 PREMISE
  7.5 DEVLOC
  7.6 INSTLN
  7.7 PARTNER
  7.8 ACCOUNT / ACCOUNT2
  7.9 INST_MGMT/INSTALL
  7.10 MOVE_IN
  7.11 DOCUMENT
  7.12 PAYMENT
  7.13 BBP_MULT
  7.14 DEVICERATE
  7.15 PROPERTY
  7.16 NOTE_CON/NOTE_DLC
  7.17 RIVA migration
  7.18 FACTS
  7.19 SECURITY
  7.20 DEVGRP
  7.21 METERREAD
  7.22 CONSUMPT
  7.23 POD
  7.24 DEVICEMOD
  7.25 DEVINFOREC
8 Migration Control Parameter
  8.1 Basic Comments
  8.2 Parameter INST_BILLPER_OFF
  8.3 Parameter INST_CERT_CHECK_OFF
  8.4 Parameter INST_CERT_LOSS_OFF
  8.5 Parameter INST_CONS_BILL_OFF
  8.6 Parameter INST_CONS_OBJ_OFF
  8.7 Parameter INST_DISMANT_CH_OFF
  8.8 Parameter INST_POD_OBJ_OFF
  8.9 Parameter INST_POD_PRO_CH_OFF
  8.10 Parameter INST_PREDEF_EAST_OFF
  8.11 Parameter MOI_POD_SERV_CH_OFF
  8.12 Parameter READ_BILLPER_OFF
  8.13 Parameter POD_BILLPER_OFF
9 Organizational Measures
  9.1 Pre-Migration of Migration Objects Before Go-Live
    9.1.1 Regional Structure Data
    9.1.2 Device Master Data
    9.1.3 Technical Master Data
  9.2 Optimization of Parallel Actions
  9.3 Reduction of Data Volume
  9.4 Size of Import Files
  9.5 Restart Option
10 Performance Testing
  10.1 Load Tests in the Migration Project
  10.2 Performing a Performance Test
    10.2.1 Single Mode Test
    10.2.2 Simple Parallel Execution Test
    10.2.3 Import with Maximum Load
    10.2.4 Full Test Migration (Simulation of Production Load)
11 Other Functions in Releases 4.62 and 4.63
  11.1 Job Scheduler in Release 4.62
  11.2 Distributed Import in Release 4.63
12 Further Reading
13 Conclusion
A. Appendix
  A.1 Modification M1: Switching-Off Change Documents in Central Address Management
  A.2 Modification M2: Switching-Off Change Documents for SD Customers
  A.3 Modification M3: Switching-Off Change Documents for BP Tax Numbers
  A.4 Modification M4: Switching-Off Change Documents for Contract Accounts
  A.5 Modification M5: Switching-Off Stock Statistics for Devices
  A.6 Modification M6: Switching-Off Hierarchy Checks During Installation of Equipment
  A.7 Modification M7: Switching-Off 'Bypassing Buffer' on Installation of Devices of Gas Division
  A.8 Typical Throughput Figures
  A.9 Checklist


1 Introduction

During the implementation of FI-CA based solutions (e.g. IS-U/CCS), an unusually large data volume has to be extracted from one or more legacy systems, and moved to the database of the R/3 system.

Since the production system cannot tolerate an extended downtime (also unacceptable for customer care and contract accounting reasons), this data transfer is subject to high performance demands. It is therefore necessary to optimize all the involved resources. To achieve this, the migration project must conduct tests to study the expected runtime of the data transfer so that, if the results are unsatisfactory, performance-improving measures can be examined and applied.

This document is intended to assist you with migration, by providing an insight into the fundamentals of the performance of a data transfer executed with the standard tool - the IS Migration Workbench. The information, recommendations, and results are mainly based on the experiences of implementation projects running release 4.62 and higher.

To understand this document, you must be familiar with the basic functionality and the main terminology of the FI-CA component, the related industry solution and the IS Migration Workbench. SAP Basis knowledge is also required.

The document describes how migration and the load programs work, and then focuses on how to increase the load performance using the Migration Workbench, the migration objects, and the system functions. Other units give recommendations on how to prepare load performance tests, monitor and check the data transfer, and evaluate the results.

An appendix contains code modifications that can improve performance when loading large data volumes. Additionally, the appendix offers a collection of typical throughput figures for the most important migration objects, and a checklist (in case you encounter performance problems).


2 Data Migration Using the IS Migration Workbench

This unit provides a brief overview of the technical basics used in the data transfer for FI-CA based SAP solutions.

2.1 The IS Migration Workbench

SAP's standard tool for executing the data transfer (also called data migration) of FI-CA based solutions is the IS Migration Workbench. This program suite covers all the functions necessary for the preparation and execution of the data transfer.

The Migration Workbench contains a user manual describing the most important terms, processes, and functions. Additionally, you can attend a Migration workshop.

Alongside the functions configured by the user, the Migration Workbench hosts the administration of the import runs. This document does not describe any other function of the Migration Workbench, besides the data import itself.

2.2 Technical Basics of Data Migration

Data transfer using the IS Migration Workbench works on the basis of Business Objects. This means that data is not inserted into the database table by table. Instead, all table changes needed to create an instance of a business object (for example one business partner) are executed together.

This logical unit is called a Migration Object. Data is created in the database with the help of function modules that share most of their program code with the corresponding dialog function. This ensures maximum quality of the consistency checks during data creation.

The technology used in these modules is Direct Input. This avoids screen processing, unlike Batch-Input (or BDC, Batch Data Communication), and results in improved performance.

The load report (Import program) is not shipped. SAP's standard shipment only contains several include reports. Instead, the load report is generated in the customer's system once a specific migration object has been configured (to be more precise, the ABAP source code is recreated and activated). At certain points in the load report, you can include calls to function modules, subroutines, or simply to a series of ABAP statements. These points are called (generation) time points; the statements and modules are called events. This technique is also frequently used when resolving performance problems.

All migration objects are designed to import data in a parallel execution, which makes full use of all available system resources, and reduces the time needed for the data transfer as much as possible. This does not only concern parallel execution of the import of one migration object, but also the imports of different migration objects (restricted only by logical dependencies and locks). Real exceptions are only effective for some objects in the device management component. These can only be imported in a strictly chronological order.¹

¹ This restriction mainly applies to migration objects INST_MGMT (or INSTALL), DEVICEMOD, DEVGRP, DEVICEREL and METERREAD. The necessity for chronological (and therefore sequential) processing only holds for the data processed by several of these migration objects. All device installations without any other dependency can still be imported in parallel execution.


2.3 Starting and Scheduling Import Runs

An import run is always executed on the basis of one sequential import file, which usually resides on the file system of the application server. To avoid multiple physical storage of these (usually very large) files, the directory with the import files exists only once and - on the level of the operating system - is shared ('mount' or 'share') between all application servers.

The import screen is called using menu option Mig.Object → Import in the Migration Workbench. Here you can start import runs by supplying parameters such as the import file name. An import run can be started in dialog mode (using the current work process) or in background mode (using a BGD work process). In background mode, it is possible to schedule the job to start immediately, or at a later point.

Additionally, there is a function for scheduling several import runs as background jobs simultaneously (use menu option Utilities → Schedule Jobs on the import screen). In this function, you get a list of all available import files for the selected migration object. In order to schedule the import, you have to select one or more files to be imported, enter the start date/time, and an application server for the import execution. This function enables you to start or schedule import runs in background mode on several application servers with a single push of a button, and without entering all the individual file names.

In all the import functions in the IS Migration Workbench, a selection variant for the load report is automatically created. This contains all the necessary start parameters. If an import run is to be started from outside of the IS Migration Workbench, this variant must be created manually. To avoid incorrect parameters, start all import runs using the functions offered by the IS Migration Workbench.

You can find more functions for starting migration import runs in unit 11.


3 Monitoring Import Runs

This unit provides an overview of the functions that help you to monitor and analyze running and already finished import runs. The first three functions are migration monitors; all other functions deal with technical monitoring.

3.1 Migration Statistics

Every import run started for a migration object is accompanied by a migration statistic (record). This statistic record makes it possible to track the progress of one specific import run. Statistic records are technically implemented as records in the TEMSTATISTIK database table.

You can access the monitor for these records in the Migration Workbench. On the import screen, an object-related overview is displayed. A general overview is available in the main menu. This function is the central tool for monitoring import runs and is also available by running the ABAP report REMIG007 separately.

Here you can find figures that are important for analyzing import runs, such as the number of imported or rejected records, and the recent throughput, which is given in data objects per hour (useful in large volume migrations, see unit 4.1). The migration statistics display is the central place to check during the execution of an import run, and after it has finished.

The regular view of the statistics records shows one line of information per import run. You can use the expand icon for some or all statistics records that you selected, to display more detailed information, such as the name of the imported sequential file. In this detailed view the functions 'Display Job Log' (import run in background) and 'Display Error Log' (for finished import runs) are also available.

3.2 Error Log

All messages of an import run are collected in the Error Log. Technically speaking, this log is implemented using the application log (which is the common logging function in SAP).

During an import run, the error log receives all error messages raised by the load report that led to the rejection of the transferred data objects. If the error is detected in the service function module used, the original error message is accompanied by a special migration message stating which data record is in error. This log is only available once the import run has finished. We recommend you check the error log after each import run.

You can display the error log in different ways:

1. On the import screen, there is a push button 'Display Log' to view logs of finished import runs.
2. The log is available from the detailed view of the migration statistics (see previous unit).
3. The standard display of the application log is called using transaction SLG1 or menu path Utilities Industry → Tools → Application Log → Analyze. Logs for migration import runs are written with object IUMI (Migration).

3.3 Job Log

Technical information about migration import runs executed in background work processes is recorded in the job log. Here you can find information about the parameters used for this run. If a background import run is terminated abnormally, you can find the canceling reason in the job log. We recommend that you check the job log after the end of each background import run.

The job logs can be selected and accessed via the common Job Overview function (transaction SM37, see unit 3.4). An easier way is to access this information directly from the detailed view of the migration statistics (see unit 3.1).

As mentioned before, the error log is not available before the import run has ended. You can, however, copy the error messages of the run to the job log, by setting a flag in the migration user parameters. This enables you to view the errors as they appear, and you are therefore able to react faster in cases of errors. This is especially important during the productive load.


3.4 The Job Overview

The job overview (menu option System → Services → Jobs → Job overview or transaction SM37) offers status information for all the jobs in the system, regardless of whether they are currently running, finished or scheduled to run later. You can restrict the selection by entering several criteria.

For more information on the administration and monitoring of background jobs in an R/3 system, consult the manuals for R/3 administration or the R/3 documentation under Basis → Computing Center Management System → Background Processing.

The job logs of an import run are also available in the migration statistics.

3.5 The Process Overview

To obtain an overview of the background processes that are currently active, you can use the work process overview function. Use transactions SM66 (global view, all application servers) and SM50 (local view) to call this up.

Here, you can determine how many, and which work processes of your system are currently in use. There is also detailed information available, such as the number of database accesses, CPU time consumed, and memory allocation. Additionally, you can see if there are work processes waiting for system resources (such as other work processes, network, and semaphores).

More information is available in the documentation under Basis → Computing Center Management System → R/3 System Monitoring.

3.6 The Lock Table Overview

With transaction DB01 you can view a list of all database locks which are currently active. This list contains information about the locked table, the locking work process as well as the waiting work processes.

You can use this analysis to detect lock situations resulting from write locks generated by import runs or other processes in the system.

3.7 The Performance Trace

The Performance Trace (menu option System → Utilities → Performance Trace or transaction ST05) offers different functions for tracing, and thorough examination of the system behavior during database accesses, calls to the enqueue server, and remote function calls.

Here you can analyze the quality of the database accesses of an import process. If a SQL statement shows an excessively high execution time, you can use the EXPLAIN function to show the execution plan of this statement on the database.

You can also estimate whether system load and performance could be improved by buffering tables that are currently accessed physically (see also unit 5.4).

For further information, see the documentation under Basis → ABAP Workbench → ABAP Workbench: Tools → Performance Trace.

3.8 Other Monitoring Functions

In addition to the functions mentioned, other monitoring functions may and should be used to check figures, such as CPU load, I/O actions, network response times, and to spot possible bottlenecks. These functions include other SAP transactions (e.g. transaction ST06) as well as other programs at operating system or database management level.


4 Basic Comments on Measuring Performance

In order to measure and compare the performance of migration processes, we need an index that uniquely describes this quantity.

4.1 Throughput

First of all, we have to define a quantity that describes the action of migration: a Data Object is an instance of a business object (such as a business partner or a financial document) or a fully processed business process (such as the installation of a device in a device location, or the allocation of a relationship between devices), which is created or processed during an import run. A data object usually consists of more than one physical record on the import file.

Using this we define Throughput as the number of data objects per time interval (the unit of measurement being obj/hr [objects per hour]). The throughput can be calculated using the start and end times of an import run, together with the number of imported data objects. This figure is also listed in the migration statistics display (see unit 3.1).

The import of one particular migration object has a more or less constant performance or throughput. This is due to the fact that most migration objects create data objects that are uniform in their database representation, and consist of roughly the same number of database table rows. As an example, all premises are represented by exactly two rows on the database. Therefore this document considers the throughput to be a constant for a particular migration object in a particular migration project.

  • 4.2 Runtime

In a migration project, the amount of data and the structure of the data in the legacy system define how many data objects have to be created with the help of a migration object. To import this number n of data objects for a single migration object with throughput d obj/hr, you need (using one single job) a Runtime of:

l = n / d hr.

Example:

Importing business partners (migration object PARTNER) runs at a throughput of say d = 10.000 obj/hr. There is a file with 100.000 data objects to load, therefore we expect the required runtime to be:

l = 100.000/10.000 obj/(obj/hr) = 10 hr.

If the migration project has to load 1 million data objects, a runtime of approximately l = 100 hr would be needed. However, since the allowed downtime of the production environment is usually restricted to a day or two, this is clearly unacceptable. Looking at the previous example, this implies that the runtime l must be reduced.
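For illustration, the runtime formula can be checked with a few lines of code (a plain Python sketch; the function name is ours and is not part of the Migration Workbench):

```python
def runtime_hours(n_objects, throughput_per_hour):
    """Runtime l = n / d of a single import job, in hours."""
    return n_objects / throughput_per_hour

# 100.000 business partners at 10.000 obj/hr -> 10 hours
print(runtime_hours(100_000, 10_000))    # 10.0

# 1 million data objects at the same throughput -> 100 hours
print(runtime_hours(1_000_000, 10_000))  # 100.0
```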

  • 4.3 Possible Improvements

Assuming that more than one process simultaneously imports data for one migration object, the runtime can be drastically reduced. If k processes are importing data at throughput d each, the total throughput is D = k * d obj/hr.

Therefore, the total runtime is calculated

L = n / D = n / (k * d) hr.

Looking at this formula it is obvious that an improvement of total runtime is possible by:

An increase of the throughput d of the used migration object

An increase of the number k of parallel import jobs

Consequently we should try to achieve an optimum throughput for one migration object, and at the same time distribute the data import to as many parallel processes as possible.

The first demand can be satisfied by settings in both the Migration Workbench and the system (and database) parameters. The second consideration requires a highly parallel execution of the import process. This is the main focus of the next unit.

  • 5 Basic Techniques for Performance Optimization

    • 5.1 Parallel Execution

As mentioned in the previous unit, the technical implementation of the IS Migration Workbench (and the generated load reports) allows for parallel execution of the data import. You can run load reports in parallel for both the same migration object and for different migration objects. As an example, you could simultaneously load 5 files for migration object PARTNER and 5 files for migration object CONNOBJ.

With the example from the previous unit, you could load 1 million data objects in 100 parallel background jobs. Theoretically, this would reduce the total runtime to 1.000.000 / (100 * 10.000) = 1 hr. This is of course only possible in exceptional cases, since only very few (extremely powerful) hardware configurations can cope with 100 parallel jobs.
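The same back-of-the-envelope calculation, extended by the number of parallel jobs, can be sketched as follows (an illustrative Python helper for L = n / (k * d), not SAP code):

```python
def total_runtime_hours(n_objects, k_jobs, throughput_per_job):
    """Total runtime L = n / (k * d) with k parallel import jobs."""
    return n_objects / (k_jobs * throughput_per_job)

# 1 million objects, 100 parallel jobs at 10.000 obj/hr each -> 1 hour
print(total_runtime_hours(1_000_000, 100, 10_000))  # 1.0

# A more realistic 20 parallel jobs still reduces 100 hours to 5
print(total_runtime_hours(1_000_000, 20, 10_000))   # 5.0
```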

For parallel execution to take place, you must meet the following prerequisites:

There is a sufficient number of BGD work processes.

There is a sufficient number of import files available that do not overlap (for example, one data object or legacy system key is only contained in one file) and are complete per data object.

The involved number ranges are buffered on the application servers (note: this is not always possible for FI documents).

The next units deal with these prerequisites.

  • 5.1.1 Parallel Execution – Configuration of Work Processes/Distribution of Load

To run a certain number of jobs in parallel you have to configure an appropriate number of BGD work processes. Some of the import processes, however, are also implemented to use other work process types, such as UPD (Asynchronous Update) and UP2 (Deferred Asynchronous Update).

You can change the input behavior by switching off some functions that use UPD/UP2 work processes (for details see unit 5.3). If this is not possible, ensure that you have configured enough work processes of this type. In this case a good ratio for work processes BGD:UPD:UP2 is approximately 1:1:1 (or even 1:1:2). If the functions described in unit 5.3 can be switched off, the number of required UP2 work processes can be reduced. A test will determine the ratio best suited for your particular project.

Additional dialog (DIA) work processes are also required for other tasks, such as monitoring, logon requests, and RFC handling.

Every hardware configuration has a maximum usable load for its system environment. If, for example, you add more and more parallel jobs on a two-CPU application server, you will quickly overload the system. See unit 10.2.3 for an initial estimate of the maximum number of jobs that can be started on one R/3 instance.

If your system consists of more than one application server, but you only start import runs on one, the active server will be highly loaded (or even overloaded), while all other servers remain idle. For this reason it is extremely important to distribute the load evenly across your system.

  • 5.1.2 Parallel Execution – Import Files

The technical layout of the migration requires the unique allocation of one sequential import file to one import run (for exceptions see unit 11.2). Therefore, it is essential to split the legacy system data objects into the necessary number of import files before the import runs start. You can do this during the extraction process in the legacy system, or in an intermediate step. If you cannot split the legacy system data objects before the import (during extraction in the legacy system, for example), you can use a function in the Migration Workbench. This function is available on the import screen using menu path Utilities → Break down migration file. By entering a number of data objects per file and a generic name for the resulting files, the program splits the given import file into smaller files of identical size.

  • 5.1.3 Parallel Execution – Access to Number Ranges

Nearly all migration objects access one or more number range intervals at least once per created data object (one business partner, for example). This results in an exclusive write lock on database table NRIV, which is only removed at the end of the LUW (logical unit of work). A second access to this entry from another work process is given a wait status and is only executed when the first process releases the lock.

When using highly parallel execution, this leads to a bottleneck situation and finally to complete serial execution of the import.

The SAP R/3 system offers number range buffering as a remedy for this problem. This function creates a buffer on every application server that acts as a local number range interval. An asynchronous process accesses the number range interval on the database and puts a certain number of entries into the main memory of the application server. Here the demand for a new number no longer results in a database lock, since semaphores control access to the numbers. In this way, wait periods are effectively avoided. There is a short delay only when the buffer is refilled (because all numbers have been drawn) and an additional DIA work process reads database table NRIV in a separate LUW.

Action:

You can use number range transaction SNRO to switch on buffering by entering the name of the number range object. We recommend a buffer size between 100 and 1000 numbers, depending on how many parallel processes access this buffer simultaneously. When using a high number of parallel processes, this value should be set to 1000. You should always use ‘buffering in main memory’ as the buffering method.

Note:

All numbers in the main memory are lost if the server is restarted. If you are required to assign numbers without gaps (for FI documents, for example), you cannot use this type of buffering. You should always discuss the procedure with the auditors, because the last used number of the range will not exactly match the number of master data records after migration.

  • 5.2 Commit Buffering

A commit work usually concludes the LUW after one data object has been imported. Some migration objects, however, allow for an extension of the LUW to more than one data object.

This function is called commit buffering. When starting an import run, you can activate commit buffering by entering the number of data objects to be included in one LUW. For a first test, this number should range from 20 to 100. The optimum setting has to be determined by the project for each permitted migration object. As a result, the execution time of the database statement Commit is distributed across n data objects, which in turn provides a better throughput.

Example:

A migration object has a throughput of 36.000 obj/hr. This means that on average, one data object is created in 100 ms. If we assume that the execution of one commit takes an average of 30 ms, the “real” creation time is only 70 ms. Therefore, an optimum throughput of 51.400 obj/hr, which is 42% higher than the original throughput, should theoretically be achievable. Realistically, using commit buffering we can achieve a creation time of 75 ms, which corresponds to a throughput of about 48.000 obj/hr, a 33% improvement.
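The arithmetic of this example can be retraced with a short sketch (illustrative Python; the 30 ms commit time is the assumption made above):

```python
def throughput_obj_per_hour(ms_per_object):
    """Convert an average processing time per object into obj/hr."""
    return 3_600_000 / ms_per_object  # 3.600.000 ms in one hour

def amortized_ms(create_ms, commit_ms, objects_per_commit):
    """Average time per object when one commit spans several objects."""
    return create_ms + commit_ms / objects_per_commit

print(round(throughput_obj_per_hour(100)))  # 36000 - no buffering (70 ms create + 30 ms commit)
print(round(throughput_obj_per_hour(70)))   # 51429 - theoretical optimum, commit fully amortized
print(round(throughput_obj_per_hour(75)))   # 48000 - the realistic figure quoted above
# With a buffer of 100 objects the commit adds only 0,3 ms per object:
print(amortized_ms(70, 30, 100))            # 70.3
```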

In addition, because the migration control information (KSM table) is written to the database by array insert, the throughput increases even more. Experience shows that, depending on the migration object, commit buffering can increase the throughput of each job by about 30 – 80%. A further positive aspect is that calls to the update processing (UPD) are written and processed as a whole, which partially relieves the system and database of load.

Note:

Since all database locks set by the process are held for a considerably longer time, you must ensure that the involved number ranges are buffered; otherwise all but one import run will be given a wait status.

If you are using highly parallel execution, you should not set too high a number, as this could overload the database rollback segments and cause jobs to be cancelled by the system.

  • 5.3 Switching-Off Functions

During the import, some actions are executed in the service modules that are not necessary during an initial data transfer. Examples are: writing change documents, updating application statistics, triggering workflow events, and checks that are only necessary during productive data processing.

  • 5.3.1 Switching-Off Functions – Change Documents

From a functional point of view, change documents are not really necessary for the first-time creation of an object instance. Administration data providing the name of the user and the date of creation is always available.

From a technical point of view, this process generates a high additional load on the system and the database. This is due to the fact that change documents are usually written asynchronously in UPD or UP2 work processes. The initiating process writes entries to database tables VBHDR, VBMOD and VBDATA. These entries are read by the UPD process after the initiating process triggers the end of the LUW (logical unit of work). The UPD process is then executed, the VB* table entries are deleted again, and a commit work is triggered.

If a lot of parallel import runs request such asynchronous tasks, both database and application servers are additionally loaded with extra SQL processing and additional work process usage. Usually this results in dramatic throughput deterioration, which only becomes visible when importing data in parallel execution.

Actions to switch off this function:

During migration, the creation of change documents is switched off for almost all of the master data objects.

For business objects belonging to other applications this is not always possible in the standard shipment. For these objects, modifications of the standard are available. These can be applied temporarily (only for the time the migration is running). For more details see unit 7 and the examples given in the appendix section.

  • 5.3.2 Switching-Off Functions – Statistics (Stock/Transaction Statistics, PMIS)

A typical characteristic of application statistics is that they represent a condensed reflection of the system's data, sorted and compressed according to several criteria and attributes. Technically, they are implemented as just a few rows of a database table. This implies that the import runs should never update this information on a single-record basis, as this could result in database locks and wait situations.

Another reason why it is necessary to switch off this function is that the update is usually executed asynchronously using UPD or UP2 work processes. Because of this, the same considerations apply as described in the last unit.

Actions to switch off this function:

For most master data objects, the statistics update (stock and transaction statistics) is switched off during migration and no action is necessary. The only exception is the master data object device, which requires a temporary modification (only for the time the migration is running, see appendix A.5). This is a recommended action.

The update process for the object statistics in the PM component can be deactivated centrally in the IMG (see note 168960 for the procedure). This is highly recommended since it heavily affects the throughput of the migration of devices, connection objects, device locations and device installations.

You can usually recreate statistics effectively by running a separate mass process after the data import. As an example, you can use report REUSTAUF for PMI statistics.

For an update of the object statistics in PMIS (PM information system) you can use report RIPMS001 (see note 112841).

  • 5.3.3 Switching-Off Functions – Workflow Events

Workflow events are usually not necessary during initial data transfer since they indicate processes that have already been handled in the legacy system (such as “there is a new business partner Smith”). Without deactivation, workflows could be started that should not be run this way.

It is also necessary to switch off this function because the update is usually executed asynchronously using UPD or UP2 work processes. Because of this, the same considerations apply as described in the last two units.

Actions to switch off this function:

The creation of events during migration is completely switched off for master data objects. For objects of other components this is not always possible. No modifications are currently available for temporary deactivation.

It is important that you switch off the event log (see note 46358) in order to reduce the number of RFC calls.

  • 5.4 Table Buffering

Access to database table entries requires a certain amount of time during which the requesting work process waits for the DBMS to execute the request and return the results. The typical time required for this action is about 10ms. If the entry is still available in the database buffers, this value is reduced to about 1ms.

The SAP system offers the possibility to buffer table entries locally on the application servers. The access time to these buffers is about 0,1ms. If 1 million data objects have to be imported and one particular table row is accessed once per data object, the required time is 1.000.000 * 10ms = 10.000 s = 2,7 h. If this entry exists in the SAP table buffers, the total access time is reduced to 100 s. If a large number of unbuffered tables are read, the runtime saved by buffering can be considerable.
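The savings can be verified with a quick calculation (a Python sketch; the access times are the rough averages given above, expressed in microseconds to keep the arithmetic exact):

```python
ACCESSES = 1_000_000  # one read of the same table row per imported data object

def total_access_seconds(us_per_access, accesses=ACCESSES):
    """Total time spent on table accesses, given a single access time in microseconds."""
    return us_per_access * accesses // 1_000_000

print(total_access_seconds(10_000))  # 10000 s (about 2,7 h) - physical database access (10 ms)
print(total_access_seconds(1_000))   # 1000 s - row found in the database buffers (1 ms)
print(total_access_seconds(100))     # 100 s - row in the local SAP table buffer (0,1 ms)
```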

As a rule, customizing tables are already shipped by SAP to be buffered. During data migration, however, it can be helpful and necessary to buffer some additional tables.

Simple rule:

A table should be buffered, if:

It is accessed frequently

Only a small buffer space is required (normally < 1 MB)

Access is mainly executed by primary key (no secondary index in R/3 table buffers)

The table is never or only very rarely changed (change accesses < 1% of read accesses, or even < 0,1% for tables bigger than 1 MB)

In the context of migration, this means that master and transactional data (big/many changes) should never be buffered and that Customizing data (small/no changes during the load) should be fully buffered.

Exceptions apply to some master data tables that are similar to Customizing information. As an example, the table ETYP of device categories usually holds about 100-1000 records, which are never changed during the migration process but are heavily accessed (especially during device installation). If the buffer space allows it, this table should be fully buffered.

You should always check critically whether the performance gained by buffering tables is negated by overflowing buffers, which result in frequent physical reloads of the buffer areas.
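The rule of thumb above can be expressed as a small helper (a hypothetical Python function; the thresholds come from the list above, the name is ours):

```python
def should_buffer(reads, changes, size_mb, primary_key_access=True):
    """Rough check whether a table is a candidate for full buffering."""
    if not primary_key_access or reads == 0:
        return False
    change_ratio = changes / reads
    if size_mb > 1:
        # tables bigger than 1 MB: tolerate almost no changes (< 0,1% of reads)
        return change_ratio < 0.001
    # small tables: change accesses below 1% of read accesses
    return change_ratio < 0.01

# Customizing-like table (e.g. device categories): tiny, read-only -> buffer it
print(should_buffer(reads=500_000, changes=0, size_mb=0.2))       # True
# Master data table: constantly inserted into during the load -> never buffer
print(should_buffer(reads=500_000, changes=400_000, size_mb=50))  # False
```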

Actions to switch on table buffering:

Go to the dictionary definition of the table and display the “Technical Settings”. There you can choose between the different buffering types. The recommended type is “fully buffered”, unless the amount of data is too high and a partial buffering method is preferable.

  • 5.5 Switching-On Local Update Task Processing

Several migration objects write part of their data to the database using update processing (UPD/UP2), despite all the functions above being switched off. Here the same considerations apply as described in the previous units.

One possibility to reduce this load is to switch on local update task processing, where the update function module is not called asynchronously but runs in the same work process as the caller during commit processing.

The main advantage of this setting is the reduction of all SQL activity for writing, reading and deleting the update task tables. Additionally, fewer work processes are involved.

When using the local update task mechanism, the update requests are processed in a separate roll area, so there is an explicit change of the roll area for every call to an update module. This additional roll-in/roll-out leads to a higher memory requirement, especially under highly parallel load. This can result in physical disk accesses slowing down the processes massively whenever the required storage is no longer available in main memory. At the same time, the runtime needed for one single data object increases by the time needed for the table updates that are usually carried out in update processing. The result is a slightly lower throughput per job. But as the overall system load is lower, and hence a higher number of jobs can be used, this can be ignored.

Actions to switch on this function:

To switch on local update task processing, see note 436715.

Please compare the runtimes and system load before and after switching on local update task processing, since a possible performance improvement is highly dependent on the migration object and system environment.

  • 6 Influences of Hardware / Operating System / DBMS

    • 6.1 General Factors

This document cannot cover all aspects and possible tuning opportunities regarding the combination of hardware/operating system/DBMS. This would require recommendations for all possible configurations, which is beyond the scope of this document. It is, however, necessary to configure the system hardware “optimally”.

All considerations in this document are based on the assumption that, even with higher degrees of parallel execution, there are no negative effects or bottlenecks caused by the I/O system, the storage disks or the network.

Internal influences such as size of buffer and memory areas also have to be optimized in order to avoid buffers being swapped and memory areas being paged. Both situations put a high additional load on the CPUs, which is a waste of runtime and system resources.

  • 6.2 Optimum System and DBMS Settings

Considerations similar to those for system configuration exist for the database.

To configure your database you should consult the following documents in the IS-U Technical Informa- tion Center (TIC):

'Fundamentals of Database Layout'

'Database Layout for R/3 Installations with Oracle'

'Database Layout for R/3 Installations with Informix'

The TIC can be accessed via SAPNet (http://service.sap.com/) under alias isu_tic.

More information regarding sizing, database layout, and system parameters is also available in SAPNet (under alias performance).

  • 6.3 Cost-Based Optimizer

A database management system (DBMS) accesses data following a strategy determined by the optimizer. The main goal of the optimizer is to find the most effective way of accessing the data requested by an SQL statement. The access strategy used during execution of an SQL statement depends on information such as:

The accessed table (or, in the case of view/join accesses, all tables involved)

The fields used in the WHERE clause of the SQL statement

Indexes available for the accessed table

One optimizer operating with this information is called the Rule-Based Optimizer (RBO).

As of R/3 Release 4.0, all DBMS that can be operated together with R/3 use the Cost-Based Optimizer (CBO). This program calculates the costs of different access strategies and chooses the most effective one. To calculate the access costs of one particular strategy, the CBO needs statistical information about the database tables and their indexes, such as:

Number of table (and index) rows and number of allocated table (and index) blocks

Number of different values for each table column

Before the initial load starts, this statistical information does not yet exist, or rather it has the value 0. Therefore, it is not possible to perform a cost evaluation for the different strategies, and the CBO determines a non-optimal access strategy.

With the statistical information showing that the table contains no entries and that all existing indexes are of poor quality, the supposedly optimum strategy is a “full table scan” (a fully sequential access). For the first records read from the table, this does not cause a problem. However, as the gap between the real situation and the statistical information gradually increases, the execution plan continues to worsen. This results in a rapid increase of access times and a dramatic deterioration of the throughput.

In such a situation, the time necessary for executing one single SQL statement increases to 1-2 seconds. This results in the throughput dropping to values of 1.000 obj/hr (or below). A typical example is database view V_EGER_H when loading device installations with migration object INST_MGMT or INSTALL.

In order for the CBO to always calculate the optimum access path, it is essential that all database statistics for the involved tables are up-to-date.

During migration it does not matter if the numbers in the statistics are exact. It is more important that the statistical information “table is not empty”, representing a certain value distribution, exists at all. Sometimes even the opposite strategy can help: in the case of Oracle, for example, explicitly deleting the statistical information of a table forces the RBO to be used instead of the CBO.

Action:

When testing the data import, you should watch for a sudden drop in throughput and immediately execute a performance trace using transaction ST05. In the trace list you can easily detect problematic SQL statements (highlighted in red). Using the “Explain” function, you can display the execution plan on the database. If an existing index is clearly not used in this execution plan, you must update the database statistics for the involved table(s).

You can use transaction DB20 to update the statistics (documentation is available in the system documentation). It is usually sufficient to execute the update with low accuracy, which only analyzes a sample of the table rows. In exceptional cases the update with high accuracy (complete and exact analysis of all table rows) must be used. For some DBMS you can even start the statistics update in the EXPLAIN function of the performance trace list. This action can usually be executed while import runs are still active.

If, however, the target system already contains a client with data, you only have to update the statistics once before loading, so that no more problems related to a bad access strategy occur.

  • 7 Hints for Particular Migration Objects

This unit deals with possible improvements for those migration objects that usually have the highest volume of data. Listed are possible and recommended actions that are necessary for parallel execution or that lead to an improvement in performance.

Please also read the migration documentation for each migration object, as it may contain further relevant information (for example, the database tables normally updated during import).

  • 7.1 All Migration Objects

For optimum performance of all migration objects, proceed as follows:

Check if note 752943 (“Performance: Update migration statistics”) is applied.

Check if note 713659 (“IS migration: Performance increase with access to KSM”) is applied.

Check if note 759426 (“Commit buffering does not perform with migration”) is applied.

  • 7.2 DEVICE

For optimum performance of migration object DEVICE proceed as follows:

Buffer number ranges EQUIP_NR, ILOA and OBJNR (necessary for parallel execution)

Switch off PMIS updating (recommended)

Use commit buffering (migration documentation up to Release 4.63 is incorrect)

Insert modification M5 (recommended)

Switch off change documents and workflow events for equipment category I (IS-U devices) as described in note 481199.

  • 7.3 CONNOBJ

For optimum performance of migration object CONNOBJ proceed as follows:

Buffer number ranges ISU_EHAU, ADRNR, ADRV, ILOA and OBJNR (necessary for parallel execution)

Switch off PMIS updating (recommended)

Apply modification M1 (recommended)

Update database statistics for regional structure tables, for example, tables ADRSTREET and ADRSTREETT that build the view V_ADRSTRT

Switch off table buffering for EHAUISU

Switch off change documents (note 489911)

Check if note 730548 (“Change log not deactivated completely”) is applied

Check if note 805551 (“Change documents address data”) is applied

Check if note 612815 (“Switching off geolocation / geocoding”) can be applied

  • 7.4 PREMISE

For optimum performance of migration object PREMISE proceed as follows:

Buffer number range ISU_EVBS (necessary for parallel execution)

Use commit buffering

  • 7.5 DEVLOC

For optimum performance of migration object DEVLOC proceed as follows:

Buffer number ranges ISU_EHAU and OBJNR (necessary for parallel execution)

Switch off PMIS updating (recommended)

Update database statistics for tables IFLOT and ILOA of view IFLO

  • 7.6 INSTLN

For optimum performance of migration object INSTLN proceed as follows:

Buffer number range ISU_EANL (necessary for parallel execution)

Use commit buffering

Check if note 733062 (“Point of delivery transaction: No explicit COMMIT”) is applied

Buffer table EGRID

Check if note 771560 (“Peformance improvement w/ creation of installations”) is applied

Check if note 771843 (“ES30: Performance improvement during creation of installation”) is applied

  • 7.7 PARTNER

For optimum performance of migration object PARTNER proceed as follows:

Buffer number ranges BU_PARTNER, ADRNR and ADRV (necessary for parallel execution)

Use commit buffering

Apply modification M1 (recommended)

Apply modification M2 (recommended)

Apply modification M3 (recommended)

Check if note 735229 (“Inperfomant accesses within 'ISU_Partner_Memory_Get'”) is applied

Check if note 752926 (“Unnecessary accesses on table EKUN“) is applied

Check if note 760709 (“Change Document for Tax Number not supressed “) is applied

Check if note 720223 (“Change documents business partner”) is applied

Check if note 713101 (“Performance problem during check of form of address”) is applied

Check if note 771952 (“Change documents cannot be deactivated in DI”) is applied

Check if note 805551 (“Change documents address data”) is applied

Check if note 612815 (“Switching off geolocation / geocoding”) can be applied

Buffer table TBE11

If an SAP R/3 plug-in is installed: check if note 481626 (“Data transfer optimization BP without CRM”) can be applied and whether the activities described in note 489684 (“Performance PARTNER (Deactivation of CRM Middleware and APO)”) are necessary.

  • 7.8 ACCOUNT / ACCOUNT2

For optimum performance of migration object ACCOUNT proceed as follows:

Buffer number range FKK_KONTO (necessary for parallel execution)

Use commit buffering

Buffer tables TE635, TFK070B, TFK070C and TFK033D

Update database statistics of table DFKKLOCKS

Apply modification M4 (recommended)

Check if note 759574 (“Performance improvements for selects of the contract account“) is applied

Check if note 675576 (“High response times with device installation”) is applied

  • 7.9 INST_MGMT/INSTALL

For optimum performance of migration object INST_MGMT (or INSTALL) proceed as follows:

Buffer number ranges ISU_EABL, ISU_LOGINR, ISU_LOGIZW, ILOA, ADRV and OBJNR (necessary for parallel execution)

Switch off PMIS updating (recommended, effective only with technical or complete installation)

Check if note 481199 (“Switch off change documents and workflow events for equipment category I (IS-U devices)”) is applied.

Buffer tables TE408, TE115, TE410S, TE420, TE422, TE431, TE431T, TE493, TE551, TE669, TE669T, TE685 (partly shipped as standard setting in later releases)

Buffer tables ETYP, EZWG, EZWG_HEAD, EZWGEASTIH, EWIK

Update database statistics of tables EQUI, EGERS and EGERH (before importing data)

Update database statistics of tables EABL and EABLG (after some 100 objects are imported). Use method COMPUTE (full analysis). If you are using DBMS Oracle and full table scans are still executed, check if note 170989 (“Poor performance under Oracle 8044, 805*, 8.0.6”) applies to the situation.

Update database statistics of tables ETDZ, EADZ, EZUZ, EZUG and others (execute a performance trace after some 100 objects, to find out whether more tables exist where this action is necessary)

Apply modification M5 (recommended)

Apply modification M6 (recommended)

Apply modification M7 (recommended when installing gas devices)

Apply event from note 400745 (“Lock table overflow“) if modification M5 is applied

Check if note 445399 (“Control parameter IS-U Migration”) can be applied

Check if note 753811 (“Performance task for migration”) is applied

Check whether performance notes regarding migration object METERREAD are applied

 

7.10 MOVE_IN

For optimum performance of migration object MOVE_IN proceed as follows:

Buffer number ranges ISU_EVER and ISU_EEIN (necessary for parallel execution)

Use commit buffering

Buffer tables BCONTCFIND and EVER_CRMQ

Check if note 771607 (“Move-in/Migration: long runtimes”) is applied

Creating the search index for premises and partners should be deactivated in Customizing (see note 379817 “Performance of migration object MOVE_IN“ for details)

7.11 DOCUMENT

For optimum performance of migration object DOCUMENT proceed as follows:

Do not use number range buffering, as mass processing in FI-CA is used

Create a sufficient amount of number range intervals and assign them to the used document type in Customizing

Use commit buffering

Check if note 775917 (“Performance migration FACTS, DOCUMENT, PAYMENT”) is applied

Check if note 687967 (“Increase of internal tables for reading transactions”) is applied

7.12 PAYMENT

For optimum performance of migration object PAYMENT proceed as follows:

Do not use number range buffering, as mass processing in FI-CA is used

Create a sufficient amount of number range intervals and assign them to the used document type in Customizing

Use commit buffering. Please note: every document to be settled may only be settled or partly settled once within the same import run and LUW. We recommend that you do not import payments for one business partner in different parallel runs, otherwise lock problems can occur. These locks are only released by the Commit Work statement. If consecutive documents do not often settle the same original posting, you can use a low value – say 10 – for buffering. You can recognize problems that originate from these restrictions by a high number of error messages “completing processing”.


Configure additional DIA work processes for the central instance. Ensure that the number is at least equal to the number of parallel running import jobs. RFC calls are always triggered to the central instance because the import runs read information from the server. If no idle DIA work processes are found, the status of the import job changes to “waiting - CPIC” and only continues when another DIA work process is available. Alternatively, you could run the import on a really powerful central instance instead of on the application servers. This allows you to avoid the overhead triggered by the RFC calls.

Check if note 775917 (“Performance migration FACTS, DOCUMENT, PAYMENT”) is applied

7.13 BBP_MULT

For optimum performance of migration object BBP_MULT proceed as follows:

Don’t simply buffer the used number range FKK_BELEG as this affects financial documents. When you restart the server or when you switch off buffering again, numbers will definitely be lost. This has to be discussed in the project (and with the auditors) and agreed before use

 

=> If allowed: only buffer number range FKK_BELEG for the time of import with migration object BBP_MULT (at a low rate: 10-20)

7.14 DEVICERATE

For optimum performance of migration object DEVICERATE proceed as follows:

Buffer tables TE431, TE431T

 

7.15 PROPERTY

For optimum performance of migration object PROPERTY proceed as follows:

Check if note 629891 (“Owner: BOR events triggered repeatedly “) is applied

Apply event from note 400745 (“Lock table overflow“)

 

7.16 NOTE_CON/NOTE_DLC

For optimum performance of migration object NOTE_CON (and NOTE_DLC), proceed as follows:

Check if note 405836 (“Low-speed database access to table ENOTE“) is applied

7.17 RIVA migration

For optimum performance of migration objects DUN_BBP, DUN_INS, DUN_DOC, DUN_SEC and INT_BBP used in the RIVA migration, proceed as follows:

Check if note 394214 (“Migration object DUN_BBP, ... (company SAPRE) performance”) is applied

7.18 FACTS

For optimum performance of migration object FACTS proceed as follows:

Import file

 

The best throughput can be achieved by transferring all facts of one installation in one data object (one unique oldkey).

Buffer table TE221

Check if note 771560 (“Performance improvement w/ creation of installation”) is applied

Check if note 771845 (“ES30: Performance during change of historical inst.”) is applied

Check if note 772261 (“ES30: Performance when migrating installation facts”) is applied

Check if note 775917 (“Performance migration FACTS, DOCUMENT, PAYMENT”) is applied

Check if note 776162 (“RTP: Performance improvement w/ installation fact”) is applied

7.19 SECURITY

For optimum performance of migration object SECURITY proceed as follows:


Buffer number range FKK_SEC (necessary for parallel execution)

Use commit buffering

Check if note 488857 (“Performance improvement for migration of cash security deposits”) and note 581816 (“Number range intervals for mass activity”) are applied

Check if note 756525 (“Change documents w/ migration of cash security”) is applied

Check if note 499935 (“Index auf FKK_SEC_REQ”) is applied

7.20 DEVGRP

For optimum performance of migration object DEVGRP, proceed as follows:

Buffer number range ISU_DEVGRP (necessary for parallel execution)

7.21 METERREAD

For optimum performance of migration object METERREAD proceed as follows:

Import file

 

The best throughput can be achieved by transferring all meter readings of one device and the same meter reading date in one data object (one unique oldkey). Note 587154 should be applied.

Use commit buffering

Buffer tables TE408, TE115, TE410S, TE420, TE422

Check if note 562714 (“Meter reading order creation performance measure“) is applied

Check if note 587154 (“Performance migration MR results “) is applied

Check if note 602338 (“Migration of meter reading results; validations “) is applied

Check if note 677592 (“Needless read check EABL/EABLG with MR order “) is applied

Check if note 722028 (“Performance with migration of meter reading results “) is applied

7.22 CONSUMPT

For optimum performance of migration object CONSUMPT proceed as follows:

Import file

 

The best throughput can be achieved by transferring all period consumptions of one device in one data object (one unique oldkey). Alternatively, you can transfer the period consumptions of only one register in one data object. This simplifies the error handling, even though it reduces performance.

Use commit buffering

Check if note 604346 (“Migration - Tuning Consumption”) is applied.

7.23 POD

For optimum performance of migration object POD proceed as follows:

Check if note 594041 (“PoD migration: Overflow of disconnection entries”) is applied

7.24 DEVICEMOD

For optimum performance of migration object DEVICEMOD proceed as follows:

Check if note 781288 (“Dump SYSTEM_IMODE_TOO_LARGE”) is applied

7.25 DEVINFOREC

For optimum performance of migration object DEVINFOREC proceed as follows:

Check if note 781288 (“Dump SYSTEM_IMODE_TOO_LARGE”) is applied


  • 8 Migration Control Parameter

    • 8.1 Basic Comments

To achieve better performance, checks can be switched off. Caution: switching off standard checks may lead to inconsistencies in the application and in the database!

The migration control parameters are accessible via the menu IS Migration – Settings – Customer Settings. The parameters can be set at migration company or migration object level. Please check the documentation in the workbench meticulously before changing the delivered settings.

  • 8.2 Parameter INST_BILLPER_OFF

The parameter INST_BILLPER_OFF controls the checks of already billed periods during the migration of device installation/removal/replacement. This parameter allows changes of the installation structure in periods that have already been billed.

Check note 449827

  • 8.3 Parameter INST_CERT_CHECK_OFF

The parameter INST_CERT_CHECK_OFF controls the certification of devices during the migration of device installation. This parameter allows you to avoid checks of the next replacement year during device installation in order to be able to display historic processes correctly.

Check note 314579

  • 8.4 Parameter INST_CERT_LOSS_OFF

The parameter INST_CERT_LOSS_OFF controls the certification of devices during the migration of device removal. This parameter is designed to avoid the loss of the certification during the removal of devices (without considering the settings in the customizing) in order to be able to display historic processes correctly.

Check note 314579

  • 8.5 Parameter INST_CONS_BILL_OFF

The parameter INST_CONS_BILL_OFF controls the processing of the period consumption during the migration of billing-related device installation. This parameter deactivates the processing of the period consumption object.

Check note 445399

  • 8.6 Parameter INST_CONS_OBJ_OFF

The parameter INST_CONS_OBJ_OFF controls the processing of the period consumption during the migration of device installation. This parameter deactivates the processing of the period consumption object. The period consumption can be migrated separately with the migration object CONSUMPT.

Check note 314579

  • 8.7 Parameter INST_DISMANT_CH_OFF

The parameter INST_DISMANT_CH_OFF controls the checks of disconnections during the migration of device installation/removal/replacement. This parameter deactivates the checks whether a device is disconnected, in order to be able to work with devices regardless of the assigned disconnection status.

Check note 445399

  • 8.8 Parameter INST_POD_OBJ_OFF

The parameter INST_POD_OBJ_OFF controls the processing of point of delivery during the migration of installation changes. This parameter deactivates the processing of the POD when an installation is changed.


Check note 508222

  • 8.9 Parameter INST_POD_PRO_CH_OFF

The parameter INST_POD_PRO_CH_OFF controls the checks of profile and POD allocations during the migration of device installation. This parameter deactivates the checks of existing profile and POD allocations.

Check note 445399

  • 8.10 Parameter INST_PREDEF_EAST_OFF

The parameter INST_PREDEF_EAST_OFF controls the processing of predefined register relationship during the migration of device installation. This parameter deactivates the consideration of predefined register relationship. The register relationship can be migrated separately with the migration object REGRELSHIP.

Check note 445399

  • 8.11 Parameter MOI_POD_SERV_CH_OFF

The parameter MOI_POD_SERV_CH_OFF controls the processing of non-billable POD services during the migration of move-in. This parameter deactivates the proposal logic of POD services and no checks for the POD services occur.

Check note 508222

  • 8.12 Parameter READ_BILLPER_OFF

The parameter READ_BILLPER_OFF controls the checks of already billed periods during the migration of meter readings. This parameter allows changes of meter readings in periods that have already been billed.

Check note 449827

  • 8.13 Parameter POD_BILLPER_OFF

The parameter POD_BILLPER_OFF controls the checks of already billed periods during the migration of points of delivery. This parameter allows changes of the POD in periods that have already been billed.

Check note 656893


  • 9 Organizational Measures

    • 9.1 Pre-Migration of Migration Objects Before Go-Live

Some business objects could be migrated into the R/3 system before the majority of the data (the weekend before going live, for example). This mainly includes the transfer of regional structure data and device master data, which is unlikely to be changed during the week before going live.

All data objects migrated earlier have to be kept synchronized with the legacy system, which remains the productive system until all data is migrated. During this time interval (double maintenance or synchronizing stage), all relevant changes in the legacy system must be monitored and imported using a suitable synchronization mechanism.

  • 9.1.1 Regional Structure Data

Regional structure data plays more of a Customizing role than master data. It is usually available in a consolidated or final form long before going live.

This strategy can be applied whenever the regional structure data is not derived from the legacy system but is supplied by an external resource (such as the postal service, government/administration bureaus or commercial suppliers).

When generating this data from the legacy system data, it is likely that changes will occur (in an address, for example) that will lead to a need for synchronization during double maintenance.

Because the amount of regional structure data is usually small, only a small reduction of overall runtime can be achieved.

  • 9.1.2 Device Master Data 2

Usually all changes relevant to devices relate to the physical location where they are installed. Therefore, the creation of the device master data is not critical. The main changes that can occur during double maintenance are newly delivered devices, which can be imported in an additional (small) migration step during the going-live stage.

Because of the comparably high throughput of migration object DEVICE, only a small reduction of overall runtime can be achieved.

  • 9.1.3 Technical Master Data

As an extension to this approach, some technical master data (connection objects, premises, device locations or possibly even installations) could be migrated beforehand.

This, however, requires you to install a mechanism (such as BDC) that synchronizes all changes occurring in the legacy system during double maintenance. Installations in particular need permanent synchronization, since the billing attributes (rate data, date fields) assigned to them are usually subject to high fluctuation.

The other technical business objects relate closely to real physical objects that can only change in certain rare situations and only when a small number of information fields are available for change.

This approach is not possible if the premises and installations are modeled on the legacy system data, because the data would require a complete comparison during productive migration.

  • 2 This is not generally applicable for a RIVA migration since the exact historical data can change due to online processing in RIVA (in reversals).


  • 9.2 Optimization of Parallel Actions

As already mentioned, all migration objects are suited for parallel execution. This creates more possibilities to minimize runtime.

Instead of importing data for only one migration object and having a fixed sequence of migration objects, you can develop a more complex approach during the performance tests. You could, for instance, import the technical master data (CONNOBJ, PREMISE, DEVLOC, INSTLN and DEVICE) consecutively, or you could use some work processes to import the devices and simultaneously import the technical master data in some other work processes. Another option is to use one server for loading devices and the other exclusively for technical master data. In this way, you can minimize the runtime required until you can install the devices.

The process of device installation is usually the most critical point in the migration sequence, as this object only runs at a comparably low throughput. Therefore, every project must reach this process as quickly as possible. However, because there are only a few direct dependencies between device installation and other migration objects, you could consider loading the INST_MGMT object in parallel to the business partners (PARTNER), contract accounts (ACCOUNT) and other dependent data.
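The dependency-driven parallelization discussed above can be sketched as a simple "wave" computation: every object whose predecessors are already loaded may be imported in parallel. The dependency map below is illustrative only — a simplified subset of typical IS-U object dependencies, not the authoritative load sequence for any project:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each migration object lists the objects
# that must be fully imported before it can start.
DEPENDENCIES = {
    "CONNOBJ":   [],
    "PREMISE":   ["CONNOBJ"],
    "DEVLOC":    ["CONNOBJ"],
    "INSTLN":    ["PREMISE"],
    "DEVICE":    [],
    "PARTNER":   [],
    "ACCOUNT":   ["PARTNER"],
    "INST_MGMT": ["INSTLN", "DEVICE", "DEVLOC"],
}

def import_waves(deps):
    """Group migration objects into waves: all objects of one wave can be
    imported in parallel because their predecessors are already loaded."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    waves = []
    while ts.is_active():
        ready = list(ts.get_ready())   # objects whose dependencies are done
        waves.append(sorted(ready))
        ts.done(*ready)
    return waves
```

With the sample map, the first wave contains CONNOBJ, DEVICE and PARTNER, and INST_MGMT forms the last wave — mirroring the advice that device installation should be reached as quickly as possible.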

  • 9.3 Reduction of Data Volume

The standard strategy of IS-U Migration covers all data of one billable data entity back to its last legacy bill. In the case of annual billing, an average history of 6 months is migrated. In principle it is possible to use a longer time period for some of the data.

To ensure a comparably small downtime of the production system, you must restrict the total amount of data transferred. Apart from all required master data and data required for future billing, the extent of transactional data history should be kept to a minimum.

This includes objects such as billing documents (line items) or an extended CA document history (payment history). If you choose to deviate from the standard strategy in these functional areas, you have to accept a considerably higher downtime, because the data has to be fully imported before the system can be released for production. It is not possible to load this data after releasing the system for production.

At the same time, the database tables containing the transactional data will show a higher number of entries. You must consider this when performing the disk sizing. If, for instance, you want to transfer two years of history, you have to add double the amount of the annual growth of your database to the initial size determined by the sizing.
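The sizing rule above is simple arithmetic — one annual growth per year of transferred history on top of the initial sizing. The figures in this sketch (500 GB initial size, 120 GB annual growth) are hypothetical:

```python
def required_db_size(initial_size_gb, annual_growth_gb, years_of_history):
    """Disk sizing rule from the text: add one annual database growth
    per year of transferred transactional history to the initial size
    determined by the sizing."""
    return initial_size_gb + years_of_history * annual_growth_gb

# Hypothetical figures: 500 GB initial sizing, 120 GB growth per year,
# two years of migrated document history.
size = required_db_size(500, 120, 2)   # -> 740 GB
```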

The transfer of an extended history can also have an effect on the productive use of the system. This is because some SQL statements have a larger hit list than others, causing certain functions to run slower (especially reports over large data segments).

  • 9.4 Size of Import Files

Some migration objects tend to show a decreasing throughput (usually because of growing memory demand). If you start import runs with files containing more than 50.000 data objects, the average throughput will be considerably lower than that measured for smaller files. We recommend that you keep file sizes to a maximum of 20.000 – 30.000 data objects because:

This size effectively avoids the described problem

The more files that are available, the more parallel import runs you can start and the better you can react to changed system loads and idle work processes

If this split of import files cannot be executed beforehand, an additional utility function is available on the import screen under menu option Utilities –> Break down Migration file. Given a large number of import files, you can use the utility function under menu option Utilities –> Schedule Import Runs to start more than one job simultaneously on several servers, instead of starting each file manually.
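The recommended split into partial files of 20,000–30,000 data objects can be sketched as plain chunking. Data objects are represented abstractly as list entries here, which is a simplification of the real workbench utility "Break down Migration file":

```python
def split_import_file(data_objects, max_per_file=30_000):
    """Split the data objects of one large import file into partial files
    of at most max_per_file objects each (simplified: one list entry per
    data object)."""
    return [data_objects[i:i + max_per_file]
            for i in range(0, len(data_objects), max_per_file)]

# 75,000 data objects -> three partial files of 30,000 + 30,000 + 15,000
parts = split_import_file(list(range(75_000)), max_per_file=30_000)
```

More, smaller files also give you more freedom to start additional parallel import runs when work processes become idle.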


  • 9.5 Restart Option

If an import run is terminated before reaching the end of the file, it can be restarted. Key and Status Management (KSM) prevents data objects that have already been imported from being loaded. For this, every data object in the file must have a legacy system key (or Oldkey), which is checked against the KSM during import.

If a regular import run finds an Oldkey in the file that is already contained in the KSM, an error message is issued (EM-101 “Legacy system key XXX has already been migrated”).

If you expect a run to find a lot of data objects that have already been migrated, you should always use the “Restart” option. This option causes the load program to suppress message EM-101. Another line is displayed in the migration statistics that shows the number of already migrated data objects. Moreover, already migrated data objects no longer count as an error.

If you do not use this option, it can lead to memory exhaustion since the error log is stored internally until the very end of the import run. In addition, the analysis of the error messages is greatly simplified when these unnecessary messages are not issued.
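The restart behavior described above boils down to filtering the file against the set of already migrated oldkeys instead of raising error EM-101 for each duplicate. A minimal sketch — the dictionary layout of a data object is an assumption for illustration, not the workbench's internal representation:

```python
def filter_restart(data_objects, ksm_oldkeys):
    """Restart-option sketch: skip every data object whose legacy system
    key (oldkey) is already recorded in Key and Status Management,
    instead of issuing error message EM-101 for each of them."""
    already, to_import = [], []
    for obj in data_objects:
        (already if obj["oldkey"] in ksm_oldkeys else to_import).append(obj)
    # The count of skipped objects feeds the extra migration statistics line.
    return to_import, len(already)
```

Usage: with objects A, B, C in the file and B already in the KSM, the run imports A and C and reports one already-migrated object.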


10 Performance Testing

10.1 Load Tests in the Migration Project

Every migration project has to execute high volume load tests as well as the obligatory small-size testing for Customizing and checking the migration objects.

 

These tests provide:

Results for optimization of migration performance

Information about the final size of tablespaces after migration, and about optimization of the database layout

Information about the quality of the migration (number and quality of the errors as well as integrity of the imported data)

You can also set up a test system with original data to execute the required acceptance and stress tests, as well as for end-user training. At some stage in every project, an acceptance test and possibly a stress test must be executed. The project plan should also always contain a migration and a performance test.

 

SAP recommends:

It is mandatory to carry out at least one complete test migration before productive data mi- gration

You are strongly recommended to execute more than one full-size migration test

These tests should use the final production-like hardware configuration. If this is not feasible, you must consider how the results of the tests can be extrapolated to the final environment

In a final test you have to determine the strategy for optimum usage of the system resources. Only a strategy of this kind can prevent performance problems during productive migration.

10.2 Performing a Performance Test

As already mentioned, both the throughput of a migration object and the total runtime of the data transfer can be positively influenced by many different measures. To ensure optimum performance for your project, you have to test the parallel execution of the import very thoroughly before going live.

During all these tests you must always pay attention to the whole system (application servers, database, network, etc.) using suitable monitors at operating system level. In this way you can detect bottlenecks and resource shortages that only appear in heavy parallel processing.

In most cases, this implies that the members of the migration team alone cannot execute performance testing of migration. It is also necessary to involve system and database administration.

The optimum number of parallel jobs during import is not predictable and cannot simply be calculated. You can, however, approach this optimal job distribution using the strategy described in the following units. Determination of this value depends greatly on the project conditions and will therefore always be a very iterative process.

10.2.1 Single Mode Test

You can reach the approximate throughput of a migration object by using a single import run with a sufficient number of data objects. For instance, running 5.000 data objects will provide a stable figure for the throughput of a migration object. By applying all the hints given in unit 7 you can gradually improve this figure to reach an “optimum” for a single run of the current migration object.


  • 10.2.2 Simple Parallel Execution Test

After recording these figures, depending on the size of your project and the given hardware, you can start a small number of import jobs (for example 4 jobs, each loading 5.000 data objects). If your system is not overloaded with 4 jobs, you can use the single job result to estimate whether and how the import process for these migration objects is scalable. This enables you to calculate an initial figure for the possible total throughput.
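The scalability estimate from the single-run and parallel-run figures can be expressed as a simple ratio of measured to ideal total throughput. The throughput numbers in the example are hypothetical:

```python
def scaling_efficiency(single_throughput, n_jobs, parallel_total_throughput):
    """Compare the measured total throughput of n parallel import runs
    with the ideal value n * single-run throughput. A value near 1.0
    means the migration object scales well; a much lower value hints at
    serialization (unbuffered number ranges, locks) or a bottleneck."""
    ideal = n_jobs * single_throughput
    return parallel_total_throughput / ideal

# Example: one job imports 2,000 objects/h; 4 parallel jobs together
# reach 6,800 objects/h.
eff = scaling_efficiency(2_000, 4, 6_800)   # -> 0.85
```

An efficiency that collapses as jobs are added is exactly the serial-execution symptom described in unit 5.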

This test has to be carried out on hardware reserved exclusively for the import runs. If you collect the data in a test or development system, you should run the jobs at night when no other dialog users are logged on.

The recommendations given in unit 7 should already have been applied and their effectiveness proven with single run tests. If these actions have not been applied, a serial execution could occur (see unit 5) and all results of this test rendered worthless.

  • 10.2.3 Import with Maximum Load

Now all is prepared for a test with full load in a suitable system environment. Again you should start with a specified number of parallel import runs. A recommendation for this initial number is outlined below.

If you are using a Central System (one system serving both database and application, Two-Tier Configuration) you should begin the test with one import run per available CPU (or slightly fewer).

If the database and the application servers reside on separate systems (Three-Tier Configuration), you can use 1-1.5 times the number of CPUs as the number of jobs on each application server. At this point, do not start any jobs on the central instance.

Using the appropriate system monitors (such as ST06 and SM50 or monitors on operating system level), check how high the system load is (figures such as CPU load, memory consumption, number of DB locks, number of enqueues, I/O rates) or if there is a bottleneck in system resources. After this analysis you must decide whether adding more parallel jobs would lead to further improvements in total throughput.

If possible, gradually add more jobs until either the database load exceeds the acceptable value (80-85%) or until there is no more increase in the total throughput Σd. This number of jobs represents the maximum that you can run at the same time for the given migration object and the given system environment.

At this stage you can additionally experiment with changes in the settings of the commit buffering (see unit 5.2), the table buffering (see unit 5.4) or the parallel execution of different migration objects (see unit 9.2).

The object of this effort is to get an overview of the optimum total runtime (or optimum total throughput) per migration object and the settings used during this “optimum import”.

  • 10.2.4 Full Test Migration (Simulation of Production Load)

All of the information collected in the performance tests must be adapted when the final simulation of the productive load takes place. You should use all possible settings, all necessary modifications and all monitoring functions so that you can establish an exact final schedule for the productive migration. During the final test, all of the important figures are monitored in order to confirm the expected time requirements and whether any resources remain unused.

This implies that you have to record exactly when each action must start, when it is supposed to end and what actions are required if a process is taking too long.


11 Other Functions in Releases 4.62 and 4.63

 

11.1 Job Scheduler in Release 4.62

In Release 4.62, the Job Scheduler provides an alternative way to start import runs. This function can only be used to start import runs as background jobs.

 

You can:

Link together import runs (start import of file y when import of file x is finished)

Split import files and start the import runs for them simultaneously in n parallel work processes

Start an import run with n job steps (even for different migration objects)

Schedule import runs to wait for a certain background processing event to be triggered

Trigger background processing event and start one or more scheduled import runs

11.2 Distributed Import in Release 4.63

In Release 4.63, Distributed Import provides an alternative way to start import runs. This function can only be used to start import runs as background jobs.

This technique allows you to distribute an import run (one import file) onto a set of work processes (application servers). You can also change this distribution for an already running distributed import. In this way, the import of one file can be distributed and controlled so that the system resources are utilized in the best possible way. For this purpose, the import file is split into many small import files (partial files) that are started gradually as import runs in background jobs.

When you define a distributed import, you have to enter the name of the import file, the file size for the partial files and an initial distribution of the import runs on the application servers. When the distributed import begins, a program is started (called Master Job or Master) that is responsible for monitoring and scheduling the partial jobs. When one partial job is finished, a new partial job is scheduled on this now idle work process. This continues until all partial files are imported.
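The master-job behavior described above can be sketched as a scheduling loop over partial files and server slots. This is a strongly simplified model: jobs are assumed to finish within one scheduling round, and the server names and slot counts are hypothetical, not workbench configuration values:

```python
from collections import deque

def run_distributed_import(partial_files, slots_per_server):
    """Simplified master-job loop: dispatch partial files to servers as
    slots become free, until every partial file has been imported."""
    queue = deque(partial_files)
    running = {srv: [] for srv in slots_per_server}
    finished = []
    while queue or any(running.values()):
        # Schedule new partial jobs on servers with idle slots.
        for srv, limit in slots_per_server.items():
            while queue and len(running[srv]) < limit:
                running[srv].append(queue.popleft())
        # Model simplification: every scheduled job finishes this round,
        # freeing its slot for the next partial file.
        for srv in running:
            finished.extend(running[srv])
            running[srv].clear()
    return finished
```

Redistributing the load while the import runs corresponds to changing `slots_per_server` between rounds — the master simply schedules more, or stops scheduling, on a given server.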

The distribution used in a distributed import can be changed even when the import is already running. In this way, you can reduce the load on one server if it is overloaded and redistribute the load by increasing the number on another server. The implementation of this change is executed by the master job, which either schedules more jobs on a server or simply stops scheduling jobs for one particular server until the new distribution is in place.

In addition, you can stop a distributed import in a controlled way. The master does not start any more jobs and simply waits for the running partial jobs to finish. A distributed import with the status “stopped” can be completed later.

The master job collects the migration statistics records of all involved partial jobs and offers an easier way of tracking the errors.


12 Further Reading

This document is not intended to cover all areas of the topic “Performance and Migration“. Here is a list of other sources that offer information about this or related topics.

OSS notes:

  • - Notes regarding migration are listed under component IS-U-TO-MI. Notes dealing especially with performance issues are categorized accordingly or contain the search term ‘performance’

  • - A collective note (note number 135937) deals with “performance” and is updated regularly

SAPNet (http://service.sap.com/) offers:

  • - Use alias performance for general information about performance issues

  • - Use alias isu_tic for IS-U/CCS and topics such as sizing, DB layout, performance, and migration

IS Migration Workbench

  • - The migration documentation of each single migration object contains extra information about involved number ranges, database tables updated during import and the use of commit buffering

System documentation

  • - In the area BC Basis there is more information regarding performance, system monitoring, and DB administration amongst others

Training

  • - BC305: Advanced System Administration

  • - BC315: Workload Analysis

  • - BC490: ABAP Performance

  • - BC5xx: Database Administration

  • - IUTW90: IS Migration Workbench


13 Conclusion

All considerations in this document were investigated very carefully. The given actions for analyzing and improving performance were documented as precisely as possible.

Despite this, it should be fairly obvious that “performance” is not an exact science that allows us to postulate final formulas and instructions, or where it is possible to hand over a universal remedy for performance problems. This paper should rather be considered a guide to help you achieve your “own”, much more precise analysis of all migration objects used for your own migration project. Reaching your individual “optimum performance” is only achievable by intensive testing and analysis in the customer environment. It cannot simply be imported like a piece of code.

The “performance” of a load process depends on many more factors than we could describe within the limited scope of this document. The import of a particular migration object differs from project to project, mainly because different Customizing settings and different data constellations are involved. Using customer enhancements or country-specific changes can lead to quite a different basis on which the performance analysis is executed.

In addition, technical circumstances (structure and configuration of both hardware and DBMS used, I/O system layout, number, size and speed of the used disks, RAID level used, and so on) have to be considered and analyzed to find the optimum settings. As a last factor, organizational influences, such as allowed downtime of the system, can lead to changed conditions.

We have tried to describe and analyze the most important factors (and perhaps the ones that are easiest to influence) and how they can be manipulated. This should support migration projects whenever data load performance issues and ways to optimize migration runtime are discussed.

For feedback or comments on this document please contact:

Friedrich Keller
SAP AG
PO Box 1461
69185 Walldorf, Germany
E-mail: friedrich.keller@sap.com


A. Appendix

The following sections contain program modifications that have proved to lead to a considerably higher throughput for particular migration objects. All of them have been used in projects dealing with high numbers of data objects. These code changes are not part of the standard shipment of the included components; they have to be applied as (temporary) code modifications in the customer system.

All modifications are designed in such a way that they do not affect the corresponding dialog functions or other application processes. They are only effective when the code is called within a migration import run.

A.1 Modification M1: Switching Off Change Documents in Central Address Management

Include LSZA0F42

Note 459434 (in IS-U/CCS releases up to 4.63)

Effect of this modification:

This code change improves the performance when creating address data.

The migration objects for creating connection objects and business partners benefit from this change, especially in parallel execution.

Omitting a high number of calls to update work processes considerably reduces the overall system load.

A.2 Modification M2: Switching Off Change Documents for SD Customers

Include LV02DF02

Note 459489 (in IS-U/CCS releases up to 4.63)

Note 720223 (no modification necessary since release 4.64)

Effect of this modification:

This code change improves performance when creating business partners, if customers are created simultaneously in component SD (usage of field MUSTER_KUN in structure INIT).

Omitting a high number of calls to update work processes considerably reduces the overall system load.

A.3 Modification M3: Switching Off Change Documents for BP Tax Numbers

Function module FTX_BUPA_TAXNUM_SAVE (in IS-U/CCS releases up to 4.61) or BUTX_BUPA_TAXNUM_SAVE (in IS-U/CCS releases 4.62 and up)

Notes 459497 and 760709 (no modification necessary since release 4.71)

Effect of this modification:

This code change improves performance when creating business partners, if tax numbers are created simultaneously (usage of structure TAXNUM in object PARTNER).

Omitting a high number of calls to update work processes considerably reduces the overall system load.

A.4 Modification M4: Switching Off Change Documents for Contract Accounts

Function module FKK_ACCOUNT_UPDATE

Note 459461 (no modification necessary since release 4.64)


Effect of this modification:

This code change improves performance when creating contract accounts.

Omitting a high number of calls to update work processes considerably reduces the overall system load.

A.5 Modification M5: Switching Off Stock Statistics for Devices

Include LE10NF01

Note 458028

Effect of this modification:

This code change improves performance when creating or installing devices.

Omitting a high number of calls to update work processes considerably reduces the overall system load.

A.6 Modification M6: Switching Off Hierarchy Checks During Installation of Equipment

Function module PM_HIERARCHY_CALL

Note 494471

Effect of this modification:

This code change improves performance when installing devices, because omitting Roll-In/Roll-Out processes reduces the overall system load. The achievable improvement depends heavily on the system configuration and the memory management parameters.

A.7 Modification M7: Switching Off 'Bypassing Buffer' During Installation of Devices in the Gas Division

Function module ISU_THGVER_VAL_CHECK

Here the IMPORTING parameter X_ACTUAL should be defaulted to the value SPACE instead of the shipped default 'X'. With this change, all accesses to Customizing tables in gas billing are no longer executed with the "bypassing buffer" option but are read from the table buffer instead.

Effect of this modification:

This code change improves performance only when installing devices in the gas division.
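A minimal sketch of the interface change described for modification M7. The parameter typing and the surrounding interface shown here are assumptions for illustration; the actual signature is defined by the function module in the system.

```abap
FUNCTION isu_thgver_val_check.
*"----------------------------------------------------------------------
*"  IMPORTING
*"     VALUE(X_ACTUAL) TYPE C DEFAULT SPACE   " modification M7:
*"                                            " shipped default was 'X'
*"  ...
*"----------------------------------------------------------------------

* With X_ACTUAL left initial, the gas billing Customizing reads can be
* served from the SAP table buffer, that is, a plain
*     SELECT SINGLE * FROM <custtab> WHERE ...
* instead of forcing a database round trip for every call with
*     SELECT SINGLE * FROM <custtab> BYPASSING BUFFER WHERE ...

ENDFUNCTION.
```

Because the changed value is only the interface default, callers that explicitly pass X_ACTUAL = 'X' still get the buffer-bypassing behavior.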


A.8 Typical Throughput Figures

These throughput figures are approximate values per import job for the most important migration objects. All results originate from productive migrations or from tests under production-like conditions. The figures given here apply to a single job measured under maximum parallel execution. Some of them cannot be achieved in every project, whereas under optimum conditions they can be exceeded.

MigObject   Description          C   N   PM   M   Min Obj/hr   Max Obj/hr
DEVICE      Devices              X   X   X    X   20.000       40.000
CONNOBJ     Connection Objects