
Performance Tips for Data Transfer Processes

Request processing, that is, the execution of a data transfer process (DTP), can take place with varying degrees of parallelization in the extraction and processing (transformation and update) steps. The system selects the most appropriate and efficient processing for the DTP based on the settings in the DTP maintenance transaction and derives a DTP processing mode from them. You can take a number of additional measures to further optimize the performance of request processing; with the appropriate measures, you obtain a processing mode with a higher degree of parallelization. Many of these measures involve settings in the DTP maintenance transaction; some are source-specific and some are data-type-specific. The following sections describe the various measures that can be taken.

Higher Parallelization in the Request Processing Steps


With a (standard) DTP, you can modify an existing system-defined processing mode by changing the settings for error handling and semantic grouping. The following overview shows how you can optimize the performance of an existing DTP processing mode (original state of the DTP processing mode -> processing mode with optimized performance: measures to obtain the performance-optimized processing mode):

- Serial extraction and processing of the source packages (processing mode 3) -> Serial extraction, immediate parallel processing (processing mode 2): Select the grouping fields.
- Serial extraction and processing of the source packages (processing mode 3) -> Parallel extraction and processing (processing mode 1): Only possible with the persistent staging area (PSA) as the source: deactivate error handling.
- Serial extraction, immediate parallel processing (processing mode 2) -> Parallel extraction and processing (processing mode 1): Only possible with the PSA as the source: deactivate error handling and remove the grouping fields selection.
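The effect of the three processing modes on throughput can be pictured outside of SAP. The following Python sketch is not SAP code; it only simulates extraction and processing of source packages with artificial delays and invented timings so that the relative run times of the three modes become visible.

```python
import time
from concurrent.futures import ThreadPoolExecutor

PACKAGES = list(range(6))   # simulated source packages
EXTRACT_TIME = 0.2          # seconds of simulated extraction work per package
PROCESS_TIME = 0.4          # seconds of simulated transformation + update per package


def extract(package):
    time.sleep(EXTRACT_TIME)
    return package


def process(package):
    time.sleep(PROCESS_TIME)
    return package


def extract_and_process(package):
    return process(extract(package))


def mode_3():
    """Serial extraction and processing of the source packages."""
    for package in PACKAGES:
        extract_and_process(package)


def mode_2(workers=3):
    """Serial extraction, immediate parallel processing."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # extraction stays in the main thread (serial); each extracted package
        # is handed to the pool for parallel processing right away
        jobs = [pool.submit(process, extract(package)) for package in PACKAGES]
        for job in jobs:
            job.result()


def mode_1(workers=3):
    """Parallel extraction and processing of the source packages."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        jobs = [pool.submit(extract_and_process, package) for package in PACKAGES]
        for job in jobs:
            job.result()


for label, mode in [("processing mode 3", mode_3),
                    ("processing mode 2", mode_2),
                    ("processing mode 1", mode_1)]:
    start = time.perf_counter()
    mode()
    print(f"{label}: {time.perf_counter() - start:.2f}s")
```

Running the sketch shows mode 3 taking roughly the sum of all extraction and processing times, while modes 2 and 1 overlap more and more of the work across the parallel workers.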

Further Performance-Optimizing Measures


Set the number of parallel processes for a DTP during request processing

To optimize the performance of data transfer processes with parallel processing, you can set the number of permitted background processes for the process type Set Data Transfer Process globally in BI Background Management. To further optimize performance for a given data transfer process, you can override the global setting: in the DTP maintenance transaction, choose Goto -> Batch Manager Setting. Under Number of Processes, specify how many background processes should be used to process the DTP, and save your entries.

Make sure that there are enough background processes available for processing. Note that when multiple data transfer processes are executed at the same time, each individual data transfer process uses the number of background processes set for it. More information: Setting Parallel Processing of BI Processes
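How the global value, the per-DTP override, and simultaneous execution interact can be summarized in a small Python sketch. It is not SAP code; the setting values and DTP names are invented for the example.

```python
# Conceptual sketch only: the names and values below are illustrative, not SAP objects.
GLOBAL_DEFAULT = 3                      # assumed global value for the process type "Set Data Transfer Process"
dtp_overrides = {"DTP_SALES_DELTA": 6}  # assumed per-DTP value maintained via the Batch Manager setting


def background_processes(dtp: str) -> int:
    """A DTP-specific setting overrides the global default."""
    return dtp_overrides.get(dtp, GLOBAL_DEFAULT)


running_now = ["DTP_SALES_DELTA", "DTP_COSTS_FULL", "DTP_STOCK_FULL"]
total = sum(background_processes(d) for d in running_now)
print(total)  # 12 -- each DTP claims its own processes, so keep the total below the free background work processes
```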

Set the size of data packages

In the standard setting of the data transfer process, the size of a data package is set to 50,000 data records, on the assumption that a data record has a width of 1,000 bytes. To improve performance, you can adjust the value for the data package size, depending on the size of the main memory. Enter this value under Package Size on the Extraction tab in the DTP maintenance transaction.

Avoid overly large DTP requests when there are many source requests: Retrieve the data one request at a time

A DTP request can be very large, since it bundles together all transfer-relevant requests from the source. To improve performance, you can stipulate that a DTP request always reads just one request at a time from the source. To make this setting, select Get All New Data in Source by Request on the Extraction tab in the DTP maintenance transaction. Once processing is completed, the DTP request checks for further new requests in the source. If it finds any, it automatically creates an additional DTP request.

With DataSources as the source: Avoid data packages that are too small when using the DTP filter

If you extract from a DataSource without error handling and a large amount of data is excluded by the filter, the data packages loaded by the process can be very small. To improve performance, you can modify this behavior by activating error handling and defining a grouping key. Select an error handling option on the Updating tab in the DTP maintenance function. Then define a suitable grouping key on the Extraction tab under Semantic Groups. This ensures that all data records belonging to a grouping key within one package are extracted and processed together.

With DataStore objects as the source: Extract data from the table of active data for the first delta or during full extraction

The change log grows faster than the table of active data, since it stores before and after images. To optimize performance during extraction in full mode or with the first delta from the DataStore object, you can read the data from the table of active data instead of from the change log. To make this setting, select Active Table (with Archive) or Active Table (without Archive) on the Extraction tab, under Extraction From or Delta Extraction From, in the DTP maintenance function.

With InfoCubes as the source: Use extraction from aggregates

With InfoCube extraction, the data is read in the standard setting from the fact table (F table) and the table of compressed data (E table). To improve performance here, you can use aggregates for the extraction. Select Use Aggregates on the Extraction tab in the DTP maintenance transaction. The system then compares the outgoing quantity of the transformation with the aggregates. If all InfoObjects of the outgoing quantity are used in aggregates, the data is read from the aggregates during extraction instead of from the InfoCube tables.
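The comparison of the outgoing quantity with an aggregate amounts to a set-containment test. The following Python sketch is only a conceptual illustration with made-up InfoObject names, not SAP code.

```python
# Conceptual sketch: an aggregate can serve the extraction only if it contains
# every InfoObject of the outgoing quantity of the transformation.
outgoing_quantity = {"0MATERIAL", "0CALMONTH", "0AMOUNT"}               # illustrative names
aggregate_objects = {"0MATERIAL", "0CALMONTH", "0AMOUNT", "0CURRENCY"}  # InfoObjects assumed to be in the aggregate

if outgoing_quantity <= aggregate_objects:   # subset test
    print("Read from the aggregate")
else:
    print("Read from the F and E fact tables")
```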
Note for using InfoProviders as the source

If not all key fields of the source InfoProvider have target fields assigned to them in the transformation, the key figures of the source are aggregated over the unassigned key fields during extraction. You can prevent this automatic aggregation by implementing a start routine or an intermediate InfoSource. Note, however, that this affects the performance of the data transfer process. For more information, see the corresponding SAP Note.
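A minimal Python sketch (illustrative field names, not SAP code) shows what this automatic aggregation does to the data: records that differ only in an unassigned key field are collapsed, and their key figures are summed.

```python
from collections import defaultdict

# Conceptual sketch: the source key is (material, plant); the transformation only
# maps "material" to a target field, so "plant" is an unassigned key field.
source = [
    {"material": "M1", "plant": "P1", "quantity": 10},
    {"material": "M1", "plant": "P2", "quantity": 5},
    {"material": "M2", "plant": "P1", "quantity": 7},
]

aggregated = defaultdict(int)
for record in source:
    aggregated[record["material"]] += record["quantity"]  # key figure summed over the dropped key "plant"

print(dict(aggregated))  # {'M1': 15, 'M2': 7} -- two source records are collapsed into one
```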

For master data (attributes and texts): Deactivate the check for duplicate data records

When you copy master data attributes or texts with a DTP, you can use a flag in the DTP maintenance to check for duplicate data records for the characteristic key of the master data characteristic. This prevents a runtime error from occurring if a data package contains multiple data records for the same characteristic key. If the flag is set, the update is slower. To improve performance, you can deactivate the check if the following requirements are fulfilled:

- The key of the DataSource provides unique data records. For BI Content DataSources from SAP source systems, the field Delivery of Duplicate Data Records on the General tab page of the BI DataSource maintenance displays the behavior of the DataSource. Only deactivate the check in the DTP if this field has the value None. Since the field is for information only and is not checked during the extraction, we recommend that you always check the behavior of the DataSource before deactivating the check in the DTP.
- Only a single request is copied from the PSA. To make this setting, select Get All New Data in Source by Request on the Extraction tab in the DTP maintenance transaction. More information is available further up in this section.
- The key of the DataSource corresponds to the characteristic key of the target InfoObject or represents a subset of the characteristic key.
- The number of records for each characteristic key is not changed in the start or end routines of the transformation.
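What the duplicate check described above looks for can be pictured with a short Python sketch. It is not SAP code, and the field and key values are invented for the example; it simply flags characteristic keys that occur more than once within one data package.

```python
from collections import Counter

# Conceptual sketch: flag characteristic keys that occur more than once in a package.
package = [
    {"customer": "C100", "city": "Berlin"},
    {"customer": "C200", "city": "Hamburg"},
    {"customer": "C100", "city": "Munich"},  # second record for the same characteristic key
]

counts = Counter(record["customer"] for record in package)
duplicates = [key for key, n in counts.items() if n > 1]
print(duplicates)  # ['C100'] -- with the check active, such duplicates are handled instead of ending in a runtime error
```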
