
CONTINUOUS :

Batch Subscribe : Reads from input files until EOF or until the 'while' parameter is false, then saves file positions and exits.
Continuous Multi Update Table : Executes multiple SQL statements for each input record.
Continuous Rollup : Aggregates records, giving partial results at regular intervals.
Continuous Scan : Computes running totals or year-to-date summaries for groups of related records, but does not require sorted input.
Continuous Update Table : Executes update or insert statements in embedded SQL format to modify a database table.
Multipublish : Writes to a continuous queue (multiple writers allowed).
Publish : Writes to a continuous data set.
Subscribe : Reads from a continuous data set.
Universal Adapter : Runs a program using the universal adapter protocol to process data.
UniversalPublish : Publishes continuous data using an external program.
UniversalSubscribe : Reads from a continuous data source.

PARTITION :
Broadcast : Distributes data by combining input data records into a single flow and writing a copy of that flow to each output flow partition.
Partition by Expression : Distributes data records to its output flow partitions according to a specified DML expression.
Partition by Key : Distributes data records to its output flow partitions according to key values (a key-specifier example follows this list).
Partition by Percentage : Distributes a specified percentage of the total number of input data records to each output flow.
Partition by Range : Distributes data records to its output flow partitions according to ranges of key values specified for each partition.
Partition by Round-robin : Distributes data records evenly to each output flow in round-robin fashion. Use the Interleave component to reverse the effects of Partition by Round-robin.
Partition with Load Balance : Distributes data records to output flow partitions, writing more records to the flow partitions that consume records faster.
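
A minimal illustration for Partition by Key: the key specifier below (with cust_id as a made-up input field) sends records with equal cust_id values to the same output partition. A compound key would be written {cust_id; order_date}.

{cust_id}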

TRANSFORM :
Aggregate : Generates summary records for groups of input records. It maximizes performance by keeping intermediate results in main memory.
Dedup Sorted : Separates one specified data record in each group of data records from the rest of the records in the group.
Denormalize Sorted : Consolidates groups of related records in the input to a single data record with a vector field for each group, and optionally computes summary fields for each group. It is the inverse of Normalize.
Filter by Expression : Filters data records according to a specified DML expression.
Fuse : Applies a transform to corresponding records from each input flow. The transform is first applied to the first record on each flow, then to the second, and so on. The result of the transform is sent to the out port.
Join : Performs inner, outer, and semi-joins with multiple flows of data records. It maximizes performance by loading input data records into main memory.
Match Sorted : Combines and performs transform operations on multiple flows of data records.
Multi Reformat : Changes the record format of your data by dropping fields or by using DML expressions to add fields, combine fields, or modify the data. This component allows only one transform per in port, but it does not block if downstream components read at different rates.
Normalize : Generates multiple output data records from each input data record. Normalize can separate a data record with a vector field into several individual records, each containing one element of the vector.
Rollup : Generates data records that summarize groups of data records. In its in-memory mode, Rollup maximizes performance by keeping intermediate results in main memory (a transform sketch follows this list).
Scan : Generates a series of cumulative summary records, such as year-to-date totals, for groups of data records. The sorted version of Scan requires grouped input.
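
To make the Rollup entry concrete, here is a minimal transform sketch in DML. The field names cust_id and amount, and the group key {cust_id}, are illustrative assumptions:

out :: rollup(in) =
begin
  out.cust_id      :: in.cust_id;       /* the grouping key */
  out.total_amount :: sum(in.amount);   /* aggregate over each key group */
  out.order_count  :: count(1);         /* number of records in the group */
end;

With the component's key parameter set to {cust_id}, Rollup emits one summary record per distinct cust_id.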

DATABASE :
Load DB Table : Loads your data into a DB2, Informix, or Oracle database. Note: Before using Load DB Table, create a table configuration file and record format file with the db_config utility.
Run DB SQL : Executes SQL statements in a DB2, Informix, or Oracle database.
Truncate DB Table : Deletes all rows in a DB2, Informix, or Oracle table. Note: Before using Truncate DB Table, create a table configuration file with the db_config utility.
Unload DB Table : Unloads data records from a DB2, Informix, or Oracle database into an Ab Initio graph. Note: Before using Unload DB Table, create a table configuration file and record format file with the db_config utility.
Update DB Table : Executes update or insert statements in embedded SQL format to modify a table in an Oracle database. Note: Before using Update DB Table, create a table configuration file and a record format file with the db_config utility.
DB2 USS Loader : Loads DB2 OS/390 tables with the DSNUTILB LOAD command.
Partition by DB2EEE : Partitions data for DB2 EEE.
Join with IMS : Performs a database query for every input record.
Join with IMS Segments : Gets data from a segment occurrence and all dependent occurrences for every input record.
NCR DeleteRows : An enhanced interface to the Teradata DELETE task. Use it to delete rows in a Teradata table.
NCR StoreData : Provides access to the multi-table, multi-DML-statement capability of the Teradata MultiLoad and TPump load utilities.
Join with DB : Performs a database query for every input record.
Multi Update Table : Executes multiple SQL statements for each input record.

Max core of components :
Scan : 10 Megabytes
Rollup : 64 Megabytes
Join : 64 Megabytes
Replicate : 64 Megabytes
Sort : 100 Megabytes

DEPARTITION :
Merge : Combines data records from multiple flow partitions that have been sorted according to the key specifier, and maintains the sort order.
Runtime Behavior

Merge reads from its flows in a specific order, so it might cause deadlock (see Flow Buffering in the Shell Development Environment User's Guide for details).

Concatenate : Appends multiple flow partitions of data records one after another.
Runtime Behavior

The in port for Concatenate is ordered. See Handling Ordered Flows. Because Concatenate reads from its flows in a specific order, it might cause deadlock. See Flow Buffering in the Shell Development Environment User's Guide for details. Concatenate does not support default record assignment. As a result, make the record formats of the in and out ports identical or the output is unpredictable.

The Concatenate component:
1. Reads all the data records from the first flow connected to the in port (counting from top to bottom on the graph) and copies them to the out port.
2. Reads all the data records from the second flow connected to the in port and appends them to those of the first flow, and so on.

Interleave : Combines blocks of data records from multiple flow partitions in round-robin fashion.
Gather : Combines data records from multiple flow partitions arbitrarily.
Runtime Behavior

You can use Gather to:
1. Reduce data parallelism, by connecting a single fan-in flow to the in port
2. Reduce component parallelism, by connecting multiple straight flows to the in port

The Gather component:
1. Reads data records from the flows connected to the in port
2. Combines the records arbitrarily
3. Writes the combined records to the out port


What is a Phase?
A phase is a stage of a graph that runs to completion before the start of the next stage. By dividing a graph into phases, you can save resources, avoid deadlock, and safeguard against failures. To protect a graph, all phases are checkpoints by default.

What is a Checkpoint?
A checkpoint is a phase that acts as an intermediate stopping point in a graph and saves status information to allow you to recover from failures. By assigning phases with checkpoints to a graph, you can recover completed stages of the graph if failure occurs.

About Sandboxes
A sandbox is a collection of graphs and related files that are stored in a single directory tree, and treated as a group for purposes of version control, navigation, and migration. A sandbox can be a file system copy of a datastore project.

air lock break {[-object object] ... | [-project project] ... | [-all]}
air lock release {[-object object] ... | [-project project] ... | [-all]}
air lock reset {[-user username] ... | [-object object] ... |
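
For example, to release every lock held on a single project (the path /Projects/accounting is a made-up placeholder):

air lock release -project /Projects/accounting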

Lookup Types :

lookup : Returns a data record from a lookup file.
Syntax : record lookup (string file_label, expression [, expression ...])

lookup_count : Returns the number of matching data records in a lookup file.
Syntax : int lookup_count (string file_label, expression [, expression ...])

lookup_count_local : Returns the number of matching data records in a partition of a lookup file.
Syntax : int lookup_count_local (string file_label, expression [, expression ...])

lookup_local : Returns a data record from a partition of a lookup file.
Syntax : record lookup_local (string file_label, expression [, expression ...])

lookup_next : Returns successive data records from a lookup file.
Syntax : record lookup_next (string file_label)

lookup_next_local : Returns successive data records from the partition of a lookup file.
Syntax : record lookup_next_local (string file_label)
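
A minimal usage sketch in DML, assuming a lookup file labeled "Customers" that is keyed on cust_id and carries a cust_name field (all three names are illustrative):

out :: reformat(in) =
begin
  /* Fall back to a default when no matching record exists. */
  out.cust_name :: if (lookup_count("Customers", in.cust_id) > 0)
                      lookup("Customers", in.cust_id).cust_name
                   else "UNKNOWN";
end;

Inside a partitioned component, lookup_local and lookup_count_local are called the same way but search only the local partition of the lookup file.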

string_lpad : Returns a string padded with a specified character to a specified length.
Syntax : string string_lpad (string str, int len [, string pad_char])

string_filter : Returns the characters of one string which also appear in another.
Syntax : string string_filter (string str1, string str2)

string_index : Returns the index of the first character of the first occurrence of a string within another string.
Syntax : int string_index (string str, string seek)

string_lrepad : Returns a string trimmed of leading blanks and left-padded with a specified character to a specified length.
Syntax : string string_lrepad (string str, int len [, string pad_char])
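
A few illustrative calls with their expected results (the input values are made up; note that DML string indexes are 1-based, with string_index returning 0 when seek is not found):

string_lpad("42", 5, "0")            /* "00042" */
string_filter("AXBYCZ", "ABCDEF")    /* "ABC" */
string_index("abcdef", "cd")         /* 3 */
string_lrepad("  7", 5, "0")         /* "00007" */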
