PREPARED BY:
Ammar Hasan
CONTENTS
CHAPTER 1: TOOL KNOWLEDGE
1.1 Informatica PowerCenter
1.2 Product Overview
1.2.1 PowerCenter Domain
1.2.2 Administration Console
1.2.3 PowerCenter Repository
1.2.4 PowerCenter Client
1.2.5 Repository Service
1.2.6 INTEGRATION SERVICE
1.2.7 WEB SERVICES HUB
1.2.8 DATA ANALYZER
1.2.9 METADATA MANAGER
CHAPTER 2: REPOSITORY MANAGER
CHAPTER 3: DESIGNER
3.1 Source Analyzer
3.1.1 Working with Relational Sources
3.1.2 Working with Flat Files
3.2 Target Designer
3.3 Mappings
3.4 Transformations
3.4.1 Working with Ports
3.4.2 Using Default Values for Ports
3.4.3 User-Defined Default Values
3.5 Tracing Levels
3.6 Basic First Mapping
3.7 Expression Transformation
3.8 Filter Transformation
3.9 Router Transformation
3.10 Union Transformation
3.11 Sorter Transformation
3.12 Rank Transformation
3.13 Aggregator Transformation
3.14 Joiner Transformation
3.15 Source Qualifier
3.16 Lookup Transformation
3.16.1 Lookup Types
3.16.2 Lookup Transformation Components
3.16.3 Connected Lookup Transformation
3.16.4 Unconnected Lookup Transformation
3.16.5 Lookup Cache Types: Dynamic, Static, Persistent, Shared
3.17 Update Strategy
3.18 Dynamic Lookup Cache Use
3.19 Lookup Query
3.20 Lookup and Update Strategy Examples
Example to Insert and Update without a Primary Key
Example to Insert and Delete based on a condition
3.21 Stored Procedure Transformation
3.21.1 Connected Stored Procedure Transformation
3.21.2 Unconnected Stored Procedure Transformation
3.22 Sequence Generator Transformation
3.23 Mapplets: Mapplet Input and Mapplet Output Transformations
3.24 Normalizer Transformation
3.25 XML Sources Import and usage
3.26 Mapping Wizards
3.26.1 Getting Started
3.26.2 Slowly Changing Dimensions
3.27 Mapping Parameters and Variables
3.28 Parameter File
3.29 Indirect Flat File Loading
Informatica PowerCenter
CHAPTER 1: TOOL KNOWLEDGE
Data Cleanse and Match Option features powerful, integrated cleansing and
matching capabilities to correct and remove duplicate customer data.
PowerCenter also provides the ability to view and analyze business information and
browse and analyze metadata from disparate metadata repositories.
Service Manager: The Service Manager is built in to the domain to support the
domain and the application services. The Service Manager runs on each node in the
domain. The Service Manager starts and runs the application services on a machine.
The Service Manager performs the following functions:
• Alerts: Provides notifications about domain and service events.
• Authentication: Authenticates user requests
• Authorization: Authorizes user requests for services.
• Domain configuration: Manages domain configuration metadata.
• Node configuration: Manages node configuration metadata.
• Licensing: Registers license information and verifies license information
• Logging: Provides accumulated log events from each service in the domain.
Use the Administration Console to perform the following tasks in the domain:
Manage application services: Manage all application services in the domain, such
as the Integration Service and Repository Service.
Configure nodes: Configure node properties, such as the backup directory and
resources. We can also shut down and restart nodes.
Manage domain objects: Create and manage objects such as services, nodes,
licenses, and folders. Folders allow you to organize domain objects and to manage
security by setting permissions for domain objects.
View and edit domain object properties: You can view and edit properties for all
objects in the domain, including the domain object.
View log events: Use the Log Viewer to view domain, Integration Service, SAP BW
Service, Web Services Hub, and Repository Service log events.
Other domain management tasks include applying licenses, managing grids and
resources, and configuring security.
1.2.3 POWERCENTER REPOSITORY
The PowerCenter repository resides in a relational database. The repository database
tables contain the instructions required to extract, transform, and load data and
store administrative information such as user names, passwords, permissions, and
privileges. PowerCenter applications access the repository through the Repository
Service.
We administer the repository using the Repository Manager Client tool, the
PowerCenter Administration Console, and command line programs.
Global repository: The global repository is the hub of the repository domain. Use
the global repository to store common objects that multiple developers can use
through shortcuts. These objects may include operational or Application source
definitions, reusable transformations, mapplets, and mappings.
Local repositories: A local repository is any repository within the domain that is
not the global repository. Use local repositories for development. From a local
repository, you can create shortcuts to objects in shared folders in the global
repository. These objects include source definitions, common dimensions and
lookups, and enterprise standard transformations. You can also create copies of
objects in non-shared folders.
Designer:
Use the Designer to create mappings that contain transformation instructions for the
Integration Service.
The Designer has the following tools that you use to analyze sources, design target
schemas, and build source-to-target mappings:
• Source Analyzer: Import or create source definitions.
• Target Designer: Import or create target definitions.
• Transformation Developer: Develop transformations to use in mappings.
You can also develop user-defined functions to use in expressions.
• Mapplet Designer: Create sets of transformations to use in mappings.
• Mapping Designer: Create mappings that the Integration Service uses to
extract, transform, and load data.
Data Stencil
Use the Data Stencil to create mapping templates that can be used to generate
multiple mappings. Data Stencil uses the Microsoft Office Visio interface to create
mapping templates. It is not usually used by developers.
Repository Manager
Use the Repository Manager to administer repositories. You can navigate through
multiple folders and repositories, and complete the following tasks:
• Manage users and groups: Create, edit, and delete repository users and
user groups. We can assign and revoke repository privileges and folder
permissions.
• Perform folder functions: Create, edit, copy, and delete folders. Work
we perform in the Designer and Workflow Manager is stored in folders. If we
want to share metadata, we can configure a folder to be shared.
We create repository objects using the Designer and Workflow Manager Client tools.
We can view the following objects in the Navigator window of the Repository
Manager:
Target definitions: Definitions of database objects or files that contain the target
data.
Sessions and workflows: Sessions and workflows store information about how and
when the Integration Service moves data. A workflow is a set of instructions that
describes how and when to run tasks related to extracting, transforming, and loading
data. A session is a type of task that you can put in a workflow. Each session
corresponds to a single mapping.
Workflow Manager
Use the Workflow Manager to create, schedule, and run workflows. A workflow is a
set of instructions that describes how and when to run tasks related to extracting,
transforming, and loading data.
The Workflow Manager has the following tools to help us develop a workflow:
• Task Developer: Create tasks we want to accomplish in the workflow.
• Worklet Designer: Create a worklet, an object that groups a set of tasks.
• Workflow Designer: Create a workflow by connecting tasks with links.
When we create a workflow in the Workflow Designer, we add tasks to the workflow.
The Workflow Manager includes tasks, such as the Session task, the Command task,
and the Email task so you can design a workflow. The Session task is based on a
mapping we build in the Designer.
We then connect tasks with links to specify the order of execution for the tasks we
created. Use conditional links and workflow variables to create branches in the
workflow.
Workflow Monitor
Use the Workflow Monitor to monitor scheduled and running workflows for each
Integration Service.
We can view details about a workflow or task in Gantt Chart view or Task view. We
can run, stop, abort, and resume workflows from the Workflow Monitor. We can view
sessions and workflow log events in the Workflow Monitor Log Viewer.
The Workflow Monitor displays workflows that have run at least once. The Workflow
Monitor continuously receives information from the Integration Service and
Repository Service. It also fetches information from the repository to display historic
information.
1.2.5 REPOSITORY SERVICE
All repository client applications access the repository database tables through the
Repository Service. The Repository Service protects metadata in the repository by
managing repository connections and using object-locking to ensure object
consistency. The Repository Service also notifies us when another user modifies or
deletes repository objects we are using.
The Repository Service uses native drivers to communicate with the repository
database.
The Repository Service accepts connection requests from the following applications:
PowerCenter Client: Use the Designer and Workflow Manager to create and
store mapping metadata and connection object information in the repository. Use the
Workflow Monitor to retrieve workflow run status information and session logs
written by the Integration Service. Use the Repository Manager to organize and
secure metadata by creating folders, users, and groups.
Integration Service (IS): When we start the IS, it connects to the repository to
schedule workflows. When we run a workflow, the IS retrieves workflow task and
mapping metadata from the repository. IS writes workflow status to the repository.
Web Services Hub: When we start the Web Services Hub, it connects to the
repository to access web-enabled workflows. The Web Services Hub retrieves
workflow task and mapping metadata from the repository and writes workflow status
to the repository.
SAP BW Service: Listens for RFC requests from SAP NetWeaver BW and initiates
workflows to extract from or load to SAP BW.
We install the Repository Service when we install PowerCenter Services. After we
install the PowerCenter Services, we can use the Administration Console to manage
the Repository Service.
Repository Connectivity:
PowerCenter applications such as the PowerCenter Client, the Integration Service,
pmrep, and infacmd connect to the repository through the Repository Service.
The following process describes how a repository client application connects to the
repository database:
1) The repository client application sends a connection request to the Service
Manager on the gateway node.
2) The Service Manager sends back the host name and port number of the node
running the Repository Service (node A). If you have the high availability
option, you can configure the Repository Service to run on a backup node.
3) The repository client application establishes a link with the Repository Service
process on node A. This communication occurs over TCP/IP.
4) The Repository Service process communicates with the repository
database and performs repository metadata transactions for the client
application.
Understanding Metadata
The repository stores metadata that describes how to extract, transform, and load
source and target data. PowerCenter metadata describes several different kinds of
repository objects. We use different PowerCenter Client tools to develop each kind of
object.
We can also extend the metadata stored in the repository by associating information
with repository objects. For example, when someone in our organization creates a
source definition, we may want to store the name of that person with the source
definition. We associate information with repository metadata using metadata
extensions.
Administering Repositories
We use the PowerCenter Administration Console, the Repository Manager, and the
pmrep and infacmd command line programs to administer repositories.
1.2.6 INTEGRATION SERVICE
A workflow is a set of instructions that describes how and when to run tasks related
to extracting, transforming, and loading data. The Integration Service runs workflow
tasks. A session is a type of workflow task. A session is a set of instructions that
describes how to move data from sources to targets using a mapping.
It extracts data from the mapping sources and stores the data in memory while it
applies the transformation rules that you configure in the mapping. The Integration
Service loads the transformed data into the mapping targets.
The Integration Service can combine data from different platforms and source types.
For example, you can join data from a flat file and an Oracle source. The Integration
Service can also load data to different platforms and target types.
1.2.7 WEB SERVICES HUB
When we install PowerCenter Services, the PowerCenter installer installs the Web
Services Hub.
The Web Services Hub is not normally used by an Informatica developer and is not
in the scope of this training.
1.2.8 DATA ANALYZER
PowerCenter Data Analyzer provides a framework to perform business analytics on
corporate data. With Data Analyzer, we can extract, filter, format, and analyze
corporate information from data stored in a data warehouse, operational data store,
or other data storage models. Data Analyzer uses a web browser interface to view
and analyze business information at any level.
Data Analyzer has a repository that stores metadata to track information about
enterprise metrics, reports, and report delivery. Once an administrator installs Data
Analyzer, users can connect to it from any computer that has a web browser and
access to the Data Analyzer host.
Metadata Manager uses Data Analyzer functionality. We can use the embedded Data
Analyzer features to design, develop, and deploy metadata reports and dashboards.
CHAPTER 2: REPOSITORY MANAGER
We can navigate through multiple folders and repositories and perform basic
repository tasks with the Repository Manager. It is an administration tool, used
mainly by the Informatica administrator.
2. Enter the name of the repository and a valid repository user name.
3. Click OK.
Before we can connect to the repository for the first time, we must configure the
connection information for the domain that the repository belongs to.
2.2 Configuring a Domain Connection
1. In a PowerCenter Client tool, select the Repositories node in the Navigator.
2. Click Repository > Configure Domains to open the Configure Domains dialog
box.
3. Click the Add button. The Add Domain dialog box appears.
4. Enter the domain name, gateway host name, and gateway port number.
5. Click OK to add the domain connection.
Steps:
1. Connect to the repository.
2. In the Navigator, select the object of interest.
3. Click Analyze and select the dependency we want to view.
Steps:
1. Select the objects you want to validate.
2. Click Analyze and Select Validate
3. Select validation options from the Validate Objects dialog box
4. Click Validate.
5. Click a link to view the objects in the results group.
Steps:
1. In the Repository Manager, connect to the repository.
2. In the Navigator, select the object you want to compare.
3. Click Edit > Compare Objects.
4. Click Compare in the dialog box displayed.
2.7 Truncating Workflow and Session Log Entries
When we configure a session or workflow to archive session logs or workflow logs,
the Integration Service saves those logs in local directories. The repository also
creates an entry for each saved workflow log and session log. If we move or delete a
session log or workflow log from the workflow log directory or session log directory,
we can remove the entries from the repository.
Steps:
1. In the Repository Manager, select the workflow in the Navigator window or in
the Main window.
2. Choose Edit > Truncate Log. The Truncate Workflow Log dialog box appears.
3. Choose to delete all workflow and session log entries or to delete all workflow
and session log entries with an end time before a particular date.
4. If you want to delete all entries older than a certain date, enter the date and
time.
5. Click OK.
Repository object locks: The repository locks repository objects and folders by
user. The repository creates different types of locks depending on the task. The
Repository Service locks and unlocks all objects in the repository.
User connections: Use the Repository Manager to monitor user connections to the
repository. We can end connections when necessary.
Steps:
1. Launch the Repository Manager and connect to the repository.
2. Click Edit > Show User Connections or Edit > Show Locks.
3. The locks or user connections will be displayed in a window.
4. From there, we can take whatever action is needed (for example, end a
connection).
2.9 Managing Users and Groups
1. In the Repository Manager, connect to a repository.
2. Click Security > Manage Users and Privileges.
3. Click the Groups tab to create groups, or click the Users tab to create users.
4. Click the Privileges tab to assign privileges to groups and users.
5. Select the options available to add, edit, and remove users and groups.
Administrators: This group initially contains two users that are created by default.
The default users are Administrator and the database user that created the
repository. We cannot delete these users from the repository or remove them from
the Administrators group.
Public: The Repository Manager does not create any default users in the Public
group.
CHAPTER 3: DESIGNER
The Designer has tools to help us build mappings and mapplets so we can specify
how to move and transform data between sources and targets. The Designer helps
us create source definitions, target definitions, and transformations to build the
mappings.
The Designer lets us work with multiple tools at one time and to work in multiple
folders and repositories at the same time. It also includes windows so we can view
folders, repository objects, and tasks.
Designer Tools:
• Source Analyzer: Use to import or create source definitions for flat file, XML,
COBOL, Application, and relational sources.
• Target Designer: Use to import or create target definitions.
• Transformation Developer: Use to create reusable transformations.
• Mapplet Designer: Use to create mapplets.
• Mapping Designer: Use to create mappings.
Designer Windows:
Overview Window
Designer Tasks:
• Add a repository.
• Print the workspace.
• View date and time an object was last saved.
• Open and close a folder.
• Create shortcuts.
• Check out and in repository objects.
• Search for repository objects.
• Enter descriptions for repository objects.
• View older versions of objects in the workspace.
• Revert to a previously saved object version.
• Copy objects.
• Export and import repository objects.
• Work with multiple objects, ports, or columns.
• Rename ports.
• Use shortcut keys.
3.1 SOURCE ANALYZER
In Source Analyzer, we define the source definitions that we will use in a mapping.
We can import or create the following types of source definitions in the Source
Analyzer: relational tables, views, and synonyms; fixed-width and delimited flat
files; COBOL files; XML files; and Application sources.
However, when we add a source definition with special characters to a mapping, the
Designer either retains or replaces the special character. Also, when we generate the
default SQL statement in a Source Qualifier transformation for a relational source,
the Designer uses quotation marks around some special characters. The Designer
handles special characters differently for relational and non-relational sources.
1. Connect to repository.
2. Right click the folder where you want to import source definition and click
open. The folder which is connected gets bold. We can work in only one folder
at a time.
3. In the Source Analyzer, click Sources > Import from Database.
4. Select the ODBC data source used to connect to the source database. If you
need to create or modify an ODBC data source, click the Browse button to
open the ODBC Administrator. Create the data source, and click OK. Select
the new ODBC data source.
5. Enter a database user name and password to connect to the database.
6. Click Connect. Table names will appear.
7. Select the relational object or objects you want to import.
8. Click OK.
9. Click Repository > Save.
We can update a source definition to add business names or to reflect new column
names, datatypes, or other changes. We can update a source definition in the
following ways:
Edit the definition: Manually edit the source definition if we need to configure
properties that we cannot import or if we want to make minor changes to the source
definition.
Reimport the definition: If the source changes are significant, we may need to
reimport the source definition. This overwrites or renames the existing source
definition. We can retain existing primary key-foreign key relationships and
descriptions in the source definition being replaced.
We can import fixed-width and delimited flat file definitions that do not contain
binary data. When importing the definition, the file must be in a directory
local to the client machine. In addition, the Integration Service must be able to
access all source files during the session.
When we import a flat file in the Designer, the Flat File Wizard uses the file name as
the name of the flat file definition by default. We can import a flat file with any valid
file name through the Flat File Wizard. However, the Designer does not recognize
some special characters in flat file source and target names.
When we import a flat file, the Flat File Wizard changes invalid characters and spaces
into underscores ( _ ). For example, you have the source file "sample
prices+items.dat". When we import this flat file in the Designer, the Flat File Wizard
names the file definition sample_prices_items by default.
Steps:
1) Repeat steps 1-5 as for the fixed-width file.
2) Click Next.
3) Enter the following settings:
Text Qualifier (required): The quote character that defines the boundaries of text
strings. Choose No Quote, Single Quote, or Double Quotes.
Target flat files are handled in the same way as described in the sections above.
Just select Tools -> Target Designer instead of the Source Analyzer; the rest is the
same.
3.2 TARGET DESIGNER
Before we create a mapping, we must define targets in the repository. Use the
Target Designer to import and design target definitions. Target definitions include
properties such as column names and data types.
Steps:
1. In the Target Designer, select the relational target definition you want to
create in the database. If you want to create multiple tables, select all
relevant table definitions.
2. Click Targets > Generate/Execute SQL.
3. Click Connect and select the database where the target table should be
created. Click OK to make the connection.
4. Click Generate SQL File if you want to create the SQL script, or Generate and
Execute if you want to create the file, and then immediately run it.
5. Click Close.
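As a rough illustration, the generated script for an EMP-like target on Oracle might
look like the sketch below (the column list and datatypes are assumptions, not the
exact output of the tool):
CREATE TABLE EMP_TGT (
    EMPNO   NUMBER(4) NOT NULL,
    ENAME   VARCHAR2(10),
    JOB     VARCHAR2(9),
    SAL     NUMBER(7,2),
    COMM    NUMBER(7,2),
    DEPTNO  NUMBER(2)
);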
3.3 MAPPINGS
A mapping is a set of source and target definitions linked by transformation objects
that define the rules for data transformation. Mappings represent the data flow
between sources and targets. When the Integration Service runs a session, it uses
the instructions configured in the mapping to read, transform, and write data.
Mapping Components:
• Source definition: Describes the characteristics of a source table or file.
• Transformation: Modifies data before writing it to targets. Use different
transformation objects to perform different functions.
• Target definition: Defines the target table or file.
• Links: Connect sources, targets, and transformations so the Integration
Service can move the data as it transforms it.
Types of Transformations:
Active: An active transformation can change the number of rows that pass through
it, such as a Filter transformation that removes rows that do not meet the filter
condition.
Passive: A passive transformation does not change the number of rows that pass
through it, such as an Expression transformation that performs a calculation on data
and passes all rows through the transformation.
Creating Ports:
We can create a new port in the following ways:
• Drag a port from another transformation. When we drag a port from another
transformation the Designer creates a port with the same properties, and it
links the two ports. Click Layout > Copy Columns to enable copying ports.
• Click the Add button on the Ports tab. The Designer creates an empty port
you can configure.
3.4.2 Using Default Values for Ports
• Input port: The system default value for null input ports is NULL. It displays
as a blank in the transformation. If an input value is NULL, the Integration
Service leaves it as NULL.
• Output port: The system default value for output transformation errors is
ERROR. The default value appears in the transformation as
ERROR('transformation error'). If a transformation error occurs, the
Integration Service skips the row. The Integration Service notes all input rows
skipped by the ERROR function in the session log file.
• Input/output port: The system default value for null input is the same as
input ports, NULL. The system default value appears as a blank in the
transformation. The default value for output transformation errors is the same
as output ports.
Note: Variable ports do not support default values. The Integration Service initializes
variable ports according to the datatype.
Note: The Integration Service ignores user-defined default values for unconnected
transformations.
3.4.3 User-defined default values
Constant value: Use any constant (numeric or text), including NULL.
Example: 0, 9999, ‘Unknown Value’, NULL
ERROR: Generate a transformation error. Write the row and a message in the
session log or row error log. The Integration Service writes the row to session log or
row error log based on session configuration.
Use the ERROR function as the default value when we do not want null values to
pass into a transformation. For example, we might want to skip a row when the input
value of DEPT_NAME is NULL. You could use the following expression as the default
value:
ERROR('Error. DEPT is NULL')
ABORT: Abort the session. Session aborts when the Integration Service encounters
a null input value. The Integration Service does not increase the error count or write
rows to the reject file.
Example: ABORT(‘DEPT is NULL')
3.5 TRACING LEVELS
When we configure a transformation, we can set the amount of detail the Integration
Service writes in the session log.
Normal: Logs initialization and status information, errors encountered, and skipped
rows due to transformation row errors. Summarizes session results.
Terse: Logs initialization information, error messages, and notification of rejected
data.
Verbose Initialization: In addition to Normal tracing, logs additional initialization
details, names of index and data files used, and detailed transformation statistics.
Verbose Data: In addition to Verbose Initialization tracing, logs each row that
passes into the mapping.
• Change the tracing level to a Verbose setting only when we need to debug a
transformation that is not behaving as expected.
• To add a slight performance boost, we can also set the tracing level to Terse.
3.6 BASIC FIRST MAPPING
First make sure that we have created a shared folder and a developer folder, along
with a user, as described in the Installation Guide.
We will transfer data from EMP table in source to EMP_Tgt table in target.
Note: We can edit a source definition only by dragging the table into the Source
Analyzer.
We are doing this for practice only. In a project, all the source and target
tables are created by the DBA; we just import the table definitions.
• Now we have all the tables we need in shared folder.
• We now need to create shortcut to these in our folder.
Shortcut use:
• If we select the paste option, a copy of the EMP table definition is created.
• Suppose we are 10 developers: 5 use a shortcut and 5 copy the definition of
EMP.
• Now suppose the definition of EMP changes in the database.
• We reimport the EMP definition, and the old definition is replaced.
• Developers who were using shortcuts will see the changes reflected in their
mappings automatically.
• Developers using copies will have to reimport manually.
• So, for ease of maintenance, we use shortcuts to source and target
definitions in our folder, and shortcuts to other reusable transformations and
mapplets.
Creating Mapping:
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> Give mapping name. Ex: m_basic_mapping
4. Drag EMP from source and EMP_Tgt from target in mapping.
5. Link ports from SQ_EMP to EMP_Tgt.
6. Click Mapping -> Validate
7. Repository -> Save
Creating Session:
Now we will create session in workflow manager.
Creating Workflow:
1. Now Click Tools -> Workflow Designer
2. Workflow -> Create -> Give name like wf_basic_mapping
3. Click ok
4. START task will be displayed. It is the starting point for Informatica server.
5. Drag session to workflow.
6. Click Task-> Link Task. Connect START to the session.
7. Click Workflow -> Validate
8. Repository Save.
1. Go back to Workflow Manager. Select the workflow and right click on the
workflow wf_basic_mapping.
2. Select Start Workflow.
3.7 EXPRESSION TRANSFORMATION
Use the Expression transformation to calculate values in a single row before we write
to the target. For example, we might need to adjust employee salaries, concatenate
first and last names, or convert strings to numbers.
We can also use the Expression transformation to test conditional statements before
we output the results to target tables or other transformations, for example with the
IIF and DECODE functions.
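For instance, an output-port expression might use these functions as sketched below
(the port names and bucket labels are only illustrative assumptions):
IIF(JOB = 'SALESMAN', SAL + COMM, SAL)
DECODE(TRUE, SAL < 1000, 'LOW', SAL < 5000, 'MEDIUM', 'HIGH')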
Calculating Values
To use the Expression transformation to calculate values for a single row, we must
include the following ports:
• Input or input/output ports for each value used in the calculation: For
example: To calculate Total Salary, we need salary and commission.
• Output port for the expression: We enter one expression for each output
port. The return value for the output port needs to match the return value of
the expression.
• Import the source table EMP in Shared folder. If it is already there, then don’t
import.
• In shared folder, create the target table Emp_Total_SAL. Keep all ports as in
EMP table except Sal and Comm in target table. Add Total_SAL port to store
the calculation.
• Create the necessary shortcuts in the folder.
Creating Mapping:
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> Give mapping name. Ex: m_totalsal
4. Drag EMP from source in mapping.
5. Click Transformation -> Create -> Select Expression from list. Give name and
click Create. Now click done.
6. Link ports from SQ_EMP to Expression Transformation.
7. Edit Expression Transformation. As we do not want Sal and Comm in target,
remove check from output port for both columns.
8. Now create a new port out_Total_SAL. Make it as output port only.
9. Click the small button that appears in the Expression section of the dialog box
and enter the expression in the Expression Editor.
10. Enter expression SAL + COMM. You can select SAL and COMM from Ports tab
in expression editor.
Create Session and Workflow as described earlier. Run the workflow and
see the data in target table.
Because COMM is NULL for many employees, Total_SAL will be NULL for those rows.
Now open the mapping and the Expression transformation, select the COMM port,
and enter 0 as its Default Value. Apply the changes, then validate the mapping and
save.
Refresh the session and validate workflow again. Run the workflow and see
the result again.
Now use ERROR in Default value of COMM to skip rows where COMM is null.
Syntax: ERROR(‘Any message here’)
Similarly, we can use ABORT function to abort the session if COMM is null.
Syntax: ABORT(‘Any message here’)
Make sure to double click the session after doing any changes in mapping. It will
prompt that mapping has changed. Click OK to refresh the mapping. Run workflow
after validating and saving the workflow.
3.8 FILTER TRANSFORMATION
• Active and connected transformation.
We can filter rows in a mapping with the Filter transformation. We pass all the rows
from a source transformation through the Filter transformation, and then enter a
filter condition for the transformation. All ports in a Filter transformation are
input/output and only rows that meet the condition pass through the Filter
transformation.
• Import the source table EMP in Shared folder. If it is already there, then don’t
import.
• In shared folder, create the target table Filter_Example. Keep all fields as in
EMP table.
• Create the necessary shortcuts in the folder.
Creating Mapping:
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> Give mapping name. Ex: m_filter_example
4. Drag EMP from source in mapping.
5. Click Transformation -> Create -> Select Filter from list. Give name and click
Create. Now click done.
6. Pass ports from SQ_EMP to Filter Transformation.
7. Edit Filter Transformation. Go to Properties Tab
8. Click the Value section of the Filter condition, and then click the Open button.
9. The Expression Editor appears.
10. Enter the filter condition you want to apply.
11. Click Validate to check the syntax of the conditions you entered.
12. Click OK -> Click Apply -> Click Ok.
13. Now connect the ports from Filter to target table.
14. Click Mapping -> Validate
15. Repository -> Save
Create Session and Workflow as described earlier. Run the workflow and
see the data in target table.
Filter condition example: IIF(ISNULL(FIRST_NAME), FALSE, TRUE)
This condition states that if the FIRST_NAME port is NULL, the return value is FALSE
and the row should be discarded. Otherwise, the row passes through to the next
transformation.
3.9 ROUTER TRANSFORMATION
• Active and connected transformation.
Mapping A uses three Filter transformations while Mapping B produces the same
result with one Router transformation.
A Router transformation consists of input and output groups, input and output ports,
group filter conditions, and properties that we configure in the Designer.
Working with Groups
A Router transformation has the following types of groups:
• Input: The Group that gets the input ports.
• Output: User Defined Groups and Default Group. We cannot modify or delete
output ports or their properties.
Creating Mapping:
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping-> Create-> Give mapping name. Ex: m_router_example
4. Drag EMP from source in mapping.
5. Click Transformation -> Create -> Select Router from list. Give name and
click Create. Now click done.
6. Pass ports from SQ_EMP to Router Transformation.
7. Edit Router Transformation. Go to Groups Tab
8. Click the Groups tab, and then click the Add button to create a user-defined
group. The default group is created automatically.
9. Click the Group Filter Condition field to open the Expression Editor.
10. Enter a group filter condition. Ex: DEPTNO=10
11. Click Validate to check the syntax of the conditions you entered.
12. Create another group for EMP_20. Condition: DEPTNO=20
13. The rest of the records not matching the above two conditions will be passed
to DEFAULT group. See sample mapping
14. Click OK -> Click Apply -> Click Ok.
15. Now connect the ports from router to target tables.
16. Click Mapping -> Validate
17. Repository -> Save
Sample Mapping:
3.10 UNION TRANSFORMATION
The Union transformation is a multiple input group transformation that you can use
to merge data from multiple pipelines or pipeline branches into one pipeline branch.
It merges data from multiple sources similar to the UNION ALL SQL statement to
combine the results from two or more SQL statements.
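For comparison, the SQL equivalent of what the Union transformation does is a
UNION ALL; the table names below are hypothetical:
SELECT EMPNO, ENAME, SAL FROM EMP_US
UNION ALL
SELECT EMPNO, ENAME, SAL FROM EMP_UK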
Create input groups on the Groups tab, and create ports on the Group Ports
tab. We can create one or more input groups on the Groups tab. The
Designer creates one output group by default. We cannot edit or delete the
default output group.
3.11 SORTER TRANSFORMATION
The Sorter transformation contains only input/output ports. All data passing through
the Sorter transformation is sorted according to a sort key. The sort key is one or
more ports that we want to use as the sort criteria.
1. Sorter Cache Size:
• We can specify any amount between 1 MB and 4 GB for the Sorter cache
size.
• If it cannot allocate enough memory, the PowerCenter Server fails the
session.
• For best performance, configure Sorter cache size with a value less than
or equal to the amount of available physical RAM on the PowerCenter
Server machine.
• Informatica recommends allocating at least 8 MB (8,388,608 bytes) of
physical memory to sort data using the Sorter transformation.
2. Case Sensitive:
The Case Sensitive property determines whether the PowerCenter Server
considers case when sorting data. When we enable the Case Sensitive property,
the PowerCenter Server sorts uppercase characters higher than lowercase
characters.
3. Work Directory
Directory PowerCenter Server uses to create temporary files while it sorts data.
4. Distinct:
Check this option if we want to remove duplicates. Sorter will sort data according
to all the ports when it is selected.
Example: Sorting data of EMP by ENAME
Creating Mapping:
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping-> Create-> Give mapping name. Ex: m_sorter_example
4. Drag EMP from source in mapping.
5. Click Transformation -> Create -> Select Sorter from list. Give name and click
Create. Now click done.
6. Pass ports from SQ_EMP to Sorter Transformation.
7. Edit Sorter Transformation. Go to Ports Tab
8. Select ENAME as sort key. CHECK mark on KEY in front of ENAME.
9. Click Properties Tab and Select Properties as needed.
10. Click Apply -> Ok.
11. Drag target table now.
12. Connect the output ports from Sorter to target table.
13. Click Mapping -> Validate
14. Repository -> Save
3.12 RANK TRANSFORMATION
The Rank transformation allows us to select only the top or bottom rank of data. It
allows us to select a group of top or bottom values, not just one value.
During the session, the PowerCenter Server caches input data until it can perform
the rank calculations.
R (Rank port): Only one Rank port is allowed, and the rank is calculated according
to it. The Rank port is an input/output port, and we must link it to another
transformation. Example: Total Salary.
Rank Index
The Designer automatically creates a RANKINDEX port for each Rank transformation.
The PowerCenter Server uses the Rank Index port to store the ranking position for
each row in a group.
For example, if we create a Rank transformation that ranks the top five salaried
employees, the rank index numbers the employees from 1 to 5.
• The RANKINDEX is an output port only.
• We can pass the rank index to another transformation in the mapping or
directly to a target.
• We cannot delete or edit it.
Defining Groups
Rank transformation allows us to group information. For example: If we want to
select the top 3 salaried employees of each Department, we can define a group for
department.
• By defining groups, we create one set of ranked rows for each group.
• We define a group in Ports tab. Click the Group By for needed port.
• We cannot Group By on port which is also Rank Port.
Creating Mapping:
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping-> Create-> Give mapping name. Ex: m_rank_example
4. Drag EMP from source in mapping.
5. Create an EXPRESSION transformation to calculate TOTAL_SAL.
6. Click Transformation -> Create -> Select RANK from list. Give name and click
Create. Now click done.
7. Pass ports from Expression to Rank Transformation.
8. Edit Rank Transformation. Go to Ports Tab
9. Select TOTAL_SAL as rank port. Check R type in front of TOTAL_SAL.
10. Click Properties Tab and Select Properties as needed.
11. Top in Top/Bottom and Number of Ranks as 5.
12. Click Apply -> Ok.
13. Drag target table now.
14. Connect the output ports from Rank to target table.
15. Click Mapping -> Validate
16. Repository -> Save
RANK CACHE
When the PowerCenter Server runs a session with a Rank transformation, it
compares an input row with rows in the data cache. If the input row out-ranks a
stored row, the PowerCenter Server replaces the stored row with the input row.
Example: PowerCenter caches the first 5 rows if we are finding the top 5 salaried
employees. When the 6th row is read, it is compared with the 5 rows in the cache
and placed in the cache if needed.
• All variable ports (if any), the Rank port, and all ports going out from the
Rank transformation are stored in the RANK DATA CACHE.
• Example: all ports except DEPTNO in our mapping example.
3.13 AGGREGATOR TRANSFORMATION
• Connected and Active Transformation
• The Aggregator transformation allows us to perform aggregate calculations, such
as averages and sums.
• Aggregator transformation allows us to perform calculations on groups.
Conditional Clauses
We can use conditional clauses in the aggregate expression to reduce the number of
rows used in the aggregation. The conditional clause can be any clause that
evaluates to TRUE or FALSE.
• SUM( COMMISSION, COMMISSION > QUOTA )
Non-Aggregate Functions
We can also use non-aggregate functions in the aggregate expression.
• IIF( MAX( QUANTITY ) > 0, MAX( QUANTITY ), 0 )
2> Group By Ports
• Indicates how to create groups.
• When grouping data, the Aggregator transformation outputs the last row of
each group unless otherwise specified.
The Aggregator transformation allows us to define groups for aggregations, rather
than performing the aggregation across all input data.
For example, we can find Maximum Salary for every Department.
Creating Mapping:
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping-> Create-> Give mapping name. Ex: m_agg_example
4. Drag EMP from source in mapping.
5. Click Transformation -> Create -> Select AGGREGATOR from list. Give name
and click Create. Now click done.
6. Pass SAL and DEPTNO only from SQ_EMP to AGGREGATOR Transformation.
7. Edit AGGREGATOR Transformation. Go to Ports Tab
8. Create 4 output ports: OUT_MAX_SAL, OUT_MIN_SAL, OUT_AVG_SAL,
OUT_SUM_SAL
9. Open Expression Editor one by one for all output ports and give the
calculations. Ex: MAX(SAL), MIN(SAL), AVG(SAL),SUM(SAL)
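For reference, with Group By set on DEPTNO (as discussed below), the Aggregator
produces the equivalent of the following SQL sketch:
SELECT DEPTNO, MAX(SAL), MIN(SAL), AVG(SAL), SUM(SAL)
FROM EMP
GROUP BY DEPTNO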
Case 1: no aggregate calculation and no Group By. The DEPTNO and SAL of the
last record of the EMP table will be passed to target.
Case 2: no calculation, but Group By on DEPTNO. The last record of every DEPTNO
from the EMP table will be passed to target.
Case 3: we calculate MAX, MIN, AVG and SUM but do not set any Group By. Only
the DEPTNO of the last record of the EMP table will be passed; the calculations,
however, will be correct (computed over all rows).
Case 4: we calculate MAX, MIN, AVG and SUM with Group By on DEPTNO. The
DEPTNO and the correct calculations for every DEPTNO will be passed to target.
3.14 JOINER TRANSFORMATION
Creating Mapping:
1> Open folder where we want to create the mapping.
2> Click Tools -> Mapping Designer.
3> Click Mapping-> Create-> Give mapping name. Ex: m_joiner_example
4> Drag EMP, DEPT, Target. Create Joiner Transformation. Link as shown below.
5> Specify the join condition in Condition tab. See steps on next page.
6> Set Master in Ports tab. See steps on next page.
7> Mapping -> Validate
8> Repository -> Save.
If we join Char and Varchar datatypes, the PowerCenter Server counts any spaces
that pad Char values as part of the string. So if you try to join the following:
Char (40) = “abcd” and Varchar (40) = “abcd”
Then the Char value is “abcd” padded with 36 blank spaces, and the PowerCenter
Server does not join the two fields because the Char field contains trailing spaces.
Types of Joins:
• Normal
• Master Outer
• Detail Outer
• Full Outer
Note: A normal or master outer join performs faster than a full outer or detail outer
join.
Example: In EMP, we have employees with DEPTNO 10, 20, 30 and 50. In DEPT, we
have DEPTNO 10, 20, 30 and 40. DEPT will be the MASTER table as it has fewer rows.
Normal Join:
With a normal join, the PowerCenter Server discards all rows of data from the master
and detail source that do not match, based on the condition.
• All employees of 10, 20 and 30 will be there as only they are matching.
JOINER CACHES
Joiner always caches the MASTER table. We cannot disable caching. It builds Index
cache and Data Cache based on MASTER table.
3.15 SOURCE QUALIFIER
The options described below (user-defined joins, source filters, sorted ports, and
custom SQL queries) are all configured on the Properties tab of the Source Qualifier
transformation.
Creating Mapping:
1> Open folder where we want to create the mapping.
2> Click Tools -> Mapping Designer.
3> Click Mapping-> Create-> Give mapping name. Ex: m_SQ_example
4> Drag EMP, DEPT, Target.
5> Right Click SQ_EMP and Select Delete from the mapping.
6> Right Click SQ_DEPT and Select Delete from the mapping.
7> Click Transformation -> Create -> Select Source Qualifier from List -> Give
Name -> Click Create
8> Select EMP and DEPT both. Click OK.
9> Link all as shown in above picture.
10> Edit SQ -> Properties Tab -> Open User defined Join -> Give Join condition
EMP.DEPTNO=DEPT.DEPTNO. Click Apply -> OK
(More details after 2 pages)
11> Mapping -> Validate
12> Repository -> Save
SQ PROPERTIES TAB
1> SOURCE FILTER:
We can enter a source filter to reduce the number of rows the PowerCenter
Server queries.
Note: When we enter a source filter in the session properties, we override the
customized SQL query in the Source Qualifier transformation.
Steps:
1> In the Mapping Designer, open a Source Qualifier transformation.
2> Select the Properties tab.
3> Click the Open button in the Source Filter field.
4> In the SQL Editor Dialog box, enter the filter. Example: EMP.SAL>2000
5> Click OK.
Validate the mapping. Save it. Now refresh session and save the changes. Now
run the workflow and see output.
2> NUMBER OF SORTED PORTS:
Steps:
1> In the Mapping Designer, open a Source Qualifier transformation.
2> Select the Properties tab.
3> Enter any number instead of zero for Number of Sorted ports.
4> Click Apply -> Click OK.
Validate the mapping. Save it. Now refresh session and save the changes. Now
run the workflow and see output.
3> USER DEFINED JOIN:
• We can specify equi join, left outer join and right outer join only. We
cannot specify full outer join. To use full outer join, we need to write
SQL Query.
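As a sketch of an outer join entered in the User Defined Join field (this uses
PowerCenter's brace syntax for join overrides; treat the exact form as an assumption
to verify against the documentation):
{ EMP LEFT OUTER JOIN DEPT ON EMP.DEPTNO = DEPT.DEPTNO }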
Steps:
1> Open the Source Qualifier transformation, and click the Properties tab.
2> Click the Open button in the User Defined Join field. The SQL Editor Dialog
box appears.
3>Enter the syntax for the join.
Validate the mapping. Save it. Now refresh session and save the changes. Now
run the workflow and see output.
In mapping above, we are passing only SAL and DEPTNO from SQ_EMP to
Aggregator transformation. Default query generated will be:
• SELECT EMP.SAL, EMP.DEPTNO FROM EMP
4. The SQL Editor displays the default query the PowerCenter Server uses to
select source data.
5. Click Cancel to exit.
Note: If we do not cancel the SQL query, the PowerCenter Server overrides
the default query with the custom SQL query.
We can enter an SQL statement supported by our source database. Before entering
the query, connect all the input and output ports we want to use in the mapping.
Example: As we cannot use a full outer join in the user-defined join, we can write an
SQL query for the FULL OUTER JOIN:
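A sketch of such a query (the column order follows the Source Qualifier ports, the
WHERE SAL>2000 filter referred to below is included, and the exact column list is
an assumption):
SELECT DEPT.DEPTNO, DEPT.DNAME, DEPT.LOC,
       EMP.EMPNO, EMP.ENAME, EMP.JOB, EMP.SAL
FROM EMP FULL OUTER JOIN DEPT ON EMP.DEPTNO = DEPT.DEPTNO
WHERE EMP.SAL > 2000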
• We also added WHERE clause. We can enter more conditions and write
more complex SQL.
We can write any query, and we can join as many tables in one query as
required, provided they are all in the same database. This is very handy and is used
in most projects.
Important Points:
• When creating a custom SQL query, the SELECT statement must list the
port names in the order in which they appear in the transformation.
Example: DEPTNO is the top port and DNAME is second in our SQ mapping, so the
SELECT statement must name DEPTNO first, DNAME second, and so on:
SELECT DEPT.DEPTNO, DEPT.DNAME …
• Once we have written a custom query like the one above, this query will
always be used to fetch data from the database. In our example, we used
WHERE SAL>2000; if we now use the Source Filter and give the condition
SAL>1000 (or any other), it will not take effect. Informatica will always use
the custom query only.
• Make sure to test the query in the database before using it in the SQL
Query property. If the query does not run in the database, it will not work in
Informatica either.
• Also, always connect to the database and validate the SQL in the SQL query
editor.
3.16 LOOKUP TRANSFORMATION
• Passive Transformation
• Can be Connected or Unconnected. Dynamic lookup is connected.
• Use a Lookup transformation in a mapping to look up data in a flat file or a
relational table, view, or synonym.
• We can import a lookup definition from any flat file or relational database to
which both the PowerCenter Client and Server can connect.
• We can use multiple Lookup transformations in a mapping.
The PowerCenter Server queries the lookup source based on the lookup ports in the
transformation. It compares Lookup transformation port values to lookup source
column values based on the lookup condition. Pass the result of the lookup to other
transformations and a target.
Relational Lookup:
When we create a Lookup transformation using a relational table as a lookup source,
we can connect to the lookup source using ODBC and import the table definition as
the structure for the Lookup transformation.
• We can override the default SQL statement if we want to add a WHERE clause
or query multiple tables.
• We can use a dynamic lookup cache with relational lookups.
Connected Lookup vs. Unconnected Lookup:
• Connected: Receives input values directly from the pipeline.
Unconnected: Receives input values from the result of a :LKP expression in
another transformation.
• Connected: Cache includes all lookup columns used in the mapping.
Unconnected: Cache includes all lookup/output ports in the lookup condition
and the lookup/return port.
• Connected: If there is no match for the lookup condition, the PowerCenter
Server returns the default value for all output ports.
Unconnected: If there is no match for the lookup condition, the PowerCenter
Server returns NULL.
• Connected: If there is a match for the lookup condition, the PowerCenter
Server returns the result of the lookup condition for all lookup/output ports.
Unconnected: If there is a match for the lookup condition, the PowerCenter
Server returns the result of the lookup condition into the return port.
1. Lookup Source:
We can use a flat file or a relational table for a lookup source. When we create a
Lookup t/f, we can import the lookup source from the following locations:
• Any relational source or target definition in the repository
• Any flat file source or target definition in the repository
• Any table or file that both the PowerCenter Server and Client machine can
connect to
The lookup table can be a single table, or we can join multiple tables in the same
database using a lookup SQL override in Properties Tab.
2. Ports:
4: Condition Tab
We enter the Lookup Condition. The PowerCenter Server uses the lookup condition to
test incoming values. We compare transformation input values with values in the
lookup source or cache, represented by lookup ports.
Tip: If we include more than one lookup condition, place the conditions with an equal
sign first to optimize lookup performance.
Note:
1. With a dynamic cache, only the = operator can be used in the lookup condition.
2. The PowerCenter Server fails the session when it encounters multiple keys for
a Lookup transformation configured to use a dynamic cache.
3.16.3 Connected Lookup Transformation
Example: To create a connected Lookup Transformation
Creating Mapping:
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping-> Create-> Give name. Ex: m_CONN_LOOKUP_EXAMPLE
4. Drag EMP and Target table.
5. Connect all fields from SQ_EMP to target except DNAME and LOC.
6. Transformation-> Create -> Select LOOKUP from list. Give name and click
Create.
7. The Following screen is displayed.
8. As DEPT is the Source definition, click Source and then Select DEPT.
12> We are not passing IN_DEPTNO and DEPTNO to any other transformation
from LOOKUP; we can edit the lookup transformation and remove the
OUTPUT check from them.
13> Mapping -> Validate
14> Repository -> Save
We use Connected Lookup when we need to return more than one column from
Lookup table.
There is no use of Return Port in Connected Lookup.
The Integration Service also creates cache files by default in the $PMCacheDir. If the
data does not fit in the memory cache, the IS stores the overflow values in the cache
files. When session completes, IS releases cache memory and deletes the cache files.
• If we use a flat file lookup, the IS always caches the lookup source.
• We set the Cache type in Lookup Properties.
2. Dynamic Cache
To cache a target table or flat file source and insert new rows or update existing
rows in the cache, use a Lookup transformation with a dynamic cache.
The IS dynamically inserts or updates data in the lookup cache and passes data
to the target.
The target table is also our lookup table; this is not good for performance if the
table is huge.
3. Persistent Cache
If the lookup table does not change between sessions, we can configure the
Lookup transformation to use a persistent lookup cache.
The IS saves and reuses cache files from session to session, eliminating the time
required to read the lookup table.
3.17 UPDATE STRATEGY
Session Configuration:
Edit Session -> Properties -> Treat Source Rows as: (Insert, Update, Delete,
and Data Driven). Insert is the default.
Specifying Operations for Individual Target Tables:
You can set the following update strategy options:
Insert: Select this option to insert a row into a target table.
Delete: Select this option to delete a row from a table.
Update: We have the following options in this situation:
o Update as Update. Update each row flagged for update if it exists in
the target table.
o Update as Insert. Insert each row flagged for update.
o Update else Insert. Update the row if it exists. Otherwise, insert it.
Truncate table: Select this option to truncate the target table before loading
data.
Steps:
1. Create Update Strategy Transformation
2. Pass all ports needed to it.
3. Set the Expression in Properties Tab.
4. Connect to other transformations or target.
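A minimal sketch of the expression referred to in step 3, flagging a row for insert
when a lookup found no match and for update otherwise (the lookup port name is an
assumption):
IIF(ISNULL(PREV_EMPNO), DD_INSERT, DD_UPDATE)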
3.18 DYNAMIC CACHE WORKING
We can use a dynamic cache with a relational lookup or a flat file lookup. For
relational lookups, we might configure the transformation to use a dynamic cache
when the target table is also the lookup table. For flat file lookups, the dynamic
cache represents the data to update in the target table.
When the Integration Service reads a row from the source, it updates the lookup
cache by performing one of the following actions:
• Inserts the row into the cache if row is not in cache.
• Updates the row in the cache if row has changed.
• Makes no change to the cache if there is no change in row.
To use Dynamic Cache, first Edit Lookup Transformation -> Properties Tab ->
• Select Dynamic Cache Option
• Also Select Insert Else Update or Update Else Insert Option
Associated Port:
Associate each lookup port with either an input/output port or a sequence ID. Each
lookup port is associated with a source port so that changes can be compared. We
can also generate a sequence (1, 2, 3, and so on) with it; the Sequence-ID option is
available when the datatype is Integer or Small Integer.
Ignore Null Inputs for Updates
We can set this property for every column. We just need to CHECK the port for
which we want to use this property.
Suppose, in target the COMM of an Employee is 500 but in Source the new COMM is
NULL, and we do not want the NULL to be updated in target. We use the above
property for it.
Ignore In Comparison:
When we do not want to compare any column in source with target, then we can use
this option. Ex: Hiredate will be always same so no need to compare.
In the above:
• The topmost port is NewLookupRow. It is hidden.
• All lookup table ports have PREV_ prefixed to their names.
• ENAME has been associated with PREV_ENAME, and so on for the others.
• The PREV_COMM port has been checked for Ignore Null Inputs for Updates.
• PREV_HIREDATE has been checked for Ignore in Comparison.
Example: Working with Dynamic Cache using Update Strategy.
• EMP will be source table.
• Create a target table DYNAMIC_LOOKUP. Structure same as EMP. Make
EMPNO as Primary Key.
• Create Shortcuts as necessary.
Creating Mapping:
• Create Session and Workflow as usual. First time all rows will be inserted.
• Now Change the data of target table in Oracle and Run workflow again.
You can see how the data is updated as per the properties selected.
• SESSION WILL FAIL FOR THIS. SEE 3.19 LOOKUP QUERY
We pass the data from Lookup Cache and not source to Filter. This is because the
Cache is updated regularly and contains the most updated data.
Example of cache:
Source:
EMPNO Name SAL DEPTNO
9000 Amit Kumar 9000 10
9001 Rahul Singh 9500 20
9002 Sanjay 8000 30
9003 Sumit Singh 7000 20
Initial Cache:
NewLookupRow  EMPNO  Name         SAL   DEPTNO
              9000   Amit Kumar   8000  10
              9001   Rahul Singh  9500  20
Updated Cache:
NewLookupRow  EMPNO  Name         SAL   DEPTNO
2             9000   Amit Kumar   9000  10
0             9001   Rahul Singh  9500  20
1             9002   Sanjay       8000  30
1             9003   Sumit Singh  7000  20
NewLookupRow values: 0 = the Integration Service makes no change to the cache,
1 = it inserts the row into the cache, 2 = it updates the row in the cache.
3.19 LOOKUP QUERY
The workflow for DYNAMIC CACHE will fail. The reason for this is LOOKUP Query.
We can see the default Lookup Query in Properties tab in Lookup Override.
Steps:
1. Edit-> Lookup Transformation-> Properties Tab
2. Lookup SQL Override -> Generate SQL
An incorrect query is generated because the column names have been prefixed with
PREV_ in the Ports tab, while the Dynamic_Lookup table has no column named
PREV_ENAME, and so on. So we need to write the correct query here:
SELECT
Dynamic_Lookup.ENAME as PREV_ENAME,
Dynamic_Lookup.JOB as PREV_JOB,
Dynamic_Lookup.MGR as PREV_MGR,
Dynamic_Lookup.HIREDATE as PREV_HIREDATE,
Dynamic_Lookup.SAL as PREV_SAL,
Dynamic_Lookup.COMM as PREV_COMM,
Dynamic_Lookup.DEPTNO as PREV_DEPTNO,
Dynamic_Lookup.EMPNO as PREV_EMPNO
FROM Dynamic_Lookup
Note the convention here: ENAME is the column name in the database table, but in the
Lookup the port name is PREV_ENAME. So we need to write the query in such a way that
each column name in the table is aliased to the corresponding port name in the Lookup.
• If we do not write AS in the query above, the lookup will not work and error
TE_7001 is displayed. It is mandatory to alias each column with AS in the lookup override.
SELECT:
The SELECT statement includes all the lookup ports in the mapping. You can view the
SELECT statement by generating SQL using the Lookup SQL Override property.
ORDER BY:
The ORDER BY clause orders the columns in the same order they appear in the
Lookup transformation. The Integration Service generates the ORDER BY clause.
• We cannot view this when you generate the default SQL using the
Lookup SQL Override property.
• We can see this after we run workflow and then view Session Log.
• To increase performance, we can suppress the default ORDER BY clause and
enter an override ORDER BY with fewer columns.
• Place two dashes `--' after the ORDER BY override to suppress the
generated ORDER BY clause.
Example: Select A as A, B as B, C as C from ABC ORDER BY A--
First create a Lookup and add all the needed ports manually, or create the lookup on
the table that has the maximum number of columns in the query.
Make sure the ports are named correctly. Say there are 4 columns: A, B, C and D.
Query:
SELECT A as A, B as B, C as C, D as D
FROM (select a, b, c, d from ABC, xyz, dsf, jhk where …)
Inside the brackets, write a query involving any number of tables. Make sure the query
works in the database before using it here, and also make sure the columns are
returned in the same sequence as the lookup ports.
3.20 LOOKUP AND UPDATE STRATEGY EXAMPLES
Example1: To insert if record is not present in target
and Update if record has changed.
• EMP will be source table.
• Create a target table INS_UPD_NO_PK_EXAMPLE. Structure same as EMP.
• Create Shortcuts as necessary.
Steps:
1. Edit target table INS_UPD_NO_PK_EXAMPLE1. This is a copy of the table definition.
2. Properties Tab -> Update Override -> Generate SQL.
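A hedged sketch of what the corrected Target Update Override could look like for an
EMP-like target, matching rows on EMPNO (the column list and the match column are
assumptions; adjust them to the actual target definition). :TU references the ports
of the target instance:
UPDATE INS_UPD_NO_PK_EXAMPLE
SET ENAME = :TU.ENAME, JOB = :TU.JOB, MGR = :TU.MGR, HIREDATE = :TU.HIREDATE,
    SAL = :TU.SAL, COMM = :TU.COMM, DEPTNO = :TU.DEPTNO
WHERE EMPNO = :TU.EMPNO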
3.21 STORED PROCEDURE TRANSFORMATION
Status codes: Status codes provide error handling for the IS during a workflow. The
stored procedure issues a status code that notifies whether or not the stored
procedure completed successfully. We cannot see this value. The IS uses it to
determine whether to continue running the session or stop.
Pre-load of the Source: Before the session retrieves data from the source, the
stored procedure runs. This is useful for verifying the existence of tables or
performing joins of data in a temporary table.
Post-load of the Source: After the session retrieves data from the source, the
stored procedure runs. This is useful for removing temporary tables.
Pre-load of the Target: Before the session sends data to the target, the stored
procedure runs. This is useful for dropping indexes or disabling constraints.
Post-load of the Target: After the session sends data to the target, the stored
procedure runs. This is useful for re-creating indexes on the database.
Using a Stored Procedure in a Mapping
1. Create the stored procedure in the database.
2. Import or create the Stored Procedure transformation.
3. Determine whether to use the transformation as connected or unconnected.
4. If connected, map the appropriate input and output ports.
5. If unconnected, either configure the stored procedure to run pre- or post-
session, or configure it to run from an expression in another transformation.
6. Configure the session.
Stored Procedures:
Connect to Source database and create the stored procedures given below:
CREATE OR REPLACE procedure sp_agg (in_deptno in number, max_sal out number,
min_sal out number, avg_sal out number, sum_sal out number)
As
Begin
select max(Sal),min(sal),avg(sal),sum(sal) into max_sal,min_sal,avg_sal,sum_sal
from emp where deptno=in_deptno group by deptno;
End;
/
Creating Mapping:
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping-> Create-> Give name. Ex: m_SP_CONN_EXAMPLE
4. Drag DEPT and Target table.
5. Transformation -> Import Stored Procedure -> Give Database Connection ->
Connect -> Select the procedure sp_agg from the list.
6. Drag DEPTNO from SQ_DEPT to the stored procedure input port and also to
DEPTNO port of target.
7. Connect the ports from procedure to target as shown below:
Creating Mapping:
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping-> Create-> Give name. Ex: m_sp_unconn_1_value
4. Drag DEPT and Target table.
5. Transformation -> Import Stored Procedure -> Give Database Connection ->
Connect -> Select the procedure sp_unconn_1_value from the list. Click OK.
6. Stored Procedure has been imported.
7. T/F -> Create Expression T/F. Pass DEPTNO from SQ_DEPT to Expression T/F.
8. Edit expression and create an output port OUT_MAX_SAL of decimal datatype.
9. Open Expression editor and call the stored procedure as below:
Click OK and connect the port from expression to target as in mapping below:
10. Mapping -> Validate
11. Repository Save.
PROC_RESULT use:
• If the stored procedure returns a single output parameter or a return value,
we use the reserved variable PROC_RESULT as the output variable.
Example: DEPTNO as Input and MAX Sal as output :
:SP.SP_UNCONN_1_VALUE(DEPTNO,PROC_RESULT)
• If the stored procedure returns multiple output parameters, you must create
variables for each output parameter.
Example: DEPTNO as Input and MAX_SAL, MIN_SAL, AVG_SAL and SUM_SAL
as output then:
1. Create four variable ports in expression VAR_MAX_SAL,
VAR_MIN_SAL, VAR_AVG_SAL and VAR_SUM_SAL.
2. Create four output ports in expression OUT_MAX_SAL,
OUT_MIN_SAL, OUT_AVG_SAL and OUT_SUM_SAL.
3. Call the procedure in the last variable port, say VAR_SUM_SAL.
:SP.SP_AGG (DEPTNO, VAR_MAX_SAL,VAR_MIN_SAL, VAR_AVG_SAL,
PROC_RESULT)
Example 2:
DEPTNO as Input and MAX_SAL, MIN_SAL, AVG_SAL and SUM_SAL as O/P.
Stored Procedure to drop index in Pre Load of Target
Stored Procedure to create index in Post Load of Target
Stored procedures are given below to drop and create index on target.
Make sure to create target table first.
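The procedures themselves are not reproduced above, so here is a minimal PL/SQL sketch
of what they could look like (the index name IDX_SP_TGT_DEPTNO and the target table
name SP_PRE_POST_TARGET are assumptions; use the actual target table and a suitable column):
CREATE OR REPLACE PROCEDURE drop_index
AS
BEGIN
  -- Drop the index before the load (Target Pre Load)
  EXECUTE IMMEDIATE 'DROP INDEX IDX_SP_TGT_DEPTNO';
END;
/
CREATE OR REPLACE PROCEDURE create_index
AS
BEGIN
  -- Re-create the index after the load (Target Post Load)
  EXECUTE IMMEDIATE 'CREATE INDEX IDX_SP_TGT_DEPTNO ON SP_PRE_POST_TARGET (DEPTNO)';
END;
/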
Creating Mapping:
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping-> Create-> Give name. Ex: m_sp_unconn_1_value
4. Drag DEPT and Target table.
5. Transformation -> Import Stored Procedure -> Give Database Connection
-> Connect -> Select the procedure sp_agg from the list. Click OK.
6. Stored Procedure has been imported.
7. T/F -> Create Expression T/F. Pass DEPTNO from SQ_DEPT to Expression
T/F.
8. Edit Expression and create 4 variable ports and 4 output ports as
shown below:
9. Call the procedure in last variable port VAR_SUM_SAL.
10. :SP.SP_AGG (DEPTNO, VAR_MAX_SAL, VAR_MIN_SAL, VAR_AVG_SAL,
PROC_RESULT)
11. Click Apply and Ok.
12. Connect to target table as needed.
13. Transformation -> Import Stored Procedure -> Give Database Connection
for target -> Connect -> Select the procedure CREATE_INDEX and
DROP_INDEX from the list. Click OK.
14. Edit DROP_INDEX -> Properties Tab -> Select Target Pre Load as Stored
Procedure Type and in call text write drop_index. Click Apply -> Ok.
15. Edit CREATE_INDEX -> Properties Tab -> Select Target Post Load as
Stored Procedure Type and in call text write create_index. Click Apply ->
Ok.
3.22 SEQUENCE GENERATOR TRANSFORMATION
NEXTVAL:
Use the NEXTVAL port to generate sequence numbers by connecting it to a
transformation or target.
Sequence in Table 1 will be generated first. When table 1 has been loaded, only then
sequence for table 2 will be generated.
CURRVAL:
CURRVAL is NEXTVAL plus the Increment By value.
• We typically only connect the CURRVAL port when the NEXTVAL port is
already connected to a downstream transformation.
• If we connect the CURRVAL port without connecting the NEXTVAL port,
the Integration Service passes a constant value for each row.
• When we connect the CURRVAL port in a Sequence Generator
transformation, the Integration Service processes one row in each block.
We can optimize performance by connecting only the NEXTVAL port in a
mapping.
Example: To use Sequence Generator transformation
• EMP will be source.
• Create a target EMP_SEQ_GEN_EXAMPLE in shared folder. Structure same as
EMP. Add two more ports NEXT_VALUE and CURR_VALUE to the target table.
• Create shortcuts as needed.
Creating Mapping:
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping-> Create-> Give name. Ex: m_seq_gen_example
4. Drag EMP and Target table.
5. Connect all ports from SQ_EMP to target table.
6. Transformation -> Create -> Select Sequence Generator for list -> Create ->
Done
7. Connect NEXT_VAL and CURR_VAL from Sequence Generator to target.
8. Validate Mapping
9. Repository -> Save
POINTS:
• If the current value is 1, the end value is 10 and the Cycle option is not
selected, and there are 17 records in the source, the session will fail.
• If we connect only CURR_VAL, the value will be the same for all records.
• If the current value is 1, the end value is 10, the Cycle option is selected and
the start value is 0, then for 17 source records the sequence is 1, 2 … 10, then 0, 1, 2, 3 …
• To make the above sequence cycle as 1-10, 1-10, give the Start Value as 1. The
start value is used only along with the Cycle option.
• If the current value is 1, the end value is 10, the Cycle option is selected and
the start value is 1, then for 17 source records the session runs and generates
1-10, 1-7. The value 7 is saved in the repository, so if we run the session again
the sequence starts from 8.
• Use reset option if you want to start sequence from CURR_VAL every time.
3.23 MAPPLETS
• A mapplet is a reusable object that we create in the Mapplet Designer.
• It contains a set of transformations and lets us reuse that transformation logic
in multiple mappings.
• Created in Mapplet Designer in Designer Tool.
Mapplet Input:
Mapplet input can originate from a source definition and/or from an Input
transformation in the mapplet. We can create multiple pipelines in a mapplet.
Mapplet Output:
The output of a mapplet is not connected to any target table.
• We must use Mapplet Output transformation to store mapplet output.
• A mapplet must contain at least one Output transformation with at least one
connected port in the mapplet.
Example1: We will join the EMP and DEPT tables, then calculate the total salary and
give the output to a Mapplet Output transformation.
Steps:
Creating Mapping
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping-> Create-> Give name. Ex: m_mplt_example1
4. Drag mplt_Example1 and target table.
5. Transformation -> Create -> Select Filter for list -> Create -> Done.
6. Drag all ports from mplt_example1 to filter and give filter condition.
7. Connect all ports from filter to target. We can add more transformations
after filter if needed.
8. Validate mapping and Save it.
Steps:
Creating Mapping
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping-> Create-> Give name. Ex: m_mplt_example2
4. Drag DEPT, mplt_Example2 and target table.
5. Pass all ports from DEPT to mplt_Example2 for input ports.
6. Transformation -> Create -> Select Filter for list -> Create -> Done.
7. Drag all ports from mplt_Example2 to the filter and give the filter condition.
8. Connect all ports from filter to target. We can add more transformations
after filter if needed.
9. Validate mapping and Save it.
3.24 NORMALIZER TRANSFORMATION
Creating Mapping
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping-> Create-> Give name. Ex: m_Normalizer_Multiple_Records
4. Drag EMP and Target table.
5. Transformation->Create->Select Expression-> Give name, Click create, done.
6. Pass all ports from SQ_EMP to Expression transformation.
7. Transformation-> Create-> Select Normalizer-> Give name, create & done.
8. Try dragging ports from the Expression to the Normalizer. It is not possible.
9. Edit the Normalizer and go to the Normalizer tab. Add columns equal in number to
the columns in the EMP table, with the same datatypes.
10. The Normalizer does not have a DATETIME datatype, so convert HIREDATE to char in
the Expression transformation: create an output port out_hdate and do the conversion
(see the expression sketch after these steps).
11. Connect ports from Expression to Normalizer.
12. Edit Normalizer and Normalizer Tab. As EMPNO identifies source records
and we want 4 records of every employee, give OCCUR for EMPNO as 4.
13. Click Apply and then OK.
14. Add link as shown in mapping below:
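For step 10 above, a possible conversion expression for the out_hdate output port
(the format mask is an assumption; use whatever format suits the target):
out_hdate: TO_CHAR(HIREDATE, 'MM/DD/YYYY')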
3.25 XML SOURCES IMPORT AND USAGE
Steps:
1. Open Shared Folder -> Tools -> Source Analyzer
2. Sources -> Import XML Definition.
3. Browse for location where XML file is present. To import the definition, we
should have XML file in our local system on which we are working.
4. Select the file and click open.
5. A message 'Option for Override Infinite Length is not set. Do you want to set it?'
is displayed.
6. Click Yes.
7. Check ‘Override all infinite lengths with value’ and give value as 2.
8. Do not modify other options and Click Ok.
9. Click NEXT and then click FINISH
10. Definition has been imported and can be used in mapping as we select other
sources.
SESSION PROPERTIES
• Open the session for mapping where we used XML sources.
• In mapping tab, select the XML source.
• In properties, we do not give relational connection here.
• We give Source File Directory and Source Filename information.
3.26 MAPPING WIZARDS
The Designer provides two mapping wizards to help us create mappings quickly and
easily. Both wizards are designed to create mappings for loading and maintaining
star schemas, a series of dimensions related to a central fact table.
Note: We do not use them in projects and instead make the mappings manually.
3.26.1 GETTING STARTED
1. SIMPLE PASS THROUGH MAPPING
Steps:
1. Open the folder where we want to create the mapping.
2. In the Mapping Designer, click Mappings > Wizards > Getting Started.
3. Enter a mapping name and select Simple Pass Through, and click next.
4. Select a source definition to use in the mapping.
5. Enter a name for the mapping target table and click Finish.
6. To save the mapping, click Repository > Save.
2. SLOWLY GROWING TARGET
• Loads a slowly growing fact or dimension table by inserting new rows.
• Use this mapping to load new data when existing data does not require
updates.
• The Slowly Growing Target mapping filters source rows based on user-defined
comparisons, and then inserts only those found to be new to the target.
Handling Keys: When we use the Slowly Growing Target option, the Designer
creates an additional column in target, PM_PRIMARYKEY. In this column, the
Integration Service generates a primary key for each row written to the target,
incrementing new key values by 1.
Steps:
1. Open the folder where we want to create the mapping.
2. In the Mapping Designer, click Mappings > Wizards > Getting Started.
3. Enter a mapping name and select Slowly Growing Target, and click next.
4. Select a source definition to be used in the mapping.
5. Enter a name for the mapping target table. Click Next.
6. Select the column or columns from the Target Table Fields list that we want
the Integration Service to use to look up data in the target table. Click Add.
These columns are used to compare source and target.
7. Click Finish.
8. To save the mapping, click Repository > Save.
Note: The Fields to Compare for Changes field is disabled for the Slowly
Growing Targets mapping.
Slowly Growing target example
3.26.2 SLOWLY CHANGING DIMENSIONS
1. SCD TYPE 1 DIMENSION MAPPING
Handling Keys: When we use the SCD Type 1 option, the Designer creates an additional
column in the target, PM_PRIMARYKEY. Its value is incremented by 1 for each new row
written to the target.
Steps:
1. Open the folder where we want to create the mapping.
2. In the Mapping Designer, click Mappings > Wizards > Slowly Changing
Dimension.
3. Enter a mapping name and select Type 1 Dimension, and click Next.
4. Select a source definition to be used by the mapping.
5. Enter a name for the mapping target table. Click Next.
6. Select the column or columns we want to use as a lookup condition from the
Target Table Fields list and click add.
7. Select the column or columns we want the Integration Service to compare for
changes, and click add.
8. Click Finish.
9. To save the mapping, click Repository > Save.
Configuring Session: In the session properties, click the Target Properties settings
on the Mappings tab. To ensure the Integration Service loads rows to the target
properly, select Insert and Update as Update for each relational target.
Note: In the Type 1 Dimension mapping, the Designer uses two instances of the
same target definition to enable inserting and updating data in the same target
table. Generate only one target table in the target database.
2. SCD TYPE 2 DIMENSION/VERSION DATA MAPPING
• The Type 2 Dimension/Version Data mapping filters source rows based on
user-defined comparisons and inserts both new and changed records into the
target.
• Changes are tracked in the target table by versioning the primary key and
creating a version number for each record in the table.
• In the Type 2 Dimension/Version Data target, the latest record has the
highest version number and the highest incremented primary key.
When we use this option, the Designer creates two additional fields in the target:
1. PM_PRIMARYKEY: The Integration Service generates a primary key for
each row written to the target.
2. PM_VERSION_NUMBER: The IS generates a version number for each row
written to the target.
Steps:
1. Follow Steps 1-7 as we did in SCD Type1, except Select Type 2 Dimension in
Step 3.
2. Click Next. Select Keep the `Version' Number in Separate Column.
3. Click Finish.
4. To save the mapping, click Repository > Save.
Note: Designer uses two instances of the same target definition to enable the two
separate data flows to write to the same target table. Generate only one target table
in the target database.
Configuring Session: In the session properties, click the Target Properties settings
on the Mappings tab. To ensure the Integration Service loads rows to the target
properly, select Insert for each relational target.
3. SCD TYPE 2 DIMENSION/FLAG CURRENT MAPPING
When we use this option, the Designer creates two additional fields in the target:
1. PM_PRIMARYKEY: The Integration Service generates a primary key for
each row written to the target.
2. PM_CURRENT_FLAG: The Integration Service flags the current row "1" and
all previous versions "0".
Steps:
1. Follow Steps 1-7 as we did in SCD Type1, except Select Type 2 Dimension in
Step 3.
2. Click Next. Select Mark the `Current' Dimension Record with a Flag.
3. Click Finish.
4. To save the mapping, click Repository > Save.
Note: In the Type 2 Dimension/Flag Current mapping, the Designer uses three
instances of the same target definition to enable the three separate data flows to
write to the same target table. Generate only one target table in the target database.
Configuring Session: In the session properties, click the Target Properties settings
on the Mappings tab. To ensure the Integration Service loads rows to the target
properly, select Insert and Update as Update for each relational target.
4. SCD TYPE 2 DIMENSION/EFFECTIVE DATE RANGE MAPPING
When we use this option, the Designer creates 3 additional fields in the target:
1. PM_PRIMARYKEY: The Integration Service generates a primary key for
each row written to the target.
2. PM_BEGIN_DATE: For each new and changed record, it is populated with
SYSDATE. This Sysdate is the date on which ETL process runs.
3. PM_END_DATE: It is populated as NULL when record is inserted. A new
record is inserted when a record changes. However, PM_END_DATE of
changed record is updated with SYSDATE.
Steps:
1. Follow Steps 1-7 as we did in SCD Type1, except Select Type 2 Dimension in
Step 3.
2. Click Next. Select Mark the Dimension Records with their Effective Date
Range.
3. Click Finish.
4. To save the mapping, click Repository > Save.
Flow1: A new record is inserted into the target table with PM_BEGIN_DATE as SYSDATE.
Flow2: A changed record is inserted into the target with PM_BEGIN_DATE as SYSDATE.
Flow3: The PM_END_DATE of the changed record is updated in the target table.
5. SCD TYPE 3 DIMENSION MAPPING
• Inserts new records.
• Updates changed values in existing records. When updating an existing
dimension, the Integration Service saves existing data in different columns of
the same row and replaces the existing data with the updates.
• Optionally uses the load date to track changes.
• It maintains partial history. Only one previous value is changed.
When we use this option, the Designer creates the following additional fields in the target:
1. PM_PRIMARYKEY: The Integration Service generates a primary key for
each row written to the target.
2. PM_PREV_ColumnName: The Designer generates a previous column
corresponding to each column for which we want historical data. The IS keeps
the previous version of record data in these columns.
3. PM_EFFECT_DATE: An optional field. The IS uses the system date to
indicate when it creates or updates a dimension.
Steps:
1. Follow Steps 1-7 as we did in SCD Type1, except Select Type 3 Dimension in
Step 3.
2. Click Next. Select Effective Date if desired.
3. Click Finish.
4. To save the mapping, click Repository > Save.
3.27 MAPPING PARAMETERS AND VARIABLES
MAPPING PARAMETERS
• A mapping parameter represents a constant value that we can define before
running a session.
• A mapping parameter retains the same value throughout the entire session.
Example: When we want to extract records of a particular month during the ETL process,
we can create a mapping parameter of a suitable datatype and use it in the Source
Qualifier SQL override to compare against the timestamp field.
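For instance, a Source Qualifier SQL override could use a parameter such as
$$Load_Month (the parameter name, the format mask and the EMP source are assumptions;
the Integration Service expands the parameter textually before running the query):
SELECT * FROM EMP
WHERE TO_CHAR(HIREDATE, 'YYYY-MM') = '$$Load_Month'
In the parameter file, $$Load_Month would then be set to a value such as 2004-08.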
MAPPING VARIABLES
• Unlike mapping parameters, mapping variables are values that can change
between sessions.
• The Integration Service saves the latest value of a mapping variable to the
repository at the end of each successful session.
• We can override a saved value with the parameter file.
• We can also clear all saved values for the session in the Workflow Manager.
We might use a mapping variable to perform an incremental read of the source. For
example, we have a source table containing timestamped transactions and we want
to evaluate the transactions on a daily basis. Instead of manually entering a session
override to filter source data each time we run the session, we can create a mapping
variable, $$IncludeDateTime. In the source qualifier, create a filter to read only rows
whose transaction date equals $$IncludeDateTime, such as:
TIMESTAMP = $$IncludeDateTime
In the mapping, use a variable function to set the variable value to increment one
day each time the session runs. If we set the initial value of $$IncludeDateTime to
8/1/2004, the first time the Integration Service runs the session, it reads only rows
dated 8/1/2004. During the session, the Integration Service sets $$IncludeDateTime
to 8/2/2004. It saves 8/2/2004 to the repository at the end of the session. The next
time it runs the session, it reads only rows from August 2, 2004.
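A possible variable function for this, placed in the expression of an output or
variable port in an Expression transformation (a sketch; the exact port placement is
an assumption):
SETVARIABLE($$IncludeDateTime, ADD_TO_DATE($$IncludeDateTime, 'DD', 1))
Because an expression reference to the variable returns its start value, this sets
the current value to the start value plus one day, which is then saved to the
repository when the session succeeds.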
When the Integration Service needs an initial value, and we did not declare an initial
value for the parameter or variable, the Integration Service uses a default value
based on the datatype of the parameter or variable.
Start Value:
The start value is the value of the variable at the start of the session. The
Integration Service looks for the start value in the following order:
1. Value in parameter file
2. Value saved in the repository
3. Initial value
4. Default value
Current Value:
The current value is the value of the variable as the session progresses. When a
session starts, the current value of a variable is the same as the start value. The
final current value for a variable is saved to the repository at the end of a successful
session. When a session fails to complete, the Integration Service does not update
the value of the variable in the repository.
Note: If a variable function is not used to calculate the current value of a mapping
variable, the start value of the variable is saved to the repository.
SetVariable: Sets the variable to the configured value. At the end of a session, it
compares the final current value of the variable to the start value of the variable.
Based on the aggregate type of the variable, it saves a final value to the repository.
12.
13. Create 5 output ports out_ TOTAL_SAL, out_MAX_VAR, out_MIN_VAR,
out_COUNT_VAR and out_SET_VAR.
14. Open the expression editor for TOTAL_SAL. Do the same as we did earlier for
SAL + COMM. To add $$BONUS to it, select the Variables tab and pick the parameter
from the mapping parameters: SAL + COMM + $$Bonus
15. Open Expression editor for out_max_var.
16. Select the variable function SETMAXVARIABLE from left side pane. Select
$$var_max from variable tab and SAL from ports tab as shown below.
SETMAXVARIABLE($$var_max,SAL)
17. Open Expression editor for out_min_var and write the following expression:
SETMINVARIABLE($$var_min,SAL). Validate the expression.
18. Open Expression editor for out_count_var and write the following expression:
SETCOUNTVARIABLE($$var_count). Validate the expression.
19. Open Expression editor for out_set_var and write the following expression:
SETVARIABLE($$var_set,ADD_TO_DATE(HIREDATE,'MM',1)). Validate.
20. Click OK. Expression Transformation below:
21. Link all ports from expression to target and Validate Mapping and Save it.
22. See mapping picture on next page.
3.28 PARAMETER FILE
In the parameter file, folder and session names are case sensitive.
Create a text file in notepad with name Para_File.txt
[Practice.ST:s_m_MP_MV_Example]
$$Bonus=1000
$$var_max=500
$$var_min=1200
$$var_count=0
3.29 INDIRECT FLAT FILE LOADING
Scenario: Load 10 flat files, all with the same structure, into a single target.
Solution1:
1. Import one flat file definition and make the mapping as per need.
2. Now in session give the Source File name and Source File Directory location of
one file.
3. Make workflow and run.
4. Now open session after workflow completes. Change the Filename and
Directory to give information of second file. Run workflow again.
5. Do the above for all 10 files.
Solution2:
1. Import one flat file definition and make the mapping as per need.
2. Now in session give the Source Directory location of the files.
3. Now in the Source Filename field use $InputFileName. This is a session parameter.
See 4.2.4 for session parameters.
4. Now make a parameter file and give the value of $InputFileName.
$InputFileName=EMP1.txt
5. Run the workflow
6. Now edit parameter file and give value of second file. Run workflow again.
7. Do same for remaining files.
Solution3:
1. Import one flat file definition and make the mapping as per need.
2. Now make a notepad file that contains the location and name of each 10 flat
files.
Sample:
D:\EMP1.txt
E:\EMP2.txt
E:\FILES\DWH\EMP3.txt and so on
3. Now make a session and in Source file name and Source File Directory
location fields, give the name and location of above created file.
4. In Source filetype field, select Indirect.
5. Click Apply.
6. Validate Session
7. Make Workflow. Save it to repository and run.
CHAPTER 4: WORKFLOW MANAGER
4.1 INTEGRATION SERVICE ARCHITECTURE
• The Integration Service moves data from sources to targets based on
workflow and mapping metadata stored in a repository.
• When a workflow starts, the Integration Service retrieves mapping, workflow,
and session metadata from the repository. It extracts data from the mapping
sources and stores the data in memory while it applies the transformation
rules configured in the mapping.
• The Integration Service loads the transformed data into one or more targets.
To move data from sources to targets, the Integration Service uses the following
components:
• Integration Service process
• Load Balancer
• Data Transformation Manager (DTM) process
The Load Balancer dispatches tasks in the order it receives them. When the Load
Balancer needs to dispatch more Session and Command tasks than the Integration
Service can run, it places the tasks it cannot run in a queue. When nodes become
available, the Load Balancer dispatches tasks from the queue in the order
determined by the workflow service level.
The Integration Service can move data in either ASCII or Unicode data movement
mode. These modes determine how the Integration Service handles character data.
We choose the data movement mode in the Integration Service configuration
settings. If we want to move multibyte data, choose Unicode data movement mode.
ASCII Data Movement Mode: In ASCII mode, the Integration Service recognizes
7-bit ASCII and EBCDIC characters and stores each character in a single byte.
Unicode Data Movement Mode: Use Unicode data movement mode when sources
or targets use 8-bit or multibyte character sets and contain character data.
Session Details: When we run a session, the Workflow Manager creates session
details that provide load statistics for each target in the mapping. We can monitor
session details during the session or after the session completes. Session details
include information such as table name, number of rows written or rejected, and
read and write throughput.
Control File: When we run a session that uses an external loader, the Integration
Service process creates a control file and a target flat file. The control file contains
information about the target flat file such as data format and loading instructions for
the external loader. The control file has an extension of .ctl. We can view the control
file and the target flat file in the target file directory.
Output File: If the session writes to a target file, the Integration Service process
creates the target file based on a file target definition.
Cache Files: When the Integration Service process creates a memory cache, it also
creates cache files. The Integration Service process creates cache files for the
Joiner, Rank, Lookup, Aggregator and Sorter transformations, and for XML targets.
4.2 WORKING WITH WORKFLOWS
A workflow is a set of instructions that tells the Integration Service how to run tasks
such as sessions, email notifications, and shell commands.
Valid Workflow:
Example of loop:
Steps:
1. In the Workflow Designer workspace, double-click the link you want to
specify.
2. The Expression Editor appears.
3. In the Expression Editor, enter the link condition. The Expression Editor
provides predefined workflow variables, user-defined workflow variables,
variable functions, and Boolean and arithmetic operators.
4. Validate the expression using the Validate button.
The Workflow Manager provides an Expression Editor for any expressions in the
workflow. We can enter expressions using the Expression Editor for the following:
• Link conditions
• Decision task
• Assignment task
4.2.3 WORKFLOW VARIABLES
We can create and use variables in a workflow to reference values and record
information.
Types of workflow variables:
• Predefined workflow variables
• User-defined workflow variables
System variables:
Use the SYSDATE and WORKFLOWSTARTTIME system variables within a workflow.
Task-specific variables:
The Workflow Manager provides a set of task-specific variables for each task in the
workflow. The Workflow Manager lists task-specific variables under the task name in
the Expression Editor.
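For example, a link condition could combine task-specific variables of an upstream
session (the session name is a placeholder):
$s_m_filter_example.Status = SUCCEEDED AND $s_m_filter_example.TgtSuccessRows > 0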
Integration Service holds two different values for a workflow variable during a
workflow run:
• Start value of a workflow variable
• Current value of a workflow variable
The Integration Service looks for the start value of a variable in the following order:
1. Value in parameter file
2. Value saved in the repository (if the variable is persistent)
3. User-specified default value
4. Datatype default value
6. Enter the default value for the variable in the Default field.
7. To validate the default value of the new workflow variable, click the Validate
button.
8. Click Apply to save the new workflow variable.
9. Click OK to close the workflow properties.
4.2.4 SESSION PARAMETERS
• Session parameters represent values we might want to change between
sessions, such as a database connection or source file.
• Use session parameters in the session properties, and then define the
parameters in a parameter file.
• The Workflow Manager provides one built-in session parameter,
$PMSessionLogFile. Used to change the name of log file.
• Source file, target file, lookup file, reject file parameters are used for Flat
Files.
Similarly give the parameter for target and reject file in Target properties.
For Lookup file parameter, select Lookup file in Transformations node and
give the parameter there for Lookup file name.
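A hedged parameter file sketch that supplies values for user-defined session
parameters (the folder, workflow, session, connection and file names are placeholders;
user-defined names must start with the standard prefixes such as $DBConnection and $InputFile):
[Practice.WF:wf_sample.ST:s_m_Filter_Example]
$DBConnection_Source=ORACLE_SOURCE
$InputFile_Emp=D:\FILES\EMP1.txt
$PMSessionLogFile=D:\LOGS\s_m_Filter_Example.log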
4.3 WORKING WITH TASKS
The Workflow Manager contains many types of tasks to help you build workflows and
worklets. We can create reusable tasks in the Task Developer.
Types of tasks: Session, Email, Command, Event-Raise, Event-Wait, Timer, Decision,
Assignment and Control.
Steps:
1. In the Task Developer or Workflow Designer, choose Tasks-Create.
2. Select an Email task and enter a name for the task. Click Create.
3. Click Done.
4. Double-click the Email task in the workspace. The Edit Tasks dialog box
appears.
5. Click the Properties tab.
6. Enter the fully qualified email address of the mail recipient in the Email User
Name field.
7. Enter the subject of the email in the Email Subject field. Or, you can leave
this field blank.
8. Click the Open button in the Email Text field to open the Email Editor.
9. Click OK twice to save your changes.
Example: To send an email when a session completes:
Steps:
1. Create a workflow wf_sample_email
2. Drag any session task to workspace.
3. Edit Session task and go to Components tab.
4. See On Success Email Option there and configure it.
5. In Type select reusable or Non-reusable.
6. In Value, select the email task to be used.
7. Click Apply -> Ok.
8. Validate workflow and Repository -> Save
• We can also drag the email task and use as per need.
• We can set the option to send email on success or failure in
components tab of a session task.
1. Create a task using the above steps to copy a file in Task Developer.
2. Open Workflow Designer. Workflow -> Create -> Give name and click ok.
3. Start is displayed. Drag session say s_m_Filter_example and command task.
4. Link Start to Session task and Session to Command Task.
5. Double click the link between the Session and the Command task and give the
condition in the editor:
$S_M_FILTER_EXAMPLE.Status = SUCCEEDED
6. Workflow -> Validate
7. Repository -> Save
4.3.4 WORKING WITH EVENT TASKS
We can define events in the workflow to specify the sequence of task execution.
Types of Events:
• Pre-defined event: A pre-defined event is a file-watch event. This event
waits for a specified file to arrive at a given location.
• User-defined event: A user-defined event is a sequence of tasks in the
workflow. We create events and then raise them as per need.
Example1: Use an event wait task and make sure that session s_filter_example
runs when abc.txt file is present in D:\FILES folder.
4.3.5 TIMER TASK
Example: Run session s_m_filter_example relative to 1 min after the timer task.
4.3.6 DECISION TASK
Steps:
1. Workflow -> Create -> Give name wf_decision_task_example -> Click OK.
2. Drag s_m_filter_example and S_M_TOTAL_SAL_EXAMPLE to workspace and
link both of them to START task.
3. Click Tasks -> Create -> Select DECISION from list. Give name
DECISION_Example. Click Create and then done. Link DECISION_Example to
both s_m_filter_example and S_M_TOTAL_SAL_EXAMPLE.
4. Right click DECISION_Example-> EDIT -> GENERAL tab.
5. Set ‘Treat Input Links As’ to OR. Default is AND. Apply and click OK.
6. Now edit decision task again and go to PROPERTIES Tab. Open the Expression
editor by clicking the VALUE section of Decision Name attribute and enter the
following condition: $S_M_FILTER_EXAMPLE.Status = SUCCEEDED OR
$S_M_TOTAL_SAL_EXAMPLE.Status = SUCCEEDED
7. Validate the condition -> Click Apply -> OK.
8. Drag command task and S_m_sample_mapping_EMP task to workspace and
link them to DECISION_Example task.
9. Double click link between S_m_sample_mapping_EMP & DECISION_Example
& give the condition: $DECISION_Example.Condition = 0. Validate & click OK.
10. Double click link between Command task and DECISION_Example and give
the condition: $DECISION_Example.Condition = 1. Validate and click OK.
11. Workflow Validate and repository Save.
12. Run workflow and see the result.
4.3.7 CONTROL TASK
• We can use the Control task to stop, abort, or fail the top-level workflow or
the parent workflow based on an input link condition.
• A parent workflow or worklet is the workflow or worklet that contains the
Control task.
• We give the condition to the link connected to Control Task.
Example: Drag any 3 sessions; if any one of them fails, abort the top-level workflow.
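A hedged sketch of the wiring (session names are placeholders): link each of the
three sessions to the Control task, set Treat Input Links As to OR on the Control
task, choose Abort Top-Level Workflow as the control option, and give each incoming
link a condition such as:
$s_session1.Status = FAILED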
We can use user-defined variables in our link conditions as needed, and also
calculate or set the value of a variable in an Assignment task.
4.4 SCHEDULERS
We can schedule a workflow to run continuously, repeat at a given time or interval,
or we can manually start a workflow. The Integration Service runs a scheduled
workflow as configured.
By default, the workflow runs on demand. We can change the schedule settings
by editing the scheduler. If we change schedule settings, the Integration Service
reschedules the workflow according to the new settings.
Steps:
1. Open the folder where we want to create the scheduler.
2. In the Workflow Designer, click Workflows > Schedulers.
3. Click Add to add a new scheduler.
4. In the General tab, enter a name for the scheduler.
5. Configure the scheduler settings in the Scheduler tab.
6. Click Apply and OK.
2. Run Continuously:
Integration Service runs the workflow as soon as the service initializes. The
Integration Service then starts the next run of the workflow as soon as it finishes
the previous run.
4. Click the right side of the Scheduler field to edit scheduling settings for the
non-reusable scheduler.
Some Points:
• To remove a workflow from its schedule, right-click the workflow in the
Navigator window and choose Unschedule Workflow.
• To reschedule a workflow on its original schedule, right-click the workflow in
the Navigator window and choose Schedule Workflow.
4.5 WORKLETS
• A worklet is an object that represents a set of tasks that we create in the
Worklet Designer.
• Create a worklet when we want to reuse a set of workflow logic in more than
one workflow.
• To run a worklet, include the worklet in a workflow.
• Worklet is created in the same way as we create Workflows. Tasks are also
added in the same way as we do in workflows. We can link tasks and give link
conditions in same way.
Some Points:
• We cannot run two instances of the same worklet concurrently in the same
workflow.
• We cannot run two instances of the same worklet concurrently across two
different workflows.
• Each worklet instance in the workflow can run once.
4.6 PARTITIONING
• A pipeline consists of a source qualifier and all the transformations and
targets that receive data from that source qualifier.
• When the Integration Service runs the session, it can achieve higher
performance by partitioning the pipeline and performing the extract,
transformation, and load for each partition in parallel.
1. Partition Points
• A partition point marks a boundary where the Integration Service can redistribute
data across partitions; by default, partition points are created at several
transformations in the pipeline.
2. Number of Partitions
• We can define up to 64 partitions at any partition point in a pipeline.
• When we increase or decrease the number of partitions at any partition point,
the Workflow Manager increases or decreases the number of partitions at all
partition points in the pipeline.
• Increasing the number of partitions or partition points increases the number
of threads.
• The number of partitions we create equals the number of connections to the
source or target. For one partition, one database connection will be used.
3. Partition types
• The Integration Service creates a default partition type at each partition
point.
• If we have the Partitioning option, we can change the partition type. This
option is purchased separately.
• The partition type controls how the Integration Service distributes data
among partitions at partition points.
4.6.2 PARTITIONING TYPES
1. Round Robin Partition Type
• In round-robin partitioning, the Integration Service distributes rows of data
evenly to all partitions.
• Each partition processes approximately the same number of rows.
• Use round-robin partitioning when we need to distribute rows evenly and do
not need to group data among partitions.
Note: If we partition a session with a flat file target, the Integration Service
creates one target file for each partition. We can configure session properties to
merge these target files into one.
4.7 SESSION PROPERTIES
1. GENERAL TAB
By default, the General tab appears when we edit a session task.
General Tab has following options:
• Rename: Optional and can be used to rename a session.
• Description: Optional and provides a description for session.
• Mapping name: Required and represents name of the mapping associated
with the session task.
• Fail Parent if this task fails: Optional and Fails the parent worklet or
workflow if this task fails.
• Fail parent if this task does not run: Optional and Fails the parent worklet
or workflow if this task does not run.
• Disable this task: Optional and disables the task.
• Treat the input links as AND or OR: Required and runs the task when all or one of
the input link conditions evaluate to True.
2. PROPERTIES TAB
The Properties tab has the following options:
• Write Backward Compatible Session Log File (Optional): Select to write the session log to a file.
• Session Log File Name (Optional): Name of the session log file.
• Session Log File Directory (Required): Location where the session log is created.
• Parameter File Name (Optional): Name and location of the parameter file.
• Enable Test Load (Optional): To test a mapping.
• Number of Rows to Test (Optional): Number of rows of source data to test.
• $Source Connection Value (Optional): Enter the database connection we want to use for the $Source variable.
• $Target Connection Value (Optional): Enter the database connection we want to use for the $Target variable.
• Treat Source Rows As (Required): Indicates how the IS treats all source rows. Can be Insert, Update, Delete or Data Driven.
• Commit Type (Required): Determines whether the Integration Service uses a source-based, target-based or user-defined commit.
• Commit Interval (Required): Indicates the number of rows after which a commit is issued.
• Recovery Strategy (Required): See below.
The Integration Service recovers tasks in the workflow based on the recovery
strategy of the task.
• By default, the recovery strategy for Session and Command tasks is to fail the
task and continue running the workflow.
• We can configure the recovery strategy for Session and Command tasks.
• The strategy for all other tasks is to restart the task.
Recovery Options
• Suspend Workflow on Error: Available in Workflow
• Suspension Email: Available in Workflow
• Enable HA Recovery: Available in Workflow
• Automatically Recover Terminated Tasks: Available in Workflow
• Maximum Automatic Recovery Attempts: Available in Workflow
• Recovery Strategy: Available in Session and Command
• Fail Task If Any Command Fails: Available in Command
3. CONFIG OBJECT TAB
We can configure the following settings in the Config Object tab:
• Advanced. Advanced settings allow you to configure constraint-based
loading, lookup caches, and buffer sizes.
• Log Options. Log options allow you to configure how you want to save the
session log.
• Error Handling. Error Handling settings allow us to determine if the session
Stops or continues when it encounters pre-session command errors, stored
procedure errors, or a specified number of session errors.
• Partitioning Options. Partitioning options allow the Integration Service to
determine the number of partitions to create at run time.
Constraint-based loading:
Enable this when the same mapping contains two target tables that are related to each
other by a primary key and foreign key (PK-FK) relationship: one is the master
(parent) table and the other is the child table. When enabled, the Integration
Service writes to the master table first and then to the child table, maintaining
referential integrity.
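As an illustration of the required relationship (table, column and constraint names
are assumptions only), the two targets could be defined as:
CREATE TABLE DEPT_TGT (
  DEPTNO NUMBER PRIMARY KEY,
  DNAME  VARCHAR2(20)
);
CREATE TABLE EMP_TGT (
  EMPNO  NUMBER,
  ENAME  VARCHAR2(20),
  DEPTNO NUMBER REFERENCES DEPT_TGT (DEPTNO)
);
With constraint-based loading enabled, rows are written to DEPT_TGT (master) before EMP_TGT (child).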
Dynamic Partitioning:
We can configure dynamic partitioning using one of the following methods:
• Disabled
• Based on number of partitions
• Based on number of nodes in grid
• Based on source partitioning
6. COMPONENTS TAB
In the Components tab, we can configure the following:
• Pre-Session Command
• Post-Session Success Command
• Post-Session Failure Command
• On Success Email
• On Failure Email
4.8 WORKFLOW PROPERTIES
1. GENERAL TAB
The General tab has the following options:
• Name (Required): Name of the workflow.
• Comments (Optional): Comment that describes the workflow.
• Integration Service (Required): Integration Service that runs the workflow by default.
• Suspension Email (Optional): Email message that the Integration Service sends when a task fails and the Integration Service suspends the workflow.
• Disabled (Optional): Disables the workflow from the schedule.
• Suspend on Error (Optional): The Integration Service suspends the workflow when a task in the workflow fails.
2. PROPERTIES TAB
Properties tab has the following options:
• Parameter File Name
• Write Backward Compatible Workflow Log File: Select to write workflow
log to a file. It is Optional.
• Workflow Log File Name
• Workflow Log File Directory
• Save Workflow Log By: Required. Options are By Run and By Timestamp.
• Save Workflow Log For These Runs: Required. How many logs need to be saved for a workflow.
• Enable HA Recovery: Not required.
• Automatically recover terminated tasks: Not required.
• Maximum automatic recovery attempts: Not required.
3. SCHEDULER TAB
The Scheduler Tab lets us schedule a workflow to run continuously, run at a
given interval, or manually start a workflow.
4. VARIABLE TAB
It is used to declare User defined workflow variables.
5. EVENTS TAB
Before using the Event-Raise task, declare a user-defined event on the Events
tab.