
SSIS

(SQL SERVER INTEGRATION SERVICES)


SSIS stands for SQL SERVER INTEGRATION SERVICES. This tool is used to load
the required data from different sources into destinations by applying
different transformations; this entire process is called ETL (EXTRACTION,
TRANSFORMATION, LOADING).

SSIS is a combination of four tabs:


Control flow
Data flow
Event handler
Package explorer

CONTROL FLOW: The execution flow of the package always starts at the
control flow level; after that it moves to the DATA FLOW.

The control flow is a combination of different types of
CONTAINERS and TASKS.

DATA FLOW: Data flow is purely used for data loading purposes; whenever
we want to load the data from a source to a destination we
use data flows.

It is a combination of different types of SOURCES,
TRANSFORMATIONS and DESTINATIONS.

EVENT HANDLER: Whenever an event occurs at the package level or task
level, the event handler is raised automatically and
performs the operation which we configured in it.

PACKAGE EXPLORER: The package explorer gives you the package information
like TASKS, CONTAINERS, VARIABLES, etc.
HOW TO CONNECT SSIS IN 2008

GOTO START MENU

SELECT ALL PROGRAMS

SELECT AND EXPAND SQL SERVER 2008 AND SELECT SQL SERVER BUSINESS
INTELLIGENCE DEVELOPMENT STUDIO

GOTO FILE MENU

CLICK ON NEW

CLICK ON PROJECT

SELECT INTEGRATION SERVICES PROJECT IN THE LIST

PROVIDE THE REQUIRED PROJECT NAME, AND BROWSE THE LOCATION
WHERE YOU WANT TO CREATE YOUR PACKAGE

CLICK ON OK

RIGHT CLICK ON SSIS PACKAGES IN THE SOLUTION EXPLORER

CLICK ON NEW SSIS PACKAGE

DESIGN THE PACKAGE AS WE REQUIRED


HOW TO CONNECT SSIS IN 2012
GO TO ALL PROGRAMS
SELECT SQL SERVER DATA TOOLS
CLICK ON NEW PROJECT OPTION
EXPAND BUSINESS INTELLIGENCE ON THE LEFT SIDE
SELECT INTEGRATION SERVICES IN IT
SELECT INTEGRATION SERVICES PROJECT IN THE LIST
PROVIDE THE PROJECT NAME
BROWSE THE LOCATION WHERE YOU WANT TO CREATE THE PACKAGE
CLICK ON OK
THEN THE SSIS PROJECT WILL BE CREATED AUTOMATICALLY

DATA FLOW SOURCES: We have different kinds of sources to extract the data
from the corresponding file or database and load it into different kinds of
destinations.
We have the following sources available at the SSIS level.
1) FLAT FILE SOURCE
2) EXCEL SOURCE
3) RAW FILE SOURCE
4) XML SOURCE
5) ADO.NET SOURCE
6) OLEDB SOURCE

1) FLAT FILE SOURCE: Flat file source is used to load the data from TEXT
and CSV files.

NAVIGATION:

Create one sample file in text document like


CUSTID, CUSTNAME, CUSTBAL
100, A, 45000
200, B, 34000
300, C, 23000
400, D, 46000

Go to Data flow
Drag flat file source
Right click, edit
Click on new to create a new connection manager for the created file.
Provide the connection manager name which you require.
Browse the file from the location in file name.
Enable the check box column names in the first data row.
Go to columns and observe the row delimiter and column delimiter
Click on ok
Now go to columns and observe whether the output columns are coming properly or not.
Click on ok

Drag OLEDB destination


Right click on OLEDB destination
Click on new to create a new connection manager
Again click on new
Provide the server name
Select the database from the drop down list
Click on ok
Click on ok
If you have not created the table at the database level
Then click on new to create a new table at the SSIS level itself
It will automatically generate a create script
Just change the table name in the script and click on ok
Click on mapping to see the input columns and destination columns
Ok
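For reference, the generated create script usually looks like the following (a sketch assuming the sample customer file above; the varchar(50) lengths are the designer defaults and CUSTOMERS is an illustrative table name):

CREATE TABLE [CUSTOMERS] (
    [CUSTID] varchar(50),
    [CUSTNAME] varchar(50),
    [CUSTBAL] varchar(50)
)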

HOW TO HANDLE A NEW DELIMITER WHICH IS NOT AVAILABLE IN SSIS

Open the source file and copy the delimiter which is present in the source
Go to the SSIS level
Go to column delimiter
Paste the copied item
Click on refresh
Then it will automatically split the columns based on the delimiter

HOW TO SKIP HEADER ROWS IN THE FLAT FILE

Go to the connection manager
Set the header rows to skip option as you require.

2) EXCEL SOURCE: Excel source component is used to load the data from
excel file into destinations.
Navigation:

Go to data flow
Drag excel source
Right click edit
Click on new to create a new connection for excel
Browse the excel file from the location
Click on ok
Select the sheet name from drop down list
Go to columns tab in the left side
Click on ok
Drag OLEDB destination and configure it

HOW TO SET 64 BIT MODE TO OFF

Right click on the project in the solution explorer

Go to properties
Go to debugging options
Set Run64BitRuntime as false
Click on ok

HOW TO LOAD SELECTED RANGE OF VALUES FROM EXCEL SOURCES

Select the excel source component

Press F4
Go to the properties window
Go to the OpenRowset option
Provide the starting and ending range of the excel cell values.
Example: D8:F11

3) RAW FILE SOURCE: We can load RAW DATA by using the raw file source and
load it into the destination.

HOW TO GENERATE A SIMPLE RAW FILE

Prepare one flat file and configure it with flat file source
Drag RAW file destination
And configure it.

Navigation:
Drag RAW file source
Right click and edit
Browse the file from the location where it is located
Go to columns
Click on ok
Drag OLEDB destination and configure it

4) ADO.NET SOURCE: ADO.NET source is used to load the data from
databases; this component runs based on .NET providers

Navigation:

Drag ADO.NET source


Right click and edit
Provide the database connection for source
Select the source table from drop down list
Click on ok
Click on ok
Drag OLEDB destination and configure it

5) OLEDB SOURCE:

Drag OLEDB source


Right click and edit
Provide the database connection for source
Select the source table from drop down list
Click on column
Click on ok
Drag OLEDB destination and configure it.

6) XML SOURCE: XML source will extract the data from an XML file
into the database. To load an XML file we need:

.XML

.XSD

.XML: XML is eXtensible Markup Language, which contains the source
data inside of it.
.XSD: XSD is an XML Schema Definition file, which stores the metadata
of the XML file.

HOW TO LOAD XML:

SAMPLE SOURCE: create sample file as following

<ROOT>
<GOO>
<ID>201</ID>
<NAME>SAMBA</NAME>
<SAL>45000</SAL>
</GOO>
<GOO>
<ID>301</ID>
<NAME>SRIKANTH</NAME>
<SAL>55000</SAL>
</GOO>
<GOO>
<ID>401</ID>
<NAME>NARESH</NAME>
<SAL>53000</SAL>
</GOO>
</ROOT>

CLICK SAVE AS

SELECT SAVE AS TYPE ALL TYPES

AND PROVIDE THE NAME WITH .XML

CLICK ON SAVE

THEN AUTOMATICALLY THE ENTIRE DATA WILL BE STORED IN THE FORM OF XML IN THE
PROVIDED FORMAT.

NAVIGATION

Drag XML source


Right click, edit
Browse the XML file from the location; if we don't have a .XSD then we
need to generate the .XSD manually.
To generate this we have the Generate XSD button.
INLINE SCHEMA: if we are getting the .XML data and .XSD schema in a single
file as .XML then we need to select the check box
Use inline schema.
Click on columns
Click on ok.
Drag OLEDB destination and configure it.

CSV FILE: CSV stands for COMMA SEPARATED VALUES; a CSV file always has
comma (,) as the column delimiter.

To load a CSV file we have to use the flat file source and configure it the
same as .txt.

DESTINATIONS: After extracting the data from different kinds of
sources we need to load it into the corresponding destinations
which are required.

1) ADO.NET DESTINATION
2) OLEDB DESTINATION
3) EXCEL DESTINATION
4) RAW FILE DESTINATION
5) FLAT FILE DESTINATION
6) SQL SERVER DESTINATION

OLEDB DESTINATION: OLEDB destination will load the data into different
kinds of databases from the sources; generally, in real time, we use OLEDB
as the destination 90% of the time.

NAVIGATION:

Configure the source (any type of source)


Drag OLEDB destination
Provide the server name as required
Select the table where you want to load the source data from the drop down
list.
Click on mappings
Click on ok.
If we want to create a new table there is an option called NEW
Select it and change the table name as you require
Then click on ok.

FLAT FILE DESTINATION: We can extract the data from different sources and
load the data into a flat file destination.

It is the same as the flat file source configuration;
whenever we need to store the data in the form of a flat file we use the
flat file destination.
NAVIGATION

Configure the source as you require (for example, an OLEDB source)

Drag flat file destination
Connect with source
Right click and edit
Click on the new button to create a new connection manager,
Select the delimited option
Click on ok
Provide the connection manager name as you require
Browse the file from the location
Enable the check box column names in the first data row
Go to columns
Select the delimiter as you required
Click on ok
Click on mappings.
Click on ok.

RAW FILE DESTINATION: By using this destination we can create a raw file
from the source data; there is no specific extension for a raw file and we
can't create these files in a manual way.

NAVIGATION

Configure any type of source


Drag raw file destination
Right click edit
Browse the location where you want to create the raw file
Go to columns and select the columns by using check box
Then click on ok.

WRITE OPTIONS IN RAW FILE DESTINATION

We have the following write options to load the data at the destination level

Create always
Create once
Append
Truncate and append

CREATE ALWAYS: This option will create always a new file and delete the old
one which is available in the same path.
CREATE ONCE: This option will create only once in the destination path, if
we are trying to create same file one more time then automatically it will
fail.

APPEND: This option will always add the rows in to the existing file with
same name as new records.

TRUNCATE AND APPEND: It will clean the old data which is already present in
the existing file and re-write the new data in to the same file.

EXCEL DESTINATION: If we want to load the data into excel then we will use
the excel destination.

NAVIGATION

Configure source (excel source)


Drag excel destination
Connect with source
Right click and edit
Click on new to create new connection manager
Click on browse button and select the path where you want to create the
file.
Provide the file name as you required
Click on ok
Click on new to create a new excel sheet for the created file; it will
automatically display a create script for the sheet; change the
sheet name as you require
Click on ok
Select the sheet name from the drop down list ($)
Click on mapping
Click on ok.

ADO.NET DESTINATION

ADO.NET destination is the same as OLEDB destination

Configure source
Drag ado.net destination
Connect with source
Right click edit
Provide the database connection and table connection
Click on mapping
Click on ok.
SQL SERVER DESTINATION: SQL server destination is the same as OLEDB and it
is used purely for bulk data loads

NAVIGATION

Drag SQL SERVER DESTINATION


Connect with source
Right click and edit
Provide the database connection and table connection also
Click on mapping
Click on ok.

DIFFERENCES BETWEEN ADO.NET AND OLEDB

ADO.NET

It is applicable only for the SQL Server database

ADO.NET runs based on the .NET provider
The data access modes are different and we can't handle them through
variables

OLEDB

We can handle all types of databases using OLEDB

It runs based on OLEDB providers
We can load the data using different access modes, including variables

DATA TYPES

EXCEL:      SQL: INT/FLOAT --> SSIS: DOUBLE PRECISION
            SQL: NVARCHAR  --> SSIS: UNICODE STRING

FLAT FILE:  SQL: VARCHAR   --> SSIS: STRING

XML FILE:   SQL: VARCHAR   --> SSIS: STRING

RAW FILE:   SQL: VARCHAR   --> SSIS: STRING
            SQL: INT       --> SSIS: FOUR BYTE INT

ADO.NET:    SQL: VARCHAR   --> SSIS: VARCHAR
            SQL: INT       --> SSIS: INT

OLEDB:      SQL: VARCHAR   --> SSIS: VARCHAR
            SQL: INT       --> SSIS: INT
TRANSFORMATIONS: Transformations are used to transform the data
in between the source and destination. At the SSIS level we have different
kinds of transformations as per the requirement.

DATA CONVERSION: Data conversion is used to convert the data type
of a source column as per the destination mapping.

NAVIGATION

Drag excel source and configure it.

Drag data conversion transformation.
Connect with source.
Right click and edit.
Select the required columns whose data types we are going to convert
as per the destination.
Select the required data types from the drop down list for the selected
columns.
Name the output alias columns properly to map them at the destination
level.
Drag OLEDB destination, connect with data conversion and map with the
data converted columns.

Example: DIAGRAM

EXCEL FILE SOURCE --> DATA CONVERSION TRANSFORMATION --> OLEDB DESTINATION

EXCEL LEVEL:                    ID (DOUBLE PRECISION), NAME (UNICODE STRING), SAL (DOUBLE PRECISION)
SSIS LEVEL (DATA CONVERSION):   D_ID (FOUR BYTE INT), D_NAME (STRING), D_SAL (FOUR BYTE INT)
SQL SERVER LEVEL (OLEDB DEST):  D_ID (INT), D_NAME (VARCHAR), D_SAL (INT)

COPY COLUMN: We can copy the data of an existing column, which will act as a
new column, then map it at the destination level to populate the same data.

NAVIGATION

Drag flat file source and configure it.


Drag copy column transformation
Connect with source
Right click and edit
Select the required columns which you are going to copy by using the
check boxes, and provide the required output names.
Click on ok
Drag OLEDB destination and connect with COPY COLUMN and do the mapping.

Example diagram:

FLAT FILE SOURCE (CUSTID, CUSTNAME, CUSTSAL) --> COPY COLUMN TRANSFORMATION (adds CUSTNO) --> OLEDB DESTINATION (CUSTID, CUSTNAME, CUSTSAL, CUSTNO)

Example:

FLAT FILE SOURCE:

CUSTID CUSTNAME CUSTSAL
1      A        1000
2      B        2000
3      C        3000

SQL TABLE:

CUSTID CUSTNAME CUSTSAL CUSTNO
1      A        1000    1
2      B        2000    2
3      C        3000    3
CHARACTER MAPPING: It will change string fields from one case to
another case as we require.

Generally we use this character mapping transformation for
case conversion.

NAVIGATION

Drag flat file source and configure it

Drag character map transformation and connect with source
Right click and edit
Select the required column to which you are going to apply the case
conversion
Go to the operation tab and select lowercase or uppercase using the check
box in the drop down list.
Provide the alias name as we require.
Select the destination option: either new column or in-place change
New column: it will add one more column in the destination mapping.
In-place change: it will replace the existing column and will give
the same number of columns as the source
Drag OLEDB destination
Connect with character mapping and map the fields as you require.

Example:

FLAT FILE SOURCE:

CUSTID CUSTNAME CUSTSAL
1      a        1000
2      b        2000
3      c        3000

SQL TABLE:

CUSTID CUSTNAME CUSTSAL
1      A        1000
2      B        2000
3      C        3000
FLAT FILE SOURCE --> CHARACTER MAPPING TRANSFORMATION --> OLEDB DESTINATION

AUDIT: Audit transformation is used to add additional predefined columns
like MACHINE NAME, USERNAME, PACKAGE NAME, etc. and pass the data into the
destination level.

EXAMPLE:

FLAT FILE SOURCE --> AUDIT TRANSFORMATION --> OLEDB DESTINATION

FLAT FILE SOURCE:

CUSTID CUSTNAME CUSTSAL
1      A        1000
2      B        2000
3      C        3000

SQL TABLE:

CUSTID CUSTNAME CUSTSAL USERNAME          PACKAGE
1      A        1000    PANDILLA\SANDEEP  PACKAGE2
2      B        2000    PANDILLA\SANDEEP  PACKAGE2
3      C        3000    PANDILLA\SANDEEP  PACKAGE2

NAVIGATION:
Prepare one flat file source as mentioned above.
Drag flat file source and configure it.
Drag audit transformation, connect with source, right click, edit.
Select the required columns which you are going to add at the destination
level, like username, package name, etc. in audit type
Click on ok
Do the data conversion for the input columns to map with the destination
Drag OLEDB destination and configure it.

AGGREGATE: It is used to perform aggregate operations

INT---- SUM, MAX, MIN, AVG, COUNT, COUNT DISTINCT.

VARCHAR----GROUP BY, COUNT, COUNT DISTINCT

NAVIGATION:

Drag OLEDB source and configure it


Drag aggregate transformation.
Connect with source, right click and edit
Select the required columns by using the check boxes and provide the
operations from the drop down list as you require.
Drag OLEDB destination
Map the column and configure it.
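For clarity, the output of an aggregate transformation is equivalent to a GROUP BY query; a sketch assuming a hypothetical EMP table with DEPT (varchar) and SAL (int) columns:

SELECT DEPT,
       SUM(SAL) AS SUM_SAL,
       MAX(SAL) AS MAX_SAL,
       COUNT(*) AS CNT
FROM EMP
GROUP BY DEPT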

DERIVED COLUMN: Derived column will help us to derive a new column at the
SSIS level.

We can do different kinds of operations like AUDIT, COPY COLUMN, DATA
CONVERSION, CHARACTER MAP, STRING OPERATIONS, and DATE FORMATS by using this
transformation.

If we want to do the data conversion by using this
transformation we need to use the following codes for the specified column.

DT_I1            ONE BYTE SIGNED INTEGER (INT)
DT_I2            TWO BYTE SIGNED INTEGER (INT)
DT_I4            FOUR BYTE SIGNED INTEGER (INT)
DT_I8            EIGHT BYTE SIGNED INTEGER (INT)
DT_R4            FLOAT
DT_R8            DOUBLE PRECISION FLOAT
DT_STR           STRING (VARCHAR)
DT_WSTR          UNICODE STRING (NVARCHAR)
DT_DATE          DATE
DT_DBTIMESTAMP   DATE TIME

HOW TO ADD NEW COLUMNS IN DERIVED COLUMN BASED ON AN EXISTING COLUMN

We have a table with columns like id, name, sal.

Now we want to derive UPDATEDSAL with a 5% hike (a dependent column) and
LOCATION as BANGALORE (an independent column).

NAVIGATION:

Drag flat file source and configure it.


Drag derived column transformation
Right click and edit
Provide the column names and expressions for the newly added columns, as in the sketch after this list.

Click on ok

Drag OLEDB destination and configure it.
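A sketch of the derived column grid for this example (expression syntax is the SSIS expression language; the column names and values are assumptions based on the requirement above):

Derived Column Name    Expression
UPDATEDSAL             SAL + (SAL * 0.05)
LOCATION               "BANGALORE"

Here UPDATEDSAL is the dependent column (derived from the existing SAL column) and LOCATION is the independent column (a hard-coded literal).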

SORT: By using this transformation we can arrange the data in either
ascending or descending order. The advantage of the sort transformation is
that we can remove duplicates as well while sorting the data which is coming
from the source.

NAVIGATION

Configure source file


Drag sort transformation
Connect with source
Right click and edit
Select the required columns which are going to be sorted by using the left
side check boxes and provide the required sort type, either ascending or
descending order.
If we want to remove duplicates as well, then you need to enable the
check box to remove rows with duplicate sort values
Click on ok.
Drag OLEDB destination and configure it.

CONDITIONAL SPLIT: We can split the data based on the given conditions in
the conditional split transformation and send it to multiple destinations.

If we provide n conditions we will get n+1 outputs

The extra output is the default output of the conditional split.

NAVIGATION

Prepare a source file with some sample records having different salaries

Drag flat file source and configure it
Drag conditional split transformation
Right click, edit and provide the conditions, as in the sketch after this list.

Provide the default output name which you require; the unsatisfied
cases will redirect to the default output.
Click on ok
Drag OLEDB destinations and configure it
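A sketch of the condition grid, assuming a CUSTSAL column from the sample file (the output names and boundary value are illustrative):

Output Name    Condition
HIGHSAL        CUSTSAL >= 30000
LOWSAL         CUSTSAL < 30000

Default output name: OTHERS

With these 2 conditions we get 2 + 1 = 3 outputs; any row not satisfying either condition goes to OTHERS.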

UNION ALL: Union all will combine different types of data sources and load
them into a single destination. We can combine any type of data sources by
using this transformation.

NAVIGATION

Prepare 3 sources flat file, excel and OLEDB source with same column
names and data types.
Drag flat file, excel and OLEDB sources and configure it.
Do the required data conversion for flat file source and excel source
Drag UNION ALL transformation
Connect with the sources in the order OLEDB, EXCEL, FLAT FILE.
Note: the first connected source will set the data types for the
output columns and the remaining sources have to match those data
types.
Right click and edit
And map the appropriate columns.


Click on ok
Drag OLEDB destination and configure it.

MULTICAST: We can send the same copy of the source to different kinds of
destinations at a time

NAVIGATION

Drag flat file source and configure it


Drag multicast transformation and connect with source
Drag the required destinations and the appropriate conversions (data type
changes) and configure them, after connecting with multicast.
If we have 10 rows in the source file we will get the same number of rows
in every destination.

ROW SAMPLING: Row sampling transformation is used to validate the records
which are coming from the source.

If we provide a number of rows in the transformation then it will
automatically pick the specified number of rows randomly from the buffer
coming from the source.

NAVIGATION:

Drag flat file source and configure it.


Drag row sampling transformation and connect with source.
Right click and edit
Provide no of rows which you required from source to pick randomly.
Drag OLEDB destination and configure it.

PERCENTAGE SAMPLING: Percentage sampling transformation is used to validate
the records which are coming from the source.
If we provide a percentage of rows in the transformation then it will
automatically pick the specified percentage of rows randomly from the buffer
coming from the source.

NAVIGATION:

Drag flat file source and configure it.


Drag percentage sampling transformation and connect with source
Right click and edit
Provide the percentage from rows which you required from source to pick
randomly.
Drag OLEDB destination and configure it.

ROW COUNT: Row count will help us to compare the number of rows flowing
between the source and destination.

We will capture this count into a variable in SSIS level and we can use
anywhere in the package.

HOW TO CREATE A VARIABLE IN SSIS LEVEL

Go to control flow
Right click on empty space and select variables then automatically
variable window will open.
Click on add variable button
Provide the variable name and appropriate data type
NAVIGATION

Create one variable in the package level as rows


Drag flat file source and configure it.
Drag row count and right click, edit
At the bottom of the window we can see the variable name
property.
Select the created variable from drop down list
Click on ok
Drag OLEDB destination and configure it.

HOW TO SEE THE VARIABLE VALUE AT THE EXECUTION TIME

Right click on the data flow task


Click on edit break points
Select pre execute and post execute in the top.
Click on ok
Now execute entire package in the solution explorer.
Go to DEBUG in the menu bar: DEBUG --> WINDOWS --> LOCALS
In the locals window expand the variables node
Select the user variables and check their values at execution time.
MERGE: This transformation will help us to combine two sorted data inputs
into a single destination.

To merge the two sources we need to follow the below conditions:

The number of columns must be the same.
Data types should be the same.
Inputs must be sorted.
We can combine only two inputs.
NAVIGATION:

Prepare one flat file source and one excel source with the same columns and configure them.
Drag sort transformation and do the sorting for excel and flat file
sources.
Drag merge transformation connect it for flat file and excel sources as
input1 and input2.
Right click and edit.
Select the merge input1 and merge input2 column as following.


Drag OLEDB destination and configure it.

MERGE JOIN: We have a useful transformation to do join operations at the
SSIS level, the same as SQL joins.

Here we can perform all types of joins like INNER, LEFT, RIGHT AND FULL JOIN

Data types should be the same for the join key.
Inputs must be sorted.
We can join only two inputs.
NAVIGATION:

Prepare two source files flat file and excel with some matching records
to do the join operations.
Drag flat file and excel sources and configure it.
Drag data conversion and do the required conversion for flat file and
excel.
Drag the sort transformation and do the sorting both flat file and
excel.
Drag merge join transformation connect left and right inputs.
Right click and edit and provide the required join type from drop down
and select the required column for the left and right inputs.
Click on ok.
Note: if we want to do the right outer join we can select left outer
join from drop down list and click on SWAP INPUT button. Then
automatically left table will be right table and right table act as
left table.
Drag OLEDB destination and map the required columns for destination
table.

LOOKUP: Lookup transformation is used to split the matching and non-matching
records and give them as two outputs.

Here we check the data in the lookup table against the base table,
based on the key column.

Example: if an existing customer is coming from the source then we have to
redirect it to the matching output.
If a new customer is coming then we have to redirect it to the no match
output as a new customer.

NAVIGATION:

Prepare one flat file with some sample records, having following
headers.


Drag flat file source and configure it.
Drag lookup transformation right click and edit.
Go to general tab
Select the connection type as OLEDB connection manager


Set the specify how to handle rows with no matching entries option to one of the following:


IGNORE FAILURE: if any new record comes which does not match with the
destination, the package will automatically ignore those new
records.
REDIRECT ROWS TO ERROR OUTPUT: all the records which do not match
with the destination will redirect to the error output.
REDIRECT ROWS TO NO MATCH OUTPUT: all the records which do not match
with the destination will come out as a separate no match output.
FAIL COMPONENT: if any new customer comes which is not there at
the destination level, the lookup component will automatically fail.
Go to connection tab.
Provide OLEDB connection manager for the lookup table
Select lookup table from the drop down list.
Go to columns tab
Add the join column as following.

Click on ok
Drag OLEDB command and OLEDB destination.
Connect with match and no match outputs

Configure OLEDB destination with destination table


Configure OLEDB command as following
Right click on it and edit.
In connection manager tab.
Provide the connection manager for destination table.

Go to component properties
In the SQL command option
Write a parameterized query, as in the sketch after this list

click on ok
Go to columns mapping
Provide the mapping as following

Click on ok.
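The OLEDB command query is a parameterized DML statement; a sketch assuming the customer example above, with an assumed destination table named CUST (each ? is mapped to an input column in the column mappings tab, in order):

UPDATE CUST
SET CUSTNAME = ?,
    CUSTBAL  = ?
WHERE CUSTID = ?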

SLOWLY CHANGING DIMENSION: Slowly changing dimension will update the changes
for the selected records which are coming from the source, based on the
provided change type.

In SSIS level we have following SCD types

1) SCD type 1
2) SCD type 2

SCD TYPE 1: It will maintain only updated records and it will not maintain
any historical data in the table. This is similar to the lookup concept.

SCD TYPE 2: it will always maintain historical data for the records based on
the start date and end date in the table.

SCD TYPE 2 NAVIGATION:

Prepare one flat file source with some data and destination with
following structure.

SOURCE STRUCTURE:

CUSTID, CUSTNAME, CUSTBAL

DESTINATION STRUCTURE:

CUSTID, CUSTNAME, CUSTBAL, STARTDATE, ENDDATE
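A minimal destination table for this type 2 example could be created as follows (a sketch; the table name DIM_CUST and the data types are assumptions):

CREATE TABLE DIM_CUST (
    CUSTID    INT,
    CUSTNAME  VARCHAR(50),
    CUSTBAL   INT,
    STARTDATE DATETIME,
    ENDDATE   DATETIME
)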

Drag flat file source and configure it


Do the required data conversion.
Drag SCD transformation.
Connect with data conversion and right click and edit then it will show
the welcome wizard.
Click on ok.
Provide the connection manager for destination table.
Select the destination table from drop down list and map the input and
dimension columns as following by providing the business key for join
condition to identify the matching and no matching records.

Click on ok.
Select the dimension columns and required change type for type 2 as
following

Click on next.
Use start date and end date to identify current and expired records.
Select start date and end date from drop down list.
Provide the variable to set date values option (for example, System::StartTime)

Click on next
Click on next
Click on finish.
SCD TYPE 1 NAVIGATION:

Prepare source and destination with following columns

SOURCE STRUCTURE:

CUSTID, CUSTNAME, CUSTBAL

DESTINATION STRUCTURE:

CUSTID, CUSTNAME, CUSTBAL,

Drag flat file source and configure it.


Do the required data conversion.
Drag slowly changing dimension
Connect with data conversion
Right click and edit then it will show welcome wizard.
Click on next.
Provide the connection manager and table for the destination.
Provide the input and dimension columns with required key column as
following.

Click on next.
Provide the change type for remaining dimension columns as following.

Click on next.
Select the check box for changing attributes
Click on next.
Click on next.
Click on finish.

UNPIVOT: Unpivot transformation is used to convert columns into rows, as in
the example below.
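The original example was a screenshot; illustrative data (assumed, consistent with the pivot example later in this document):

EXAMPLE TABLE:

PID  PNAME   2001  2002  2003
1    TV      100   200   300
2    FRIDGE  150   250   350

EXPECTED OUTPUT:

PID  PNAME   YEAR  SALE
1    TV      2001  100
1    TV      2002  200
1    TV      2003  300
2    FRIDGE  2001  150
2    FRIDGE  2002  250
2    FRIDGE  2003  350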

NAVIGATION:
Prepare the source as following
Drag flat file source and configure it.
Drag Unpivot transformation and connect with source.
Right click and edit.
Select the years using left side check box as following.


Provide destination columns and pivot key values as following.


Provide the pivot key value name as YEAR, to get the years list in the
year column and the sales data in the sale column.
Click on ok.
Drag OLEDB destination and configure it.

PIVOT: Pivot is used to transform rows into multiple columns; the number of
columns increases and the number of rows decreases, as in the example below.
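The original tables were screenshots; illustrative data (assumed, the reverse of the unpivot example above):

INPUT TABLE:

PID  PNAME  YEAR  SALE
1    TV     2001  100
1    TV     2002  200
1    TV     2003  300

EXPECTED OUTPUT:

PID  PNAME  2001  2002  2003
1    TV     100   200   300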

NAVIGATION:

Prepare one flat file source having above data and destination.
Drag pivot transformation and connect with flat file source
Right click and edit.
Go to the input columns tab and select all the columns.
Go to the input and output properties tab
Expand the input columns.
Set the pivot usage as follows for the input columns:
    PID   - 0
    PNAME - 1
    YEAR  - 2
    SALE  - 3
Expand pivot default output.
Select the output columns.
Click on add column 5 times; it will automatically add 5 columns to
the output.
Rename those columns as follows:
    PID, PNAME, 2001, 2002, 2003
Then set, for each output column, the pivot key value and source column:
    Output column   Pivot key value   Source column
    PID             PID               lineage id of PID (input)
    PNAME           PNAME             lineage id of PNAME (input)
    2001            2001              lineage id of SALE (input)
    2002            2002              lineage id of SALE (input)
    2003            2003              lineage id of SALE (input)
Drag OLEDB destination and configure it.

IMPORT COLUMN: By using this import column we can upload images and different
types of documents into SQL table.

NAVIGATION:

Prepare one flat file source with the following columns


File path, file name, file type
E:\album\tulip.jpg,tulip,.jpeg
Drag flat file source and configure it.
Drag import column transformation.
Right click and edit.
Click on input column, select file path column by using check box.
Go to input and output properties
Expand import column output.
Click on output columns.
Click on add column; it will add one column to the output.
Rename it as file data.
Observe the lineage id of the file data column and provide that to the
source column file path, in the file data column id option which is
available in the custom properties.
Click on ok.
Drag OLEDB destination and configure it.

EXPORT COLUMN: Export column will help us to extract the required files which
are uploaded to the SQL server.

We have a SQL server table with the columns file path, file name,file
type,file data.

Drag OLEDB source and configure it.


Drag export column connect with source.
Right click edit.
Select file data in the extract column option.
Select file path in file path column option.
Click on ok.
Note 1: if we want to change the location of the destination we can update
the table accordingly for the file path column.

Note 2: we can change the format as well for the supported images by
updating the table.

FUZZY GROUPING: Fuzzy grouping will group the data which is coming from the
source based on the threshold value which is provided by the user.

For example: we are getting incomplete customer locations and we need to
group the data into clean locations based on the percentage of matching
cases, which is called the threshold.

NAVIGATION:

Prepare flat file with some sample records as following.


Custid, custname, custlocation
1, A, BANGLORE
2, B, BANGLOR
3, C, BANGLRE
4, D, CHENNAI
5, E, CHENNI
6, F, CHENNA
Drag flat file source and configure it.
Drag fuzzy grouping transformation connect with the source right click
and edit.
Go to columns
Select the location columns by using check box, which you are going to
clean data.
Go to advanced tab
Set the threshold value as per our requirement.
Note: the data will always be grouped based on the provided threshold
value, which must be in the range 0 to 1
Drag OLEDB destination and configure it.

FUZZY LOOKUP: Fuzzy lookup will look up the source words in the lookup
table, based on the threshold value, and provide the similarity and
confidence values for each and every row.

Flat file source shown below.

Data:

Rose is RED

Sun is hot
Apple is red

Bangalore is cool
board is white

Ice is cool

Prepare lookup table as following.

Lookup values

RED

COOL

NAVIGATION:

Drag flat file source and configure it.


Drag fuzzy lookup transformation
Right click and edit.
In the reference table tab provide the connection manager tab for the
lookup table. And provide the reference table as lookup table from drop
down list
Go to columns and map based on the lookup columns.
Go to advanced tab provide the threshold value as you required.
Click on ok.
Drag OLEDB destination and configure it.

DIFFERENCES BETWEEN FUZZY GROUPING AND FUZZY LOOKUP

FUZZY GROUPING: Fuzzy grouping will group the uncleaned data which is coming
from the source based on the threshold value.

In fuzzy grouping the number of rows will be the same as the source.

FUZZY LOOKUP: Fuzzy lookup will look for the keywords in the lookup table
based on the source and provide the list of rows.

In fuzzy lookup the number of rows depends on the keywords available in the
lookup table.

SCRIPT COMPONENT: Script component is a .NET component which is available
at the data flow level; it can be used as a source, destination or
transformation. While dragging it to the data flow level it will ask how
you are going to use this script component: whether as a source, destination
or transformation.
NAVIGATION:

Prepare some sample records in the flat file


Drag flat file source and configure it.
Drag script component connect with source
Right click edit
Provide the required language, either VB or C#, in the language tab and
click on edit script by using the edit script button.
Write the required code in the script and click on ok.
Drag OLEDB destination and configure it.

HOW TO GENERATE A ROW NUMBER USING SCRIPT

INPUT TABLE LIKE

Id,name,sal
101,A,10000
102,B,25000
103,C,30000

OUTPUT TABLE LIKE

Rowno,id,name,sal
1,101,A,10000
2,102,B,25000
3,103,C,30000

Take the source with 3 columns id,name,sal


Drag script component connect with source
Right click edit select the script language as VB
Go to input columns
Select all the input columns and check what the columns present are.
Expand output and add one more new column as row no.
Go to script and write the following code:

Dim counter As Integer = 0

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    counter = counter + 1
    Row.rowno = counter
End Sub

Note: the counter variable has to be declared at the class level, outside
the method.

TERM LOOKUP: Term lookup is one of the powerful transformations at the data
flow level; this transformation basically helps to analyze the set of text
records which are coming from the source.
It will look for the nouns or noun phrases which are
available in the source, based on the provided keywords in the lookup table.

NAVIGATION: source table like the following customer table

Srno, userid, feedback

1, 100, hard touch screen

2, 101, smooth side

3, 102, getting problem while initializing applications

4, 103, good looking screen

5, 104, touch screen not working properly

6, 105, phone is not looking good

7, 106, operating system is very slow

LOOKUP TABLE LIKE FOLLOWING

Sno, keywords

1, touch screen

2, screen

3, application

4, phone

Drag the source as above as customer feedback


Create one table for lookup with the above data as search key
Drag flat file source and configure it.
Do the required data conversions for feedback column as Unicode string
Drag term lookup transformation, connect with the data conversion, then right click and edit
Provide the connection manager for the lookup table form drop down
list.
Go to term lookup tab
Provide the link for feedback and keyword and select the check box for
feedback column.
Drag OLEDB destination and configure it.

TERM EXTRACTION: Term extraction is used to provide the occurrences of
nouns, noun phrases, or nouns and noun phrases.
It will check for the repeated values of the selected term type and provide
a score for each.

NAVIGATION:

Prepare the some sentences in the flat file source which are having
nouns and noun phrase.
Drag flat file source and configure it.
Drag data conversion and convert the column to Unicode string.
Drag term extraction right click and edit.
Select the column which you have nouns and noun phrase by using check
boxes.
Go to advanced tab
Select the term type using radio button whether it is noun or noun
phrase or noun and noun phrase
Click on ok.
Drag OLEDB destination and configure it.
Once this got executed it will provide the selected term types and
score for that.

DATA FLOW END


CONTROL FLOW
The execution flow always starts from the control flow; after that it goes
to the data flow level.

Control flow is a combination of different types of containers as
well as tasks in the package execution.

Following are the important containers and tasks available in the control
flow level.

CONTAINERS

For loop container


For each loop container
Sequence container

TASKS

Bulk insert task


Data flow task
Execute sql task
Execute package task
Execute process task
FTP task
File system task
Send mail task
Script task
Data profiling task
FOR LOOP CONTAINER: This container will loop the items which are there in the
container a number of times, based on the init, eval and assign expressions.

INIT EXPRESSION: Init expression will initialize the value for the loop, so
the loop always starts from initial value.

Example @A=1

Here A is a variable

EVAL EXPRESSION: Eval expression is the evaluate expression, which will check
the condition and execute the for loop until the condition is false.

Example @A<=@B

Here B is a variable having value is 15

ASSIGN EXPRESSION: Assign expression will increase or decrease the value on
every iteration and feed the incremented value to the condition

Example @A = @A+1

NAVIGATION:

Configure one data flow with one flat file having some sample records
Drag for loop container
Create 2 variables (A and B, as above) with the scope of the package

Drag data flow task inside the container.

Right click on the for loop container, click on edit and provide the
expressions as in the sketch below
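Based on the expressions described above, the for loop editor would be filled roughly like this (variables A and B as defined earlier; B holds the value 15):

InitExpression:   @A = 1
EvalExpression:   @A <= @B
AssignExpression: @A = @A + 1

The data flow inside the container will then execute 15 times.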

SEQUENCE CONTAINER: Sequence container is a rectangular box which will
execute the items available at the container level either in a parallel way
or a sequential way.

If we want to change the properties of all the tasks in the container, there
is no need to change each and every task; we can just change the container
property and it will be applicable for all the tasks.

BULK INSERT TASK: Bulk insert task will load the data very fast, using the
fast load options, from a flat file source into a SQL SERVER destination.

The source should be a flat file and the destination should be SQL SERVER.

All the data types of the destination table should be varchar.

No constraint check at the destination level.

We can't transform any data in between the source and destination levels.

NAVIGATION:

Prepare one flat file source with some sample records.


Create the table with the same columns in SQL level with all data types as
varchar.
Go to SSIS level.
Drag bulk insert task.
Right click and edit.
Go to connections.
Provide the source file connection in the source connection option by
browsing it from the location.
Provide the delimiter for the flat file by selecting from the drop down
list.
Provide the destination database connection.
Provide the table from the drop down list.
Go to the options tab on the left side and provide the first row and last
row options as you require.
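Under the hood this task issues a T-SQL BULK INSERT statement; the equivalent statement would look roughly like this (a sketch; the file path, table name and FIRSTROW value are assumptions):

BULK INSERT dbo.CUSTOMERS
FROM 'D:\source\customers.txt'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2   -- skip the header row
)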

DATA PROFILING TASK: Data profiling task will validate the data which is
coming from the source based on the provided profile request in the table.

We have different kinds of profile requests, like column length, null
ratio, statistics etc., to validate the source data.

NAVIGATION:

Prepare one table with some null values in a specific column.
Drag profiling task.
Right click edit.
In general tab options provide the destination types as file connection
choose destination path and format by browsing in destination.
Provide the override destination option as true.
Go to profile request tab.
Select the profile type as we required.
For example column null ratio profile request.
Click on request id, then it automatically generates.
Select it in go to the request properties.
Connection manger: database file source connection
Table name : source table name
Column: request column name for null ratio.
Click on ok.

SEND MAIL TASK: send mail task will help us to send the email notification
for starting of the package and success of the package and failure as well.

NAVIGATION:

SMTP CONNECTION: The SMTP connection is the email gateway from one domain
to another. To configure it, select new SMTP connection from the drop
down list and provide the SMTP server name; the default port no is 25.
FROM, TO, CC, BCC:
Provide the corresponding email addresses for the above tabs.
MESSAGE SOURCE TYPE: we have 3 types of message sources, as following.
DIRECT INPUT: we provide the message source directly in the column.
FILE CONNECTION: create one file, provide the message which you want
inside of the file, and select the file from the location.
VARIABLE: store the message in a variable and select it from the list.
PRIORITY: select the priority as we require, like LOW, HIGH, NORMAL
ATTACHMENT: if we have any attachments, browse them for the mail.
SUBJECT: we can hard code the subject message in the subject tab.
If we want to change the subject from server to server then we need to
handle this in a dynamic way:
Go to the expressions tab
Click on the right side corner symbol
Select the property as subject.
Click on expression and write the following expression for it:
"The package got started on " + @[System::MachineName] +
" at the time of " + SUBSTRING((DT_STR,50,1252)GETDATE(),1,19)
Message source: expression shown below.
"The package got completed successfully on server " + @[System::MachineName]
+ "\n\n Regards, \n Support team"

EXECUTE PROCESS TASK: Execute process task is used to call applications
at the SSIS level by using .EXE, .BAT files etc.

To call an application, first we need to provide the executable file (from
where it was installed on the local machine) in the executable tab.

Generally we use this task for extracting ZIP or RAR archive files using
the WinZip or WinRAR software.

NAVIGATION:

Create two folders named zipsource and destination.

Create one archive file in the zipsource folder; the target is to extract
that file into the destination folder.
Drag execute process task.
Right click and edit.
Provide the WinZip/WinRAR executable in the executable tab and provide
the arguments as following.

HOW TO UNZIP THE ZIP FILE FROM SOURCE TO DESTINATION

Executable: c:\programfiles\unzip\winzip32.exe
Arguments: -min -e D:\zipsource\src.zip D:\destination\

HOW TO ZIP THE SOURCE FILE FROM SOURCE TO DESTINATION

Executable: c:\programfiles\unzip\winzip32.exe
Arguments: -min -a D:\zipsource\src.zip D:\destination\

EXECUTE PACKAGE TASK: Execute package task will help us to call one SSIS
package from within another package.

NAVIGATION:

Create two packages: flatfile.dtsx and excel.dtsx

Create one more package and rename it as masterpackage.dtsx
Drag two execute package tasks into masterpackage.dtsx and configure
them for flatfile and excel by using the following procedure.
Right click on execute package task and edit.
Provide the location type as file system and browse the file location as
we require.
Note 1: we can call only one package per task.
Note 2: if we have a password for the package we need to provide that in the
password tab.

Advanced features in 2012:

We have two types of references in 2012:

PROJECT REFERENCE: no need to go to the physical location of the package;
the list of packages available under the current project will be displayed
under project reference, so we can select one directly.
EXTERNAL REFERENCE: external reference is the same as 2008, having two types:
FILE SYSTEM: we browse the package from the physical location.
SQL SERVER: the stored packages which are available under the package
location (MSDB DATABASE) in SQL SERVER.
FTP TASK: FTP task is the file transfer protocol task; when we are sending or
receiving files between a remote path and a local path, we need to
specify the following details to establish the connection over the network.

NAVIGATION:

Drag FTP task, right click and edit.

Go to the FTP connection option
Take the new connection in the list.
It will then ask for the details which are provided by the client, as
following.

Go to the file transfer tab

Provide the operation, either send or receive files.
Provide the remote parameter, where you want to receive or send files
(remote path)
Provide the local parameter as well, where you want to receive or send files
(local path)
If we want to handle these paths dynamically we have to create separate
variables and provide the paths in them, then set the IsRemotePathVariable
and IsLocalPathVariable options to true.
Browse the variables from the drop down lists.

FILE SYSTEM TASK: File system task is used to do file operations like copy
file, delete file, move file, and folder operations like delete directory,
create directory, etc.

NAVIGATION:

Drag file system task; create two folders named source and destination
Create two variables at the SSIS level as sourcepath and destinationpath.
Right click the task and click on edit.
Provide the operation which you require.
Set the IsSourcePathVariable and IsDestinationPathVariable properties to
true, and provide the source and destination variables from the lists.
Click on ok.
Execute the task
We have following operations for file system task.
Copy file
Move file
Delete file
Delete directory
Create directory
Delete directory contents
Move directory

SCRIPT TASK: Script task is the same as the script component; here also we
can write C# or VB code as per our requirement.

We can do any kind of operation by using the script task at the control flow
level.

In the script task we have the following options:

SCRIPT LANGUAGE: the language in which we are going to write the script
task, either VB or C#.

READ ONLY VARIABLES: The variables selected under read only variables are
provided as input to the operations in the script. We can't override the
values of these variables after doing the operation.

READ WRITE VARIABLES: The variables selected under read write variables are
also provided as input to the operations in the script, but we can override
the values of these variables after doing the operation.

NAVIGATION:

Create one variable named filepath and provide the location as
following.

Drag script task, right click and edit.

Select the language as VB and provide the entry point as Main
Select the filepath variable (User::filepath) in the read only variables
Click on ok.
Example as following:

Dim fileloc As String = Dts.Variables("User::filepath").Value.ToString()

MsgBox(fileloc + " " + DateTime.Now.ToString())

Dts.TaskResult = ScriptResults.Success

EXECUTE SQL TASK: Execute SQL task will help us to do SQL level operations
like calling procedures, executing SQL statements, storing the results of SQL
statements into variables, etc.

This is the only component available in the control flow to do SQL level
operations.

NAVIGATION:

Drag execute SQL task, right click, edit.

Go to the general tab
Provide the database connection and connection type for the SQL statement.
Connection type: OLEDB
We have different types of connections available in the connection type; we
need to choose the required connection type from the drop down list.
Connection: provide the database connection as we require for the
database.
SQL source type:
Direct input: we provide the SQL query directly in the task.
File connection: provide the entire query which you want to execute in one
text document and select it from the location.
Variable: create one variable with the string data type, paste the
required query in it, and select that variable from the drop down list.
SQL statement: the query typed directly into the task.

HOW TO RUN THE SQL STATEMENTS IN EXECUTE SQL TASK

Provide all the connections and provide and paste the query which you
required in the task on following.

Insert into empdata

Select 1, 'A', null union all
Select 2, 'B', null union all
Select 3, 'C', null union all
Select 4, 'D', null
HOW TO EXECUTE PROCEDURES HAVING NO PARAMETERS

Create one procedure with the required logic; assume the name is usp_emp

Drag one execute SQL task.

Provide proper DB connections; in the SQL statement option provide the
following query:

Exec usp_emp

PARAMETER MAPPING: The parameter mapping concept is that the values which
are present in variables will be passed to the SQL statement.

Example: prepare one query as following and paste it in the task.

Insert into empdata values (100, 'suresh', ?, ?, ?)

Table as empid, empname, packagename, machinename, location
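The parameter mapping tab then maps one variable per ? placeholder, by ordinal position, roughly like this (a sketch; the variable names are assumptions based on the table columns):

Variable Name        Direction  Data Type  Parameter Name
User::packagename    Input      VARCHAR    0
User::machinename    Input      VARCHAR    1
User::location       Input      VARCHAR    2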

HOW TO PASS THE PARAMETER VALUES FOR PROCEDURES

Create one procedure as following

Create procedure sp_update (@desg varchar(50))
As
Begin
    Update empdata set sal = sal + (sal * 0.25) where desg = @desg
End

Go to SSIS

Drag execute sql task

Right click edit provide the DB connection

Write the following sql statement

Exec sp_update ?

Create one variable (for example, User::desg with a string data type) to
hold the designation value

Go to parameter mapping and map the variable to parameter 0

Click on ok

Conclusion: while executing the task, the value which we provide in the
variable will be passed to the procedure as input.

RESULT SET: The concept of the result set is storing the SQL statement
results into a user variable created at the SSIS level.

We have three types of results sets

SINGLE ROW:

Prepare one SQL statement as following:

Select count(*) from empdata
Edit the execute SQL task
Provide the result set as single row.
Provide the SQL statement as
Select count = count(*) from empdata
Create one variable as tablecount with data type as int, go to the result
set tab, click on the add button and map the result name count to the
variable.

To see this variable value at execution time, set the break
points pre and post execute.
FULL RESULT SET: it will store the complete result which is coming
from the table, in the form of rows and columns.
Provide the SQL query as following:
Select * from empdata
Provide it in the task
Provide the result set as full result set
Create one variable with the Object data type

Go to the result set tab

Click on add and map the result name 0 to the object variable

Click on ok
XML: we can store the XML string into one user variable at the SSIS level
Prepare the query as following:
Select * from empdata for xml auto
Drag one execute SQL task
Set the result set as XML
Paste the above query
Create the variable with the string data type
Go to the result set tab, add one row with the result name as 0 and the
variable as User::xmldata
Click on ok
Add break points to see the output at execution time.

FOREACH LOOP CONTAINER: For each loop container is used to iterate loops
based on the type of enumerator.

We have different types of enumerators:

1) For each file enumerator

2) For each ADO.NET schema rowset enumerator
3) For each ADO enumerator

FOR EACH FILE ENUMERATOR:

For each file enumerator will iterate the loop based on the number of
files available in the provided path.
To load the files through the for each loop, all the files need to have
the same structure and the same extension as well.

NAVIGATION:

Create one folder named sourcefolder and prepare sample files as
src1.txt, src2.txt, src3.txt.
All the files should be in the same format with the same extension
Create 3 variables as following.

Drag for each loop container


Right click and edit
Go to collection tab
Select the enumerator type as for each file enumerator.
Expand the expressions below the enumerator.
Click on expression box.
Select the property expression as following

Click on ok.
Go to variable mappings map as following.

Click on ok.
We have 3 options in the collections tab to store the filename in the
variable.
1) Fully qualified: it will store entire path and filename in the
variable.
2) Name and extension: it will store only the file name and extension in
the variable.
3) Name only: it will store only the file name in variable.
Drag a data flow task and configure the flat file connection with any one of
the files located in the location.
Drag OLEDB destination and configure it.
In the connection manager select the flat file connection and press F4.
Go to expressions in the properties and provide the connection string
property as in the sketch below.
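A sketch of the expression (assuming the loop writes the current file into a variable named filepath, with the fully qualified retrieval option):

Property:    ConnectionString
Expression:  @[User::filepath]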

FOR EACH ADO.NET SCHEMA ROWSET ENUMERATOR:

HOW TO LOAD MULTIPLE EXCEL FILES WITH MULTIPLE EXCEL SHEETS:

Prepare two excel files and multiple sheets


Drag for each loop container
Create 4 variable as following.

Right click on the for each loop container.
Go to collection tab.
Provide the enumerator as for each file enumerator.
Configure the properties.
Directory and filespec as following.

Go to variable mapping provide the variable filename and value index 0.

Click on ok.
Drag for each loop container inside of for each loop container.
Right click and edit.
Go to collection tab.
Provide enumerator as for each ADO.NET schema rowset enumerator.
Select the new connection in the connection
Click on new.
Select provider type as Microsoft office 12.0 access database engine OLEDB
provider.

Click on ok.
Provide any one file name.
Server or filename: E:\source\src1.xlsx

Go to all.
Go to advanced options.
Provide the extended properties (for an .xlsx file, typically Excel 12.0).

Click on test connection to check whether the connection was established
properly or not.
Click on ok.
Select schema as tables as shown below picture.

Go to variable mapping map the sheet name as following.

Click on ok.
Drag one dataflow task inside of it.
Drag one excel source.
Provide the connection through the any one of the file.
Select the data access mode table name or view name variable.
Provide the variable name as following.

Click on ok.
Drag OLEDB destination
Configure the destination table.
Click on ok.

FOR EACH ADO ENUMERATOR:

Create one table with different products, like the Unpivot source.


Drag one execute sql task.
Right click and edit.
Select result set as FULL RESULT SET.
Provide the result set as following.

Here user::prodlist variable is object variable.


Click on ok.
Drag for each loop container.
Right click and edit.
Go to collection tab
Provide the enumerator type as for each ADO enumerator.
And provide ADO object source variable as user::prodlist.
Go to variable mappings

Click on ok.
Drag one dataflow task inside of the for each loop container.
Drag OLEDB source right click and edit.
Provide the database connections.
Select the data access mode as SQL command.
Select SQL command as following
Select * from unpvt where pname=?
Click on parameter in the right side and map as following.

Click on ok.
Click on columns
Click on ok.
Drag flat file destination and configure it.
Create one variable named destination and provide the destination path in it.
Go to the connection string option in the flat file connection.
Provide the connection string expression as in the sketch below.
The number of destinations created will depend on the products available in
the table.

SPECIAL PROPERTIES
Logging
Check points
Precedence constraints
Configurations
Deployment
Scheduling

LOGGING: Logging is used to capture information based on the selected
events while executing the package.

Generally we configure logging to do error handling using the on
error event; we have different types of log providers at the SSIS level:

1) SSIS log provider for text files

2) SSIS log provider for XML files
3) SSIS log provider for SQL server
4) SSIS log provider for SQL server profiler
5) SSIS log provider for windows event log

SSIS log provider for text files:

Right click on the control flow


Click on loggings then log window will be open
Select the entire package using the left side check box (whenever the
package fails, a log will be created)
Select provider type as SSIS log provider for text files
Go to details
Then one row will be added select that using check boxes.
Provide the location in the configuration option where you want to create
the log file.
Go to details
Select the on error using check box
Click on ok.

SSIS log provider for sql server:

Select the provider type as SSIS log provider for SQL server
Click on add
Provide the configuration for the database connection
Default log table name:
sysssislog ---------- in 2008, 2012
sysdtslog90 --------- in 2005
Once we execute the package the log table will be created under system
tables with the above names.

SSIS log provider for xml files:

Configuration same as text file but file extension as .xml


SSIS log provider sql server profiler:

Configuration same as text file but file extension as .trc

CHECK POINTS: Check points are used to restart the package at the point of
failure.

SSIS will identify the failure point of the package and create one check
point file in the specified location; by using this file it identifies the
failure point and restarts from there.

HOW TO CONFIGURE:

Take 4 data flow tasks as following

Configure all the data flow tasks and forcefully fail any one of the
tasks to test the check point functionality.
Right click on each and every data flow task
Click on properties
Set the following options:
FailPackageOnFailure: true
FailParentOnFailure: true
Right click at the control flow level, go to properties
Provide the following options as you require:
Check point file name: provide the location for the check point file where
you want to create it.
Check point usage:
If exists: if the check point file is available in the location,
it will use the check point file and start from the failure point. If the
check point file is not available then it will restart from the beginning of
the package.
Always: it will look for the check point file in the specified location; if
the file is not available then the package will not start.
Never: it will never use the check point file which is available in the
location and it always starts from the beginning of the package.
Save check points: set this option to true to store the check point file in
the location. If this option is set to false the check point file won't be
stored in the location.
Disadvantages:
Check points are applicable at the task level only, not at the record level.
Whenever you use a parent package, you can't implement check points for the child package.
Note: once your package executes successfully after rectifying the errors, it automatically deletes the check point file from the location.
PRECEDENCE CONSTRAINT: Precedence constraints are used to set the dependency between tasks based on the type of expression or constraint. We have 4 types of evaluation options, as following.

Constraint
Expression
Expression and constraint
Expression or constraint

1) CONSTRAINTS: This is the default evaluation operation, and we have 3 types of constraints, as following.
a. Success
b. Failure
c. Completion
Success: By default the constraint type is success; it connects to the below task with a green arrow. The below task executes only if the above task executed successfully.
Failure: This is connected with a red arrow; the below task executes only when the above task fails.
Completion: This is connected with a blue arrow; the below task executes when the above task either fails or succeeds.

2) EXPRESSION: It evaluates the expression which we mention in the expression option; only if the expression is satisfied will the below task be executed.

Example expression: @cnt == 11

In the above example, cnt is a variable which we declared at the SSIS level, and 11 is the value it is compared against.

If the value in the variable is not equal to 11, then the below task will not be executed.
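The expression editor also accepts the fully qualified form of the same test, which avoids ambiguity when variable names repeat across namespaces:

    @[User::cnt] == 11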

3) EXPRESSION AND CONSTRAINT: For the below task to execute, both the expression and the constraint on the task must be satisfied.
4) EXPRESSION OR CONSTRAINT: For the below task to execute, either the expression or the constraint on the task must be satisfied.
CONFIGURATIONS: Configurations are used to handle the package properties dynamically through a file or table when we are moving the package from one environment to another environment.

1) XML configuration file
2) Environment variable
3) Registry entry
4) Parent package variable
5) SQL server

XML configuration file: The required package properties are stored in the XML configuration file, which helps us handle the package properties dynamically through the configuration file.

NAVIGATION:

Create one package to load one flat file.
Create the variables which you require, like source path, archive path, etc.
Right click on control flow area
Select package configurations
Then automatically configurations window will be open
Enable the configuration by using check box.
Click on add button to add the configuration
Click on next.
Select the configuration type as XML configuration
Browse the location where you want to create the xml file.
Click on next.
Select the required properties which you are going to handle dynamically
by using check boxes.
Click on next.
Provide the configuration name.
Click on finish.
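For reference, the generated .dtsConfig file looks roughly like the sketch below; the variable name sourcepath and its value are assumptions for illustration:

    <?xml version="1.0"?>
    <DTSConfiguration>
      <Configuration ConfiguredType="Property"
                     Path="\Package.Variables[User::sourcepath].Properties[Value]"
                     ValueType="String">
        <ConfiguredValue>D:\data\source\</ConfiguredValue>
      </Configuration>
    </DTSConfiguration>

Editing ConfiguredValue in this file changes the value the package picks up at run time, without reopening the package.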

SQL server configuration: In SQL server configuration we store the properties in a SQL table.

If we change the values inside the table, then those values will be passed to the package dynamically through the table.

HOW TO CONFIGURE:

Right click on the control flow area and click on package configurations.
Click on next
Select the configuration type as SQL server from drop down list.
Select the database.
Click on the new button to create a new table for the configuration (see the sketch after this list).
Provide the configuration filter name as you require.
Click on next.
Select the properties which you required by using check boxes as same as
xml configuration.
Click on next.
Provide configuration name.
Click on finish.
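The new button generates a script along these lines (a sketch of the standard [SSIS Configurations] table the wizard creates):

    CREATE TABLE [dbo].[SSIS Configurations]
    (
        ConfigurationFilter  NVARCHAR(255) NOT NULL,  -- the filter name provided above
        ConfiguredValue      NVARCHAR(255) NULL,      -- the property value to apply
        PackagePath          NVARCHAR(255) NOT NULL,  -- path of the property inside the package
        ConfiguredValueType  NVARCHAR(20)  NOT NULL   -- data type of the property
    );

Updating ConfiguredValue rows in this table changes the values the package receives on its next run.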

PARENT PACKAGE VARIABLE: When we want to pass a value that is available in a variable from one package to another package, we use the parent package variable configuration.

HOW TO CONFIGURE:

Create 2 separate packages, master and child.
Create a variable (for example, location) in both the master and child packages.
Provide any value for it in the master package and leave it blank in the child package.
Now call the child package in the master package by using execute package
task.
The main concept is to pass the variable value from master to child
package.
To pass the value we have to configure parent package configuration for
both the packages
How to enable parent package configuration:
Go to the child package, go to the control flow area, right click, select package configurations, and enable the package configuration check box.
Click on add.
Select the configuration type as parent package variable.
Specify the parent variable from which you want the value.
Click on next.
Expand the variables node.
Expand the location variable.
Click on value.
Click on next.
Click on finish.
Do the same for the master package also for the location variable.
If we run the master package, then automatically the values which are available in the master package will be passed into the child package.

DEPLOYMENT: Deployment is used to move the package from one area to another area.
We have two types of deployment at the SSIS level.

1) File system deployment
2) SQL server deployment

To do the deployment, first we have to generate the manifest file; using that file, we do the deployment.

HOW TO GENERATE THE DEPLOYMENT FILE:

Right click on the project.
Click on properties
Go to the deployment utility section on the left side.
Set the create deployment utility option to true.
Click on ok
Right click on project
Click on rebuild.
Then automatically a deployment folder will be created under the BIN area in the project location.

FILE SYSTEM DEPLOYMENT: File system deployment is nothing but moving the package from one drive to another drive; it is just a copy-paste option.

NAVIGATION:

Go to project location
Go to the bin area
Open the deployment folder
Double click on the deployment manifest file
Click on next
Then it will ask for the type of deployment
Select file system deployment by using radio button.
Click on next
Browse the location where you want to move package in the drives.
Click on next
Click on finish.
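As a side note, the same move can be scripted with the dtutil command line utility; the paths and names below are assumptions for illustration:

    rem File system deployment: copy the package to another folder
    dtutil /FILE "D:\project\bin\Deployment\Package1.dtsx" /COPY FILE;"E:\deployed\Package1.dtsx"

    rem SQL server deployment (described next): copy the package into MSDB
    dtutil /FILE "D:\project\bin\Deployment\Package1.dtsx" /COPY SQL;"Package1"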

SQL SERVER DEPLOYMENT: SQL server deployment moves the packages from a drive into the MSDB folder which is available under stored packages.

NAVIGATION:

Connect to Integration Services; it will show the stored packages and running packages folders.
Expand stored packages.
Right click on the MSDB folder.
Create one folder by using the new folder option.
The resulting folder structure looks like this:

Running packages
Stored packages
    File system
    MSDB
        Data collector
        Maintenance plans
        My packages (user folder)
            Package1
            Package2

SCHEDULING: Scheduling runs the deployed packages at a specified interval of time. To run the packages automatically, first we have to create a job, and then we schedule that job.

HOW TO CREATE A JOB:

Connect to the DB engine.
Start the SQL server agent if it is not started (right click, start).
Expand SQL server agent
Right click on jobs.
Click on new job.
In general tab provide the job name as we required.
Click on new to create a new step
Provide the step name.
Select SQL Server Integration Services Package as the step type.
Provide the server name.
Browse the package from the location.
Click on ok.
Then automatically it will create a job.

HOW TO RUN A JOB WITHOUT SCHEDULING (Manual run)

Right click on the job.
Click on start job at step.
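The same manual run can also be started from T-SQL; a sketch, with the job name assumed for illustration:

    -- Start an existing SQL Server Agent job immediately
    EXEC msdb.dbo.sp_start_job @job_name = N'Load_Customers_Job';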

HOW TO SCHEDULE A JOB TO RUN AUTOMATICALLY

Go to job properties.
Go to schedule tab.
Click on new to create new schedule.
Provide schedule name as we required.
Select the frequency: occurs daily, weekly, or monthly.
Specify the time in the daily frequency at which the job has to run.

EVENT HANDLERS: An event handler runs automatically when the specified event type is raised at the package level or task level.

HOW TO CONFIGURE EVENT HANDLERS:

Go to the event handlers tab.
Select the type of event from drop down list.
Select the executable, either at the package level or the task level.
Click on add (the blue link in the event handler page).
And configure the task as required.
Example: while loading the data from a flat file, we can truncate the destination table in the OnPreExecute event handler, as sketched below.
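A sketch of the statement such an OnPreExecute execute SQL task would run; the table name dbo.customer is an assumption for illustration:

    -- Runs before the data flow, so every load starts with an empty table
    TRUNCATE TABLE dbo.customer;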

SCENARIOS IN SSIS LEVEL

Check whether the source file exists or not.
Loading multiple flat files using a single connection manager.
Loading multiple flat files using a for each loop.
Loading multiple excel files using a for each loop.
Loading multiple sheets using a for each loop.
Unzipping a zipped source file using the execute process task.
Moving the loaded file to an archive folder.
Handling parameters and result sets using the execute SQL task.
Calling a sub package in the master package.
How to pass variables from one package to another package.
How to handle the properties dynamically.
How to capture the error information of a package.
How to move the package from one server to another server.
How to run the package automatically at a given time.
How to pass a parameter in the source connection.
How to get the files from the server for the load.
How to use variables in the source connection.
How to provide security to a package.
