A unique key displays unique values and can have one NULL value, whereas a primary key has unique values and allows no NULLs.
Whenever you create a primary key constraint, Oracle by default creates a unique index along with a NOT NULL constraint.
Step 1:
Create two text files in the $PMSourceFileDir directory with some SQL queries.
1. sql_script.txt
File contains the below SQL queries (you can have multiple SQL queries in a file, separated by semicolons):
create table create_emp_table
(emp_id number, emp_name varchar2(100))
2. sql_script2.txt
File contains the below SQL queries (you can have multiple SQL queries in a file, separated by semicolons).
Similarly create a target definition: go to the Target Designer and create a target flat file with result and error ports.
Step 4:
Go to the mapping designer and create a new mapping.
Drag the flat file into the mapping designer.
Go to Transformation in the toolbar, click Create, select the SQL transformation, enter a name and click Create.
Now set the SQL transformation options to script mode and the DB type to Oracle, and click OK.
Fire a SELECT query on the database to check whether the table was created.
=============================================================================
The optimizer decides it would be more efficient not to use the index. If your query
is returning the majority of the data in a table, then a full table scan is probably
going to be the most efficient way to access the table.
You perform a function on the indexed column i.e. WHERE UPPER(name) = 'JONES'.
The solution to this is to use a Function-Based Index.
You perform mathematical operations on the indexed column i.e. WHERE salary + 1
= 10001
You concatenate a column i.e. WHERE firstname || ' ' || lastname = 'JOHN JONES'
You do not include the first column of a concatenated index in the WHERE clause of
your statement. For the index to be used in a partial match, the first column
(leading-edge) must be used. Index Skip Scanning in Oracle 9i and above allow
indexes to be used even when the leading edge is not referenced.
The use of 'OR' statements confuses the Cost Based Optimizer (CBO). It will rarely choose to use an index on a column referenced in an OR statement, and will even ignore optimizer hints in this situation. The only way of guaranteeing the use of indexes in these situations is to use an INDEX hint.
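The function-on-column effect is easy to see in a query plan. Below is a minimal sketch using Python's sqlite3 (SQLite rather than Oracle, and a made-up people table and idx_people_name index), but the principle is the same: a bare comparison on an indexed column can use the index, while wrapping the column in a function hides it from the index unless a function-based index exists.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, name TEXT)")
conn.execute("CREATE INDEX idx_people_name ON people (name)")

def plan(sql):
    # EXPLAIN QUERY PLAN returns rows whose last column describes each step.
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(row[-1] for row in rows)

# Bare comparison on the indexed column: the planner can use the index.
indexed = plan("SELECT id FROM people WHERE name = 'JONES'")

# Function wrapped around the column: the index is no longer usable.
unindexed = plan("SELECT id FROM people WHERE UPPER(name) = 'JONES'")

print(indexed)    # a SEARCH using idx_people_name
print(unindexed)  # a full SCAN of people
```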
EXISTS vs. IN
The EXISTS function searches for the presence of a single row meeting the stated
criteria as opposed to the IN statement which looks for all occurrences.
TABLE1 - 1000 rows
TABLE2 - 1000 rows
(A)
SELECT t1.id
FROM table1 t1
WHERE t1.code IN (SELECT t2.code
FROM table2 t2);
(B)
SELECT t1.id
FROM table1 t1
WHERE EXISTS (SELECT '1'
FROM table2 t2
WHERE t2.code = t1.code)
For query A, all rows in TABLE2 will be read for every row in TABLE1. The effect will be 1,000,000 rows read from TABLE2. In the case of query B, a maximum of 1 row from TABLE2 will be read for each row of TABLE1, thus reducing the processing overhead of the statement.
Rule of thumb:
If the majority of the filtering criteria are in the subquery then the IN variation may
be more performant.
If the majority of the filtering criteria are in the top query then the EXISTS variation
may be more performant.
I would suggest that you try both variants and see which works best.
Note: in later versions of Oracle there is little difference between EXISTS and IN operations.
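To make the comparison concrete, here is a small runnable sketch of queries A and B using Python's sqlite3 (sample data invented for the demo). Both forms return the same rows, which is why the choice between them is purely a performance question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER, code TEXT);
CREATE TABLE table2 (code TEXT);
INSERT INTO table1 VALUES (1,'A'), (2,'B'), (3,'C');
INSERT INTO table2 VALUES ('A'), ('C'), ('X');
""")

# Query A: IN looks for all occurrences in the subquery.
in_rows = conn.execute(
    "SELECT t1.id FROM table1 t1 "
    "WHERE t1.code IN (SELECT t2.code FROM table2 t2) ORDER BY t1.id").fetchall()

# Query B: EXISTS only needs to find a single matching row.
exists_rows = conn.execute(
    "SELECT t1.id FROM table1 t1 "
    "WHERE EXISTS (SELECT 1 FROM table2 t2 WHERE t2.code = t1.code) "
    "ORDER BY t1.id").fetchall()

print(in_rows, exists_rows)  # identical result sets
```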
Presence Checking
The first question you should ask yourself is, "Do I need to check for the presence
of
a
record?"
Alternatives
to
presence
checking
include:
Use the MERGE statement if you are not sure if data is already present.
Perform an insert and trap the failure caused by the row already being present, using the DUP_VAL_ON_INDEX exception handler.
Perform an update and test for no rows updated using SQL%ROWCOUNT.
If none of these options are right for you and processing is conditional on the
presence of certain records in a table, you may decide to code something like the
following.
SELECT Count(*)
INTO v_count
FROM items
WHERE item_size = 'SMALL';
IF v_count = 0 THEN
-- Do processing related to no small items present
END IF;
If there are many small items, time and processing will be lost retrieving multiple
records which are not needed. This would be better written like one of the
following.
SELECT COUNT(*)
INTO v_count
FROM items
WHERE item_size = 'SMALL'
AND rownum = 1;
IF v_count = 0 THEN
-- Do processing related to no small items present
END IF;
OR
SELECT COUNT(*)
INTO v_count
FROM dual
WHERE EXISTS (SELECT 1
FROM items
WHERE item_size = 'SMALL');
IF v_count = 0 THEN
-- Do processing related to no small items present
END IF;
In these examples only a single record is retrieved in the presence/absence check.
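A quick sketch of the difference using Python's sqlite3 (SQLite has no DUAL table, so the EXISTS form is written directly; the items data is invented): the plain COUNT(*) visits every matching row, while EXISTS can stop at the first one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (item_size TEXT)")
conn.executemany("INSERT INTO items VALUES (?)",
                 [("SMALL",)] * 5000 + [("LARGE",)] * 5000)

# Naive presence check: counts every matching row.
full_count = conn.execute(
    "SELECT COUNT(*) FROM items WHERE item_size = 'SMALL'").fetchone()[0]

# Cheap presence check: EXISTS can stop at the first matching row.
present = conn.execute(
    "SELECT EXISTS (SELECT 1 FROM items WHERE item_size = 'SMALL')"
).fetchone()[0]

print(full_count, present)  # thousands of matching rows, but presence is just 1/0
```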
Inequalities
If a query uses inequalities (item_no > 100) the optimizer must estimate the
number of rows returned before it can decide the best way to retrieve the data.
This estimation is prone to errors. If you are aware of the data and its distribution
you can use optimizer hints to encourage or discourage full table scans to improve
performance.
If an index is being used for a range scan on the column in question, the
performance can be improved by substituting >= for >. In this case, item_no >
100 becomes item_no >= 101. In the first case, a full scan of the index will occur.
In the second case, Oracle jumps straight to the first index entry with an item_no
of 101 and range scans from this point. For large indexes this may significantly
reduce the number of blocks read.
If we now want to limit the rows brought back from the "D" table we may write the
following.
FROM d, c, b, a
WHERE a.join_column = 12345
AND a.join_column = b.join_column
AND b.join_column = c.join_column
AND c.join_column = d.join_column
AND d.name = 'JONES';
Depending on the number of rows and the presence of indexes, Oracle may now pick
"D" as the driving table. Since "D" now has two limiting factors (join_column and
name), it may be a better candidate as a driving table, so the statement may be
better written as follows.
FROM c, b, a, d
WHERE d.name = 'JONES'
AND d.join_column = 12345
AND d.join_column = a.join_column
AND a.join_column = b.join_column
AND b.join_column = c.join_column
This grouping of limiting factors will guide the optimizer more efficiently, making
table "D" return relatively few rows, and so make it a more efficient driving table.
Remember, the order of the items in both the FROM and WHERE clauses will not
force the optimizer to pick a specific table as the driving table, but it may influence
its decision. The grouping of limiting conditions onto a single table will reduce the
number of rows returned from that table, and will therefore make it a stronger
candidate for becoming the driving table.
Caching Tables
Queries will execute much faster if the data they reference is already cached. For
small frequently used tables performance may be improved by caching tables.
Normally, when full table scans occur, the cached data is placed on the Least
Recently Used (LRU) end of the buffer cache. This means that it is the first data to
be paged out when more buffer space is required. If the table is cached (ALTER
TABLE employees CACHE;) the data is placed on the Most Recently Used (MRU) end
of the buffer, and so is less likely to be paged out before it is re-queried. Caching
tables may alter the CBO's path through the data and should not be used without
careful consideration.
Improving Parse Speed
Execution plans for SELECT statements are cached by the server, but unless the
exact same statement is repeated the stored execution plan details will not be
reused. Even differing spaces in the statement will cause this lookup to fail. Use of
bind variables allows you to repeatedly use the same statements whilst changing
the WHERE clause criteria. Assuming the statement does not have a cached
execution plan it must be parsed before execution. The parse phase for statements
can be decreased by efficient use of aliasing. If an alias is not present, the engine
must resolve which tables own the specified columns. The following is an example.
Bad Statement:

SELECT first_name,
       last_name,
       country
FROM   employee,
       countries
WHERE  country_id = id
AND    last_name = 'HALL';

Good Statement:

SELECT e.first_name,
       e.last_name,
       c.country
FROM   employee e,
       countries c
WHERE  e.country_id = c.id
AND    e.last_name = 'HALL';
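The bind-variable point can be sketched in Python's sqlite3, where the ? placeholder plays the role of an Oracle bind variable (table and rows invented for the demo). The SQL text stays identical across calls, so a server that caches plans by statement text can reuse the cached plan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (first_name TEXT, last_name TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?)",
                 [("TIM", "HALL"), ("JOHN", "JONES")])

# One parameterized statement reused with different criteria: only the
# bound value changes, never the statement text itself.
query = "SELECT e.first_name FROM employee e WHERE e.last_name = ?"
hall  = conn.execute(query, ("HALL",)).fetchall()
jones = conn.execute(query, ("JONES",)).fetchall()
print(hall, jones)
```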
NVL2
The NVL2 function accepts three parameters: if the first parameter is not null it returns the second parameter, otherwise it returns the third.

SQL> SELECT id, NVL2(col1, col2, col3) AS output FROM null_test_tab ORDER BY id;

        ID OUTPUT
---------- ----------
         1 TWO
         2 THREE
         3 THREE
         4 THREE

4 rows selected.

SQL>
COALESCE
The COALESCE function was introduced in Oracle 9i. It accepts two or more parameters and returns the
first non-null value in a list. If all parameters contain null values, it returns null.
SQL> SELECT id, COALESCE(col1, col2, col3) AS output FROM null_test_tab ORDER BY id;

        ID OUTPUT
---------- ----------
         1 ONE
         2 TWO
         3 THREE
         4 THREE

4 rows selected.

SQL>
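The outputs above can be reproduced with Python's sqlite3. The contents of null_test_tab are an assumption reconstructed from the result listings, and since SQLite has COALESCE but not NVL2, a CASE expression stands in for NVL2:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Assumed contents of null_test_tab, reconstructed from the outputs shown.
conn.execute("CREATE TABLE null_test_tab (id INTEGER, col1 TEXT, col2 TEXT, col3 TEXT)")
conn.executemany("INSERT INTO null_test_tab VALUES (?,?,?,?)", [
    (1, "ONE", "TWO", "THREE"),
    (2, None,  "TWO", "THREE"),
    (3, None,  None,  "THREE"),
    (4, None,  None,  "THREE"),
])

# COALESCE: first non-null value in the list (standard SQL, works as in Oracle).
coalesced = conn.execute(
    "SELECT id, COALESCE(col1, col2, col3) FROM null_test_tab ORDER BY id").fetchall()

# NVL2(col1, col2, col3) has no SQLite equivalent; CASE expresses the same rule.
nvl2 = conn.execute(
    "SELECT id, CASE WHEN col1 IS NOT NULL THEN col2 ELSE col3 END "
    "FROM null_test_tab ORDER BY id").fetchall()

print(coalesced)
print(nvl2)
```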
Load the session statistics such as Session Start & End Time,
Success Rows, Failed Rows and Rejected Rows etc. into a database
table for audit/log purpose.
Scenario:
Load the session statistics such as Session Start & End Time, Success Rows, Failed Rows and
Rejected Rows etc. into a database table for audit/log purpose.
Solution:
After performing the below solution steps your end workflow will look as follows:
START => SESSION1 => ASSIGNMENT TASK => SESSION2
SOLUTION STEPS

SESSION1
This session is used to achieve your actual business logic, meaning this session will perform your actual data load. It can be anything: File => Table, Table => Table, or Table => File.

WORKFLOW VARIABLES
Create the following workflow variables:
=> $$Workflowname
=> $$SessionStartTime
=> $$SessionEndTime
=> $$TargetSuccessrows
=> $$TargetFailedRows

ASSIGNMENT TASK
Use the Expression tab in the Assignment Task and assign as follows:
$$Workflowname = $PMWorkflowName
$$SessionStartTime = $SESSION1.StartTime
$$SessionEndTime = $SESSION1.Endtime
$$TargetSuccessrows = $SESSION1.TgtSuccessRows
$$TargetFailedRows = $SESSION1.TgtFailedRows
SESSION2
This session is used to load the session statistics into a database table.
=> This should call a mapping, say m_sessionLog.
=> This mapping m_sessionLog should have mapping variables for the above defined workflow variables, such as $$wfname, $$Stime, $$Etime, $$TSRows and $$TFRows.
=> This mapping m_sessionLog should use a dummy source, and it must have an Expression transformation and a target (database Audit table).
=> Inside the Expression you must assign the mapping variables to the output ports:
workflowname = $$wfname
starttime = $$Stime
endtime = $$Etime
SuccessRows = $$TSRows
FailedRows = $$TFRows
=> Create a target database table with the following columns: workflow name, start time, end time, success rows and failed rows.
=> Connect all the required output ports to the target, which is nothing but your audit table.

PRE-SESSION VARIABLE ASSIGNMENT
=> Session 2: In the Pre-session variable assignment tab, assign mapping variable = workflow variable.
=> In our case:
$$wfname = $$Workflowname
$$Stime = $$SessionStartTime
$$Etime = $$SessionEndTime
$$TSRows = $$TargetSuccessrows
$$TFRows = $$TargetFailedrows
Workflow Execution
Posted 8th February 2012 by Prafull Dangore
Define $OutputFileName = your file path here in the parameter file.
In the properties of the Update Strategy, write the condition like this:
IIF(SAL<3000, DD_INSERT, DD_REJECT)
09/09/2009
11/11/2010
Solution:
1. Connect the SQF to an Expression transformation.
2. In the Expression, make hire_date an input-only port and add another port hire_date1 as an output port with the Date data type.
3. In the output port hire_date1, write a condition like the below:
TO_DATE(TO_CHAR(hire_date), 'YYYYMMDD')
How to convert string input data to decimal with decimal places in Informatica? Eg::
12345678
You have a source like this:

E_NO  YEAR       DAYNO
----  ---------  -----
1     01-JAN-07  301
2     01-JAN-08  200

Year column is a date and dayno is numeric that represents a day (as in 365 for 31-Dec-Year).
Convert the Dayno to corresponding year's month and date and then send to target.

Target
E_NO  YEAR_MONTH_DAY
----  --------------
1     29-OCT-07
2     19-JUL-08

Solution:
Use the below date expression in an Expression transformation:
ADD_TO_DATE(YEAR, 'DD', DAYNO)
Posted 22nd December 2011 by Prafull Dangore
Find the 3rd MAX & MIN salary in the emp table
Scenario:
Find the 3rd MAX & MIN salary in the emp table
Solution:
Max:
select distinct sal from emp e1
where 3 = (select count(distinct sal) from emp e2 where e1.sal <= e2.sal);

Min:
select distinct sal from emp e1
where 3 = (select count(distinct sal) from emp e2 where e1.sal >= e2.sal);
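The same correlated-subquery pattern can be checked with Python's sqlite3 (sample salaries invented): with distinct salaries 10 through 60, the 3rd MAX is 40 and the 3rd MIN is 30.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (sal INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?)",
                 [(10,), (20,), (30,), (40,), (50,), (60,)])

# 3rd MAX: exactly 3 distinct salaries are >= the answer.
third_max = conn.execute(
    "SELECT DISTINCT sal FROM emp e1 WHERE 3 = "
    "(SELECT COUNT(DISTINCT sal) FROM emp e2 WHERE e1.sal <= e2.sal)"
).fetchone()[0]

# 3rd MIN: exactly 3 distinct salaries are <= the answer.
third_min = conn.execute(
    "SELECT DISTINCT sal FROM emp e1 WHERE 3 = "
    "(SELECT COUNT(DISTINCT sal) FROM emp e2 WHERE e1.sal >= e2.sal)"
).fetchone()[0]

print(third_max, third_min)  # 40 30
```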
Sql query to find EVEN & ODD NUMBERED records from a table.
Scenario:
Sql query to find EVEN & ODD NUMBERED records from a table.
Solution:
Even - select * from emp where rowid in (select decode(mod(rownum,2),0,rowid, null) from emp);
Odd - select * from emp where rowid in (select decode(mod(rownum,2),0,null ,rowid) from emp);
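SQLite has no ROWNUM or DECODE, so an equivalent sketch in Python's sqlite3 uses ROW_NUMBER() and a modulo filter (sample data invented); the idea is the same: number the rows, then keep the even- or odd-positioned ones.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?)", [(i,) for i in range(1, 8)])

# Even-positioned rows: row number modulo 2 is zero.
even = conn.execute("""
    SELECT empno FROM (
        SELECT empno, ROW_NUMBER() OVER (ORDER BY empno) AS rn FROM emp)
    WHERE rn % 2 = 0""").fetchall()

# Odd-positioned rows: row number modulo 2 is one.
odd = conn.execute("""
    SELECT empno FROM (
        SELECT empno, ROW_NUMBER() OVER (ORDER BY empno) AS rn FROM emp)
    WHERE rn % 2 = 1""").fetchall()

print(even, odd)
```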
Display ename and monthly salary from emp.
select ename, sal/12 as monthlysal from emp;

Select all records from emp table where deptno = 10 or 40.
select * from emp where deptno=10 or deptno=40;

Select all records from emp table where deptno=30 and sal>1500.
select * from emp where deptno=30 and sal>1500;

Select all records from emp where job not in SALESMAN or CLERK.
select * from emp where job not in ('SALESMAN','CLERK');

Select all records from emp where ename in 'JONES','BLAKE','SCOTT','KING' and 'FORD'.
select * from emp where ename in ('JONES','BLAKE','SCOTT','KING','FORD');

Select all records where ename starts with S and its length is 6 char.
select * from emp where ename like 'S_____';

Select all records where ename may be any number of characters but it should end with R.
select * from emp where ename like '%R';

Count MGR and their salary in emp table.
select count(MGR), count(sal) from emp;

In emp table add comm+sal as total sal.
select ename, (sal+nvl(comm,0)) as totalsal from emp;

Select any salary <3000 from emp table.
select * from emp
where sal > any (select sal from emp where sal<3000);

Select all salary <3000 from emp table.
select * from emp
where sal > all (select sal from emp where sal<3000);

Select all the employees ordered by deptno and sal in descending order.
select ename, deptno, sal from emp order by deptno, sal desc;

How can I create an empty table emp1 with same structure as emp?
Create table emp1 as select * from emp where 1=2;

How to retrieve records where sal is between 1000 and 2000?
Select * from emp where sal>=1000 and sal<2000;

Select all records where deptno of both emp and dept table matches.
select * from emp where exists (select * from dept where emp.deptno=dept.deptno);

If there are two tables emp1 and emp2, and both have common records, how can I fetch all the records but common records only once?
(Select * from emp) Union (Select * from emp1);

How to fetch only common records from two tables emp and emp1?
(Select * from emp) Intersect (Select * from emp1);

How can I retrieve all records of emp1 that are not present in emp2?
(Select * from emp) Minus (Select * from emp1);

Count the total salary deptno-wise where more than 2 employees exist.
SELECT deptno, sum(sal) As totalsal
FROM emp
GROUP BY deptno
HAVING COUNT(empno) > 2;
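The set-operator answers above can be verified with Python's sqlite3 (sample data invented; note that Oracle's MINUS is spelled EXCEPT in SQLite and in standard SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp  (empno INTEGER);
CREATE TABLE emp1 (empno INTEGER);
INSERT INTO emp  VALUES (1), (2), (3);
INSERT INTO emp1 VALUES (2), (3), (4);
""")

# UNION: all rows from both tables, common rows only once.
union = conn.execute("SELECT * FROM emp UNION SELECT * FROM emp1").fetchall()

# INTERSECT: only the common rows.
intersect = conn.execute("SELECT * FROM emp INTERSECT SELECT * FROM emp1").fetchall()

# Oracle MINUS == SQLite/standard-SQL EXCEPT: rows in emp not in emp1.
minus = conn.execute("SELECT * FROM emp EXCEPT SELECT * FROM emp1").fetchall()

print(union, intersect, minus)
```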
Explanation:
flat file
Relational table
view
synonyms
All of the above (correct)
Which value returned by NewLookupRow port says that Integration Service does not update or
insert the row in the cache?
Explanation:
3 (wrong)
2
1
0
Which one needs a common key to join?
Explanation:
source qualifier
joiner (correct)
look up
Which one supports heterogeneous join?
Explanation:
source qualifier
joiner (correct)
look up
What is the use of the target load order?
Explanation:
Target load order: first the data is loaded into the dimension table and then into the fact table.
Target load order: first the data is loaded into the fact table and then into the dimension table.
Load the data into different targets at the same time. (wrong)
Which one is not tracing level?
Explanation:
terse
verbose
initialization
verbose initialization
terse initialization (correct)
Which output file is not created during session running?
Explanation:
Session log
workflow log
Error log
Bad files
cache files (correct)
Is the fact table normalised?
Explanation:
yes
no (correct)
Which value returned by NewLookupRow port says that Integration Service inserts the row into the
cache?
Explanation:
0 (wrong)
1
2
3
Which transformation works only on a relational source?
Explanation:
lookup
Union
joiner
Sql (correct)
Which are both connected and unconnected?
Explanation:
External Store Procedure (omitted)
Stored Procedure (correct)
Lookup (correct)
Advanced External Procedure Transformation
Can we generate alpha-numeric value in sequence generator?
Explanation:
yes
no (correct)
Which transformation is used by a COBOL source?
Explanation:
Advanced External Procedure Transformation
Cobol Transformation
Unstructured Data Transformation
Normalizer (correct)
What is VSAM normalizer transformation?
Explanation:
The VSAM normalizer transformation is the source qualifier transformation for a COBOL source
definition.
The VSAM normalizer transformation is the source qualifier transformation for a flat file source
definition.
The VSAM normalizer transformation is the source qualifier transformation for a xml source
definition. (wrong)
None of these
Explanation:
No
Yes (correct)
What is a mapplet?
Explanation:
Combination of reusable transformation.
Combination of reusable mapping
Set of transformations and it allows us to reuse (correct)
None of these
Explanation:
Sequence generator
Normalizer
Sql
Store Procedure (wrong)
Scenario:
How large is the database,used and free space?
Solution:
select round(sum(used.bytes) / 1024 / 1024 / 1024) || ' GB' "Database Size"
,      round(sum(used.bytes) / 1024 / 1024 / 1024) - round(free.p / 1024 / 1024 / 1024) || ' GB' "Used space"
,      round(free.p / 1024 / 1024 / 1024) || ' GB' "Free space"
from   (select bytes from v$datafile
        union all
        select bytes from v$tempfile
        union all
        select bytes from v$log) used
,      (select sum(bytes) as p from dba_free_space) free
group by free.p;
OR

@echo off
for /F "tokens=2,3,4 delims=/ " %%i in ('date/t') do set y=%%k
for /F "tokens=2,3,4 delims=/ " %%i in ('date/t') do set d=%%k%%i%%j
for /F "tokens=5-8 delims=:. " %%i in ('echo.^| time ^| find "current" ') do set t=%%i%%j
set t=%t%_
if "%t:~3,1%"=="_" set t=0%t%
set t=%t:~0,4%
set "theFilename=%d%%t%"
echo %theFilename%
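For comparison, the YYYYMMDD plus HHMM file-name stamp that the batch script assembles from date/t and time is a single strftime call in Python:

```python
from datetime import datetime

# Equivalent of d=%%k%%i%%j (YYYYMMDD) followed by t (HHMM) from the batch script.
the_filename = datetime.now().strftime("%Y%m%d%H%M")
print(the_filename)
```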
Posted 20th December 2011 by Prafull Dangore
PL/SQL Block contains:
Declare : Variable declarations - Optional
Begin : Procedural statements - Mandatory
Exception : any errors to be trapped - Optional
End : - Mandatory

5. What are the datatypes available in PL/SQL?
Following are the datatypes supported in Oracle PL/SQL.

Scalar Types:
BINARY_INTEGER, DEC, DECIMAL, DOUBLE PRECISION, FLOAT, INT, INTEGER, NATURAL, NATURALN, NUMBER, NUMERIC, PLS_INTEGER, POSITIVE, POSITIVEN, REAL, SIGNTYPE, SMALLINT, CHAR, CHARACTER, LONG, LONG RAW, NCHAR, NVARCHAR2, RAW, ROWID, STRING, UROWID, VARCHAR, VARCHAR2, DATE, INTERVAL DAY TO SECOND, INTERVAL YEAR TO MONTH, TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, BOOLEAN

Composite Types:
RECORD, TABLE, VARRAY

LOB Types:
BFILE, BLOB, CLOB, NCLOB

Reference Types:
REF CURSOR, REF object_type
6. What are %TYPE and %ROWTYPE? What are the advantages of using these over datatypes?
%TYPE provides the data type of a variable or a database column to that variable.
%ROWTYPE provides the record type that represents an entire row of a table or view, or the columns selected in the cursor.
The advantages are:
i. Need not know about the variable's data type.
ii. If the database definition of a column in a table changes, the data type of the variable changes accordingly.
The advantage is, if one changes the type or size of the column in the table, it will be reflected in our program unit without making any change.
%TYPE is used to refer to the column's datatype, whereas %ROWTYPE is used to refer to the whole record in a table.
7. What is difference between % ROWTYPE and TYPE RECORD ?
% ROWTYPE is to be used whenever query returns a entire row of a table or view.
TYPE rec RECORD is to be used whenever query returns columns of different table or views and
variables.
E.g. TYPE r_emp is RECORD (eno emp.empno%type, ename emp.ename%type);
e_rec emp%ROWTYPE;
cursor c1 is select empno, deptno from emp;
e_rec c1%ROWTYPE;
8. What is PL/SQL table ?
A PL/SQL table is a one-dimensional, unbounded, sparse collection of homogenous elements, indexed
by integers
One-dimensional
A PL/SQL table can have only one column. It is, in this way, similar to a one-dimensional array.
Unbounded or Unconstrained
There is no predefined limit to the number of rows in a PL/SQL table. The PL/SQL table grows
dynamically as you add more rows to the table. The PL/SQL table is, in this way, very different from
an array.
Related to this definition, no rows for PL/SQL tables are allocated for this structure when it is defined.
Sparse
In a PL/SQL table, a row exists in the table only when a value is assigned to that row. Rows do not
have to be defined sequentially. Instead you can assign a value to any row in the table. So row 15
could have a value of `Fox' and row 15446 a value of `Red', with no other rows defined in between.
Homogeneous elements
Because a PL/SQL table can have only a single column, all rows in a PL/SQL table contain values of the
same datatype. It is, therefore, homogeneous.
With PL/SQL Release 2.3, you can have PL/SQL tables of records. The resulting table is still, however,
homogeneous. Each row simply contains the same set of columns.
Indexed by integers
PL/SQL tables currently support a single indexing mode: by BINARY_INTEGER. This number acts as
the "primary key" of the PL/SQL table. The range of a BINARY_INTEGER is from -2^31-1 to 2^31-1, so
you have an awful lot of rows with which to work.
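The sparse, integer-indexed behaviour described above maps naturally onto a dictionary; here is a small illustrative sketch in Python (not PL/SQL) mirroring the row-15/row-15446 example:

```python
# A PL/SQL table behaves like a sparse, integer-keyed map: a row exists only
# once a value has been assigned to it, and keys need not be consecutive.
plsql_table = {}
plsql_table[15] = "Fox"
plsql_table[15446] = "Red"

print(len(plsql_table))      # only the assigned rows exist
print(15446 in plsql_table)  # any integer index can be used directly
```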
9. What is a cursor? Why is a cursor required?
Cursor is a named private SQL area from where information can be accessed.
Cursors are required to process rows individually for queries returning multiple rows.
10. Explain the two types of Cursors?
Implicit cursor: an implicit cursor is a type of cursor which is automatically maintained by the Oracle server itself. An implicit cursor returns only one row.

The WHERE CURRENT OF clause has the form:
UPDATE table_name SET set_clause WHERE CURRENT OF cursor_name;
DELETE FROM table_name WHERE CURRENT OF cursor_name;
Notice that the WHERE CURRENT OF clause references the cursor and not the record into which the
next fetched row is deposited.
The most important advantage to using WHERE CURRENT OF where you need to change the row
fetched last is that you do not have to code in two (or more) places the criteria used to uniquely
identify a row in a table. Without WHERE CURRENT OF, you would need to repeat the WHERE clause of
your cursor in the WHERE clause of the associated UPDATEs and DELETEs. As a result, if the table
structure changes in a way that affects the construction of the primary key, you have to make sure
that each SQL statement is upgraded to support this change. If you use WHERE CURRENT OF, on the
other hand, you only have to modify the WHERE clause of the SELECT statement.
This might seem like a relatively minor issue, but it is one of many areas in your code where you can
leverage subtle features in PL/SQL to minimize code redundancies. Utilization of WHERE CURRENT OF,
%TYPE, and %ROWTYPE declaration attributes, cursor FOR loops, local modularization, and other
PL/SQL language constructs can have a big impact on reducing the pain you may experience when you
maintain your Oracle-based applications.
Let's see how this clause would improve the previous example. In the jobs cursor FOR loop above, I
want to UPDATE the record that was currently FETCHed by the cursor. I do this in the UPDATE
statement by repeating the same WHERE used in the cursor because (task, year) makes up the
primary key of this table:
This is a less than ideal situation, as explained above: I have coded the same logic in two places, and
this code must be kept synchronized. It would be so much more convenient and natural to be able to
code the equivalent of the following statements:
Delete the record I just fetched.
or:
Update these columns in that row I just fetched.
A perfect fit for WHERE CURRENT OF! The next version of my winterization program below uses this
clause. I have also switched to a simple loop from FOR loop because I want to exit conditionally from
the loop:
DECLARE
   CURSOR fall_jobs_cur IS SELECT ... same as before ... ;
   job_rec fall_jobs_cur%ROWTYPE;
BEGIN
   OPEN fall_jobs_cur;
   LOOP
      FETCH fall_jobs_cur INTO job_rec;
      IF fall_jobs_cur%NOTFOUND
      THEN
         EXIT;
      ELSIF job_rec.do_it_yourself_flag = 'YOUCANDOIT'
      THEN
         UPDATE winterize
            SET responsible = 'STEVEN'
          WHERE CURRENT OF fall_jobs_cur;
         COMMIT;
         EXIT;
      END IF;
   END LOOP;
   CLOSE fall_jobs_cur;
END;
16. What is a database trigger? Name some usages of database trigger?
A database trigger is a stored procedure that is invoked automatically when a predefined event occurs.
Database triggers enable DBAs (Data Base Administrators) to create additional relationships between separate databases.
For example, the modification of a record in one database could trigger the modification of a record in a second database.
17. How many types of database triggers can be specified on a table ? What are they ?
Insert
Update
Before Row
After Row
o.k.
o.k.
Before Statement
After Statement
Delete
o.k.
o.k.
o.k.
o.k.
o.k.
o.k.
o.k.
o.k.
o.k.
o.k.
If the FOR EACH ROW clause is specified, then the trigger fires once for each row affected by the statement.
If WHEN clause is specified, the trigger fires according to the returned Boolean value.
the different types of triggers: * Row Triggers and Statement Triggers * BEFORE and AFTER Triggers
* INSTEAD OF Triggers * Triggers on System Events and User Events
18. What are two virtual tables available during database trigger execution?
The two virtual tables available are OLD and NEW.
The table columns are referred to as OLD.column_name and NEW.column_name.
For triggers related to INSERT, only NEW.column_name values are available.
For triggers related to UPDATE, both OLD.column_name and NEW.column_name values are available.
For triggers related to DELETE, only OLD.column_name values are available.
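OLD and NEW can be exercised end-to-end with Python's sqlite3, since SQLite row triggers expose the same two virtual rows (table names and values here are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (empno INTEGER, sal INTEGER);
CREATE TABLE audit_log (empno INTEGER, old_sal INTEGER, new_sal INTEGER);
-- OLD and NEW are both available in an UPDATE row trigger, as in Oracle.
CREATE TRIGGER trg_emp_sal AFTER UPDATE ON emp
FOR EACH ROW
BEGIN
    INSERT INTO audit_log VALUES (OLD.empno, OLD.sal, NEW.sal);
END;
INSERT INTO emp VALUES (7369, 800);
UPDATE emp SET sal = 900 WHERE empno = 7369;
""")

log = conn.execute("SELECT * FROM audit_log").fetchall()
print(log)  # one audit row holding the OLD and NEW salary
```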
19.What happens if a procedure that updates a column of table X is called in a database trigger of the same table ?
To avoid the mutating table error, the procedure should be declared as an AUTONOMOUS TRANSACTION. By this the procedure will be treated as a separate identity.
20. Write the order of precedence for validation of a column in a table ?
I. done using Database triggers.
ii. done using Integrity Constraints.
21. What is an Exception? What are types of Exception?
Predefined
Do not declare; allow the Oracle server to raise them implicitly:
NO_DATA_FOUND, TOO_MANY_ROWS, INVALID_CURSOR, ZERO_DIVIDE
Handled with: WHEN exception_name THEN
Non-predefined
Declare within the declarative section and allow the Oracle server to raise them implicitly.
SQLCODE returns the numeric value for the error code.
SQLERRM returns the message associated with the error number.
DECLARE: PRAGMA EXCEPTION_INIT (exception, error_number)
Handled with: WHEN exception_name THEN
User defined
Declare within the declarative section and raise explicitly:
IF condition THEN RAISE exception_name or RAISE_APPLICATION_ERROR
22. What is Pragma EXCEPTION_INIT? Explain the usage?
Pragma EXCEPTION_INIT allows you to handle an Oracle predefined message with your own message; that is, you can instruct the compiler to associate your specific exception to an Oracle predefined error number at compile time. This way you improve the readability of your program, and handle it according to your own way. It should be declared in the DECLARE section.
Example:
declare
   salary number;
   FOUND_NOTHING exception;
   Pragma exception_init(FOUND_NOTHING, 100);
begin
   select sal into salary from emp where ename = 'ANURAG';
   dbms_output.put_line(salary);
exception
   when FOUND_NOTHING then
      dbms_output.put_line(SQLERRM);
end;
23. What is Raise_application_error?
Raise_application_error is used to create your own error messages, which can be more descriptive than named exceptions.
Syntax is: Raise_application_error (error_number, error_message);
where error_number is between -20000 and -20999.
24. What are the return values of functions SQLCODE and SQLERRM?
PL/SQL provides error information via two built-in functions, SQLCODE & SQLERRM.
SQLCODE returns the current error code. For a user-defined exception it returns 1.
SQLERRM returns the current error message text. For a user-defined exception it returns "User Defined Exception".
25. Where are the Pre_defined_exceptions stored?
PL/SQL declares predefined exceptions in the STANDARD package.
26. What is a stored procedure ?
A stored procedure is a PL/SQL subprogram stored in the database: a program running in the database that can take complex actions based on the inputs you send it. Using a stored procedure is faster than doing the same work on a client, because the program runs right inside the database server. Stored procedures are normally written in PL/SQL or Java.
Advantages of stored procedures: extensibility, modularity, reusability, maintainability and one-time compilation.
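A minimal sketch of a stored procedure (the emp table and percentage logic are assumed for illustration):

```sql
create or replace procedure raise_salary (
  p_empno in number,
  p_pct   in number)
is
begin
  -- Apply a percentage raise to one employee; runs inside the database server.
  update emp
     set sal = sal * (1 + p_pct / 100)
   where empno = p_empno;
end raise_salary;
/
-- Called from the SQL*Plus environment:
-- EXECUTE raise_salary(7369, 10);
```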
28. What are the modes of parameters that can be passed to a procedure ?
1. IN: used to pass values to the subprogram when it is invoked.
2. OUT: used to return values to the caller of the subprogram.
3. IN OUT: used both to pass a value in and to return a value out.
29. What are the two parts of a procedure ?
PROCEDURE name (parameter list...)
IS
  local variable declarations
BEGIN
  executable statements
EXCEPTION
  exception handlers
END;
A procedure can be called in the following ways:
a) CALL <procedure name> directly
b) EXECUTE <procedure name> from the calling environment
c) <procedure name> from other procedures or functions or packages
Functions can be called in the following ways:
a) EXECUTE <function name> from the calling environment. Always use a variable to get the return value.
b) As part of an SQL/PL SQL expression
33. What are two parts of package ?
The two parts of package are PACKAGE SPECIFICATION & PACKAGE BODY.
Package Specification contains declarations that are global to the packages and local to the schema.
Package Body contains actual procedures and local declaration of the procedures and cursor
declarations.
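The two parts can be sketched as follows (the package, procedure and table names are invented for illustration):

```sql
-- Package specification: declarations visible to callers.
create or replace package emp_pkg is
  g_bonus_pct number := 5;  -- global variable, visible outside the package
  procedure give_bonus(p_empno in number);
end emp_pkg;
/
-- Package body: the actual implementation, hidden from callers.
create or replace package body emp_pkg is
  procedure give_bonus(p_empno in number) is
  begin
    update emp
       set sal = sal * (1 + g_bonus_pct / 100)
     where empno = p_empno;
  end give_bonus;
end emp_pkg;
/
```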
33.What is difference between a Cursor declared in a procedure and Cursor declared in a package
specification ?
A cursor declared in a package specification is global and can be accessed by other procedures or functions in the package.
A cursor declared in a procedure is local to that procedure and cannot be accessed by other procedures.
a) USER_OBJECTS, ALL_OBJECTS, DBA_OBJECTS
b) USER_SOURCE, ALL_SOURCE, DBA_SOURCE
c) USER_DEPENDENCIES
Overloaded procedures are 2 or more procedures with the same name but different arguments.
The arguments need to differ by datatype family, not just by name; e.g. CHAR and VARCHAR2 are from the same family, so they do not distinguish overloads.
Packages
The main advantages of packages are:
1- Since a package has its specification and body separate, whenever any DDL is run and a proc/func inside the package depends on it, only the body gets invalidated and not the spec. So any other proc/func dependent on the package does not get invalidated.
2- Whenever any func/proc from a package is called, the whole package is loaded into memory, so all objects of the package are available in memory, which means faster execution if any of them is called. And since we put all related procs/funcs in one package, this feature is useful, as we are likely to need most of those objects.
3- We can declare global variables in the package.
38.Is it possible to use Transaction control Statements such a ROLLBACK or COMMIT in Database Trigger ?
Why ?
Not in an ordinary trigger; you need an autonomous transaction. Autonomous Transaction is a feature of Oracle 8i which maintains the state of its own transactions and commits or rolls them back without being affected by the commit or rollback of the surrounding transaction.
Here is a simple example to understand this (note that without the autonomous pragma, the COMMIT inside the nested procedure also commits the outer INSERT, so the later ROLLBACK has no effect):
SamSQL :> declare
  Procedure InsertInTest_Table_B is
  BEGIN
    INSERT into Test_Table_B(x) values (1);
    Commit;
  END;
BEGIN
  INSERT INTO Test_Table_A(x) values (123);
  InsertInTest_Table_B;
  Rollback;
END;
/
PL/SQL procedure successfully completed.
SamSQL :> Select * from Test_Table_A;
X
----------
123
A function always returns a value, while a procedure can return one or more values through parameters.
A function can be called directly in a SQL statement, like select "func_name" from dual, while a procedure cannot.
40. What is Data Concurrency and Consistency?
Concurrency: how well multiple sessions can access the same data simultaneously.
Consistency: how consistent the view of the data is between and within multiple sessions, transactions or statements.
41. Talk about "Exception Handling" in PL/SQL?
Exception handlers are written to handle the exceptions thrown by programs. We have user-defined and system exceptions.
User-defined exceptions are exception names given by the user (explicitly declared and raised) to handle specific behaviour of the program.
System exceptions are raised due to invalid data (you don't have to declare these); a few examples are WHEN NO_DATA_FOUND, WHEN OTHERS etc.
44. Can we use commit or rollback command in the exception part of PL/SQL block?
Yes, we can use the TCL commands(commit/rollback) in the exception block of a stored
procedure/function. The code in this part of the program gets executed like those in the body without
any restriction. You can include any business functionality whenever a condition in main block(body of
a proc/func) fails and requires a follow-thru process to terminate the execution gracefully!
DECLARE
  ...
BEGIN
  ...
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    INSERT INTO err_log(err_code, code_desc)
    VALUES (1403, 'No data found');
    COMMIT;
    RAISE;
END;
46. What is bulk binding please explain me in brief ?
Bulk binds (BULK COLLECT, FORALL) are a PL/SQL technique where, instead of multiple individual SELECT, INSERT, UPDATE or DELETE statements being executed to retrieve data from, or store data in, a table, all of the operations are carried out at once, in bulk.
This avoids the context switching you get when the PL/SQL engine has to pass over to the SQL engine, then back to the PL/SQL engine, and so on, when you individually access rows one at a time. To do bulk binds with INSERT, UPDATE and DELETE statements, you enclose the SQL statement within a PL/SQL FORALL statement.
To do bulk binds with SELECT statements, you include the BULK COLLECT INTO a collection clause in the SELECT statement instead of using a simple INTO.
Collections, BULK COLLECT and FORALL are features in Oracle 8i, 9i and 10g PL/SQL that can really make a difference to your PL/SQL performance.
Bulk binding is used for avoiding the context switching between the SQL engine and the PL/SQL engine. If we use a simple FOR loop in a PL/SQL block, it does a context switch between the SQL and PL/SQL engines for each row processed, which degrades the performance of the PL/SQL block.
So, to avoid the context switching between the two engines, we use the FORALL keyword with collections (PL/SQL tables) for DML. FORALL is a PL/SQL keyword. It gives good results and increases performance.
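A minimal sketch of both techniques together (the emp table and the 10% raise are assumed for illustration):

```sql
declare
  type num_tab_t is table of number;
  v_empnos num_tab_t;
begin
  -- One context switch fetches every employee number into the collection.
  select empno bulk collect into v_empnos from emp;

  -- One context switch sends all the updates to the SQL engine in bulk.
  forall i in 1 .. v_empnos.count
    update emp set sal = sal * 1.1
     where empno = v_empnos(i);
end;
/
```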
47.Why Functions are used in oracle ? Can Functions Return more than 1 values? Why Procedures are used in oracle ? What are the Disadvantages of packages? What are the Global Variables in Packages?
Functions are used where we can't use a procedure, i.e. we can use a function in SELECT statements and in the WHERE clause of DELETE/UPDATE statements, but a procedure can't be used like that.
It is true that a function returns only one value, but a function can be made to return more than one value by using OUT parameters and also by using ref cursors.
There is no harm in using OUT parameters, but when functions are used in DML statements we can't use OUT parameters (as per the rules).
49. What are the restrictions on Functions ?
A function called from a SQL statement cannot contain DML statements; we can use SELECT statements in a function.
If you create a function with DML statements, you get the message that the function has been created, but if you then use it in a SELECT statement, you get an error.
50. What happens when a package is initialized ?
When a package is initialized, that is, called for the first time, the entire package is loaded into the SGA and any variable declared in the package is initialized.
52. What is PL/SQL table?
A PL/SQL table is a datatype in the procedural language extension. It has two columns: one for the index (say, a binary integer) and another for the data, and it can extend to any number of rows (not columns) in future.
A PL/SQL table is nothing but a one-dimensional array. It is used to hold similar types of data for temporary storage. It is indexed by binary integer.
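A minimal sketch of an index-by (PL/SQL) table (the names are invented for illustration):

```sql
declare
  -- One column of data, indexed by a binary integer.
  type name_tab_t is table of varchar2(30) index by binary_integer;
  v_names name_tab_t;
begin
  v_names(1) := 'SMITH';
  v_names(2) := 'JONES';
  dbms_output.put_line(v_names.count);  -- number of rows currently held
end;
/
```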
3. Can I write a PL/SQL block inside the exception section?
Yes, you can write a PL/SQL block inside an exception section. Suppose you want to insert the exception detail into your error log table; you can write an INSERT INTO statement in the exception part. To handle any exception which may be raised in your exception part, you can write a further PL/SQL block there.
54. Can we truncate some of the rows from the table instead of truncating the full table?
You can truncate a few rows from a table if the table is partitioned. You can truncate a single partition and keep the remaining ones.
CREATE TABLE parttab (
  state VARCHAR2(2),
  sales NUMBER(10,2))
PARTITION BY LIST (state) (
  PARTITION northwest VALUES ('OR', 'WA') TABLESPACE uwdata,
  PARTITION southwest VALUES ('AZ', 'CA') TABLESPACE uwdata);
INSERT INTO parttab VALUES ('OR', 100000);
INSERT INTO parttab VALUES ('WA', 200000);
INSERT INTO parttab VALUES ('AZ', 300000);
INSERT INTO parttab VALUES ('CA', 400000);
COMMIT;
SELECT * FROM parttab;
ALTER TABLE parttab TRUNCATE PARTITION southwest;
SELECT * FROM parttab;
56. What is the difference between a reference cursor and normal cursor ?
REF cursors are different than your typical, standard cursors. With standard cursors, you know the cursor's query ahead of time. With REF cursors, you do not have to know the query ahead of time; you can build the cursor on the fly.
A normal cursor is a static cursor. A reference cursor is used to create a dynamic cursor. There are two types of ref cursors: 1. weak cursor and 2. strong cursor.
TYPE ref_name IS REF CURSOR [RETURN type];
[RETURN type] means a %ROWTYPE. If the return type is mentioned, it is a strong cursor; else it is a weak cursor.
The reference cursor does not support the FOR UPDATE clause.
A normal cursor is used to process more than one record in PL/SQL. A ref cursor is a type which is going to hold a set of records which can be sent out through the procedure or function OUT variables. We can use a ref cursor as an IN OUT parameter.
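A minimal sketch of a strong and a weak ref cursor (the emp and dept tables are assumed):

```sql
declare
  -- Strong: the return type is fixed at declaration time.
  type emp_rc_t is ref cursor return emp%rowtype;
  -- Weak: no return type; the query can be decided at run time.
  type any_rc_t is ref cursor;
  v_emp emp_rc_t;
  v_any any_rc_t;
  v_row emp%rowtype;
begin
  open v_emp for select * from emp;
  fetch v_emp into v_row;
  close v_emp;

  -- The same weak variable can be reopened for a different query.
  open v_any for select * from dept;
  close v_any;
end;
/
```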
58. Based on what conditions can we decide whether to use a table or a view or a materialized view ?
A table is the basic entity in any RDBMS, so for storing data you need a table.
For a view: if you have a complex query from which you want to extract data again and again, and moreover it is standard data which is required by many other users, e.g. for report generation, then create a view. Avoid insert/update/delete through a view unless it is essential; keep the view read-only (for showing reports).
For a materialized view: this is mainly used in data warehousing. If you have two databases and you want a view in both databases, remember that in data warehousing we deal in GB or TB data sizes. So create a summary table in one database and make the replica (materialized view) in the other database.
When to create a materialized view:
[1] if data is in bulk and you need the same data in more than one database, then create a summary table at one database and replicas in the other databases;
[2] if you have summary columns in the projection list of the query.
The main advantages of a materialized view over a simple view are:
[1] it saves data in the database, whereas only a simple view's definition is saved in the database;
[2] you can create partitions or indexes on a materialized view to enhance its performance, but you cannot on a simple view.
59. What is the difference between all_ and user_ tables ?
An ALL_ view displays all the information accessible to the current user, including information from the current user's schema as well as information from objects in other schemas, if the current user has access to those objects by way of grants of privileges or roles.
A USER_ view displays all the information from the schema of the current user. No special privileges are required to query these views.
The User_tables data dictionary view contains all the tables created by the user in that schema, whereas All_tables lists tables created in different schemas as well: if the user has grants to access a table in a different schema, then he can see that table through this dictionary view.
61. what is p-code and sourcecode ?
P-code is pre-compiled code stored in the shared pool of the System Global Area after the Oracle instance is started, whereas source code is the plain code of stored procedures, packages, triggers, functions etc., which is stored in the Oracle data dictionary. Every Oracle session accesses the p-code of the objects it has EXECUTE permission on.
Source code is stored in the user_source data dictionary view for user-defined stored procedures, triggers, packages and functions. DBA_objects lists all objects in the database; ALL_objects lists all objects accessible to the user.
Source code: the code, say a PL/SQL block, that the user types for execution. P-code: the source code after syntax check, parse tree generation, semantic check and further compilation of the parse tree, giving the final p-code ready for data fetch or manipulation.
63. Is there any limitation on no. of triggers that can be created on a table?
There is no limit on the number of triggers on one table. You can write as many as you want for insert, update or delete, under different names. If a table has n columns, we can create n triggers based on each column.
64.What happens when DML Statement fails? A. User level rollback B. Statement Level Rollback C. System level Rollback
When a DML statement fails, Oracle performs a statement-level rollback: only the changes made by that statement are undone, and the rest of the transaction remains active. (By contrast, DDL issues an implicit commit. Eg: create a table t1, insert a record in t1, then try to create the same object t1 again - the insert stays committed even though the second CREATE fails.)
65.What steps should a programmer should follow for better tunning of the PL/SQL blocks?
SQL Queries Best Practices
1.
Always use the where clause in your select statement to narrow the number of rows returned.
If we dont use a where clause, the Oracle performs a full table scan on our table and returns all of the
rows.
2.
Use EXISTS clause instead of IN clause as it is more efficient than IN and performs faster.
Ex:
Replace
SELECT * FROM DEPT WHERE DEPTNO IN
(SELECT DEPTNO FROM EMP E)
With
SELECT * FROM DEPT D WHERE EXISTS
(SELECT 1 FROM EMP E WHERE D.DEPTNO = E.DEPTNO)
Note: IN checks all rows. Only use IN if the table in the sub-query is extremely small.
3.
When you have a choice of using the IN or the BETWEEN clauses in your SQL, use the BETWEEN
clause as it is much more efficient than IN.
Depending on the range of numbers in a BETWEEN, the optimizer will choose to do a full table scan or
use the index.
4.
Avoid WHERE clauses that are non-sargable. Non-sargable search arguments in the WHERE clause,
such as "IS NULL", "OR", "<>", "!=", "!>", "!<", "NOT", "NOT EXISTS", "NOT IN", "NOT LIKE", and
"LIKE %500" can prevent the query optimizer from using an index to perform a search. In addition,
expressions that include a function on a column, or expressions that have the same column on both
sides of the operator, are not sargable.
Convert multiple OR clauses to UNION ALL.
5.
Use equijoins. It is better if you use with indexed column joins. For maximum performance when
joining two or more tables, the indexes on the columns to be joined should have the same data type.
6.
Avoid a full-table scan if it is more efficient to get the required rows through an index. It decides full
table scan if it has to read more than 5% of the table data (for large tables).
7.
Avoid using an index that fetches 10,000 rows from the driving table if you could instead use another
index that fetches 100 rows and choose selective indexes.
8.
Indexes can't be used when Oracle is forced to perform implicit datatype conversion.
9.
Choose the join order so you will join fewer rows to tables later in the join order:
- use the smaller table as the driving table
- have the first join discard the most rows
10.
Set up the driving table to be the one containing the filter condition that eliminates the highest
percentage of the table.
11.
In a where clause (or having clause), constants or bind variables should always be on the right hand
side of the operator.
12.
Do not use SQL functions in predicate clauses or WHERE clauses or on indexed columns, (e.g.
concatenation, substr, decode, rtrim, ltrim etc.) as this prevents the use of the index. Use function
based indexes where possible
If you want the index used, don't perform an operation on the field.
Replace
SELECT * FROM EMPLOYEE WHERE SALARY + 1000 = :NEWSALARY
With
SELECT * FROM EMPLOYEE WHERE SALARY = :NEWSALARY - 1000
14.
All SQL statements will be in mixed upper and lower case: all reserved words will be capitalized and all user-supplied objects will be in lower case. (Standard)
15.
16.
Replace
SELECT * FROM A WHERE A.CITY IN (SELECT B.CITY FROM B)
With
SELECT A.* FROM A, B WHERE A.CITY = B.CITY
17.
Replace Outer Join with Union if both join columns have a unique index:
Replace
SELECT A.CITY, B.CITY FROM A, B WHERE A.STATE=B.STATE (+)
With
SELECT A.CITY, B.CITY FROM A, B
WHERE A.STATE=B.STATE
UNION
SELECT NULL, B.CITY FROM B WHERE NOT EXISTS
(SELECT 'X' FROM A WHERE A.STATE=B.STATE)
18.
Use bind variables in queries passed from the application (PL/SQL) so that the same query can
be reused. This avoids parsing.
19.
Use Parallel Query and Parallel DML if your system has more than 1 CPU.
20.
Match SQL where possible. Applications should use the same SQL statements wherever
possible to take advantage of Oracle's Shared SQL Area. The SQL must match exactly to take
advantage of this.
21.
No matter how many indexes are created, how much optimization is done to queries or how
many caches and buffers are tweaked and tuned if the design of a database is faulty, the performance
of the overall system suffers. A good application starts with a good design.
22.
The order in which the conditions are given in the WHERE clause is also very important while performing a SELECT query. The performance difference goes unnoticed unless the query is run on a massive database.
For example, for a select statement,
SELECT Emp_id FROM Emp_table WHERE Last_Name = 'Smith' AND Middle_Initial = 'K' AND Gender = 'Female';
The lookup for matches in the table is performed by taking the conditions in the WHERE clause in reverse order, i.e. first all the rows that match the criterion Gender = 'Female' are returned, and in these returned rows the condition Last_Name = 'Smith' is looked up.
Therefore, the order of the conditions in the WHERE clause should be such that the last condition yields the minimum collection of potential match rows, and each preceding condition passes on even fewer, and so on. So, if we fine-tune the above query, it should look like:
SELECT Emp_id FROM Emp_table WHERE Gender = 'Female' AND Middle_Initial = 'K' AND Last_Name = 'Smith';
as Last_Name = 'Smith' would return far fewer rows than Gender = 'Female' did in the former case.
66. What is the difference between varray and nested table? Can you explain it in brief and give a small and sweet example of both?
Varrays and nested tables both belong to collections. The main difference is that a varray has an upper bound, whereas a nested table does not: its size is unconstrained, like any other database table. A nested table can be stored in the database.
Syntax of a nested table:
TYPE nes_tabtype IS TABLE OF emp.empno%type;
nes_tab nes_tabtype;
Syntax of a varray:
TYPE List_ints_t IS VARRAY(8) OF NUMBER(2);
aList List_ints_t := List_ints_t(2,3,5,1,5,4);
A nested table can be indexed whereas a varray can't.
69. What is PL/SQL tables? Is a cursor variable stored in a PL/SQL table?
A PL/SQL table is a temporary table which is used to store records temporarily in a PL/SQL block; whenever the block completes execution, the table is also finished.
71. What is the DATATYPE of the PRIMARY KEY (index) of a PL/SQL table?
Binary integer.
72.What is the difference between User-level, Statement-level and System-level Rollback? Can you please give an example of each?
1. System level or transaction level: roll back the current transaction entirely on errors. This was the only behavior of old drivers, because PG had no savepoint functionality until 8.0.
2. Statement level: roll back the current (ODBC) statement on errors (in the case of 8.0 or later version servers). The driver calls a SAVEPOINT command just before starting each (ODBC) statement and automatically ROLLBACKs to the savepoint on errors, or RELEASEs it on success. If you expect Oracle-like automatic per-statement rollback, please use this level.
3. User level: you can (have to) call SAVEPOINT commands and roll back to a savepoint on errors by yourself. Please note you have to roll back the current transaction, or ROLLBACK to a savepoint, on errors (by yourself) to continue the application.
74. Details about FORCE VIEW: why and where can we use it?
Generally we are not supposed to create a view without a base table. If you want to create a view without a base table, that is called a force view or invalid view.
Syntax: CREATE FORCE VIEW <view_name> AS <select statement>;
That view will be created with the message "View created with compilation errors".
Once you create the base table, that invalid view will become a valid one.
75. 1) Why is it recommended to use IN OUT instead of OUT parameter type in a procedure?
2) What happens if we do not assign anything to an OUT parameter in a procedure?
An OUT parameter is useful for returning a value from a subprogram; the value can be assigned only once and this variable cannot be assigned to another variable. An IN OUT parameter can be used to pass a value to the subprogram as well as to return a value to the caller; it acts as an explicitly declared variable, so it can be assigned a value and its value can be assigned to another variable. So IN OUT can be more useful than OUT.
1) The choice between IN OUT and OUT depends on the program's needs: if you want to retain the value being passed in, use separate IN and OUT parameters; otherwise you can go for IN OUT. 2) If nothing is assigned to an OUT parameter in a procedure, NULL is returned for that parameter.
78. What is an autonomous Transaction? Where are they used?
An autonomous transaction is a transaction which acts independently from the calling part and can commit the work it has done.
Example: using PRAGMA AUTONOMOUS_TRANSACTION in case a mutating-table problem happens in a trigger.
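A sketch of the common logging pattern (the event_log table and procedure name are assumed): the inner procedure commits its own transaction without committing the caller's work.

```sql
create or replace procedure log_event (p_msg in varchar2) is
  -- This procedure runs in its own, independent transaction.
  pragma autonomous_transaction;
begin
  insert into event_log (logged_at, msg) values (sysdate, p_msg);
  commit;  -- commits only the log row, not the caller's changes
end log_event;
/
-- A trigger may now call log_event even though triggers
-- themselves cannot issue COMMIT directly.
```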
79. How can I speed up the execution of a query when the number of rows in the tables has increased?
Standard practice is:
1. Index the columns (primary key).
2. Use the indexed / primary key columns in the where clause.
3. Check the explain plan for the query and avoid nested loops / full table scans (depending on the size of the data retrieved and/or a master table with few rows).
80. What is the purpose of FORCE while creating a VIEW?
Usually views can be created only if the base table exists. The purpose of the FORCE keyword is to create a view even if the underlying base table does not exist.
ex: create or replace FORCE view <viewname> as <query>
While using the above syntax to create a view, the table used in the query statement does not necessarily have to exist in the database.
83. What is Mutation of a trigger? Why and when does it occur?
A table is said to be a mutating table under the following three circumstances:
1) When you try to delete, update or insert into a table through a trigger and at the same time you are trying to select from the same table.
2) The same applies for a view.
3) Apart from that, if you are deleting (delete cascade), updating or inserting on the parent table and doing a select on the child table.
All these happen only in a row-level trigger.
90. How to handle exceptions in bulk collect?
During a bulk operation you can save the exceptions and then process them afterwards.
Look at the below given example:
DECLARE
  TYPE NumList IS TABLE OF NUMBER;
  num_tab NumList := NumList(10,0,11,12,30,0,20,199,2,0,9,1);
  errors NUMBER;
BEGIN
  FORALL i IN num_tab.FIRST..num_tab.LAST SAVE EXCEPTIONS
    DELETE FROM emp WHERE sal > 500000/num_tab(i);
EXCEPTION WHEN OTHERS THEN
  -- this is not in the doco, thanks to JL for pointing this out
  errors := SQL%BULK_EXCEPTIONS.COUNT;
  dbms_output.put_line('Number of errors is ' || errors);
  FOR i IN 1..errors LOOP
    dbms_output.put_line('Iteration ' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
      ' Error code is ' || SQL%BULK_EXCEPTIONS(i).ERROR_CODE);
  END LOOP;
END;
91.#1 What are the advantages and disadvantages of using PL/SQL or JAVA as the primary programming
tool
for
database
automation.
#2 Will JAVA replace PL/SQL?
Internally the Oracle database supports two procedural languages, namely PL/SQL and Java. This
leads to questions like "Which of the two is the best?" and "Will Oracle ever desupport PL/SQL in
favour of Java?".
Many Oracle applications are based on PL/SQL and it would be difficult for Oracle to ever desupport
PL/SQL. In fact, all indications are that PL/SQL still has a bright future ahead of it. Many
enhancements are still being made to PL/SQL. For example, Oracle 9iDB supports native compilation
of Pl/SQL code to binaries.
PL/SQL and Java appeal to different people in different job roles. The following table briefly describes
the difference between these two language environments:
PL/SQL:
Data centric and tightly integrated into the database
Proprietary to Oracle and difficult to port to other database systems
Data manipulation is slightly faster in PL/SQL than in Java
Easier to use than Java (depending on your background)
Java:
Open standard, not proprietary to Oracle
Incurs some data conversion overhead between the Database and Java type systems
Java is more difficult to use (depending on your background)
110.
1. What is bulk collect?
2. What is an instead of trigger?
3. What is the difference between an Oracle table & a PL/SQL table?
4. What are built-in packages in Oracle?
5. What is the difference between row migration & row chaining?
1. What is bulk collect?
Bulk collect is part of PL/SQL collections, where data is fetched into a collection variable in one shot.
example:
declare
  type sal_rec is table of number;
  v_sal sal_rec;
begin
  select sal bulk collect into v_sal from emp;
  for r in 1..v_sal.count loop
    dbms_output.put_line(v_sal(r));
  end loop;
end;
2. What is an instead of trigger?
INSTEAD OF triggers are used for views.
3. What is the difference between an Oracle table & a PL/SQL table?
A table is a logical entity which holds data in data files permanently, whereas the scope of a PL/SQL table is limited to the particular block/procedure. Referring to the example above, the sal_rec table holds data only until the program reaches its end.
4. What are built-in packages in Oracle?
There are more than 1000 Oracle built-in packages, like dbms_output, dbms_utility, dbms_pipe, ...
5. What is the difference between row migration & row chaining?
Migration: data is stored in blocks which use, say, PCTFREE 40% and PCTUSED 60% (normally). The 40% space is reserved for update and delete statements. A condition may arise where an update/delete statement needs more than PCTFREE, so the row takes space from another block; this is called migration.
Row chaining: while inserting data, if the data of one row takes more than one block, the row is stored in two blocks and the pieces are chained.
Instead of triggers: they provide a transparent way of modifying views that can't be modified directly through SQL DML statements.
111. Can anyone tell me the difference between instead of trigger, database trigger, and schema trigger?
INSTEAD OF triggers control operations on a view, not a table. They can be used to make non-updateable views updateable and to override the behavior of views that are updateable.
Database triggers fire whenever the database starts up or is shut down, whenever a user logs on or logs off, and whenever an Oracle error occurs. These triggers provide a means of tracking activity in the database.
If we have created a view that is based on a join condition, it is not possible to apply DML operations like insert, update and delete on that view. So what we can do is create an INSTEAD OF trigger and perform the DML operations on the view.
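A minimal sketch of the join-view case described above (all table, view and column names are assumed for illustration):

```sql
create or replace view emp_dept_v as
  select e.empno, e.ename, d.dname
    from emp e join dept d on e.deptno = d.deptno;

-- The trigger intercepts the DML and routes it to the base table.
create or replace trigger emp_dept_v_upd
instead of update on emp_dept_v
for each row
begin
  update emp set ename = :new.ename where empno = :old.empno;
end;
/
```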
131. HI, What is Flashback query in Oracle9i...?
Flashback is used to take your database to an old state, like a system restore in Windows. No DDL and DML is allowed while the session is in flashback mode. The user should have execute permission on the dbms_flashback package.
For example, at 10:30 am from the scott user:
delete from emp;
commit;
At 10:40 am I want all my data back in the emp table. Then:
declare
  cursor c1 is select * from emp;
  emp_cur emp%rowtype;
begin
  dbms_flashback.enable_at_time(sysdate - 15/1440);
  open c1;
  dbms_flashback.disable;
  loop
    fetch c1 into emp_cur;
    exit when c1%notfound;
    insert into emp values (emp_cur.empno, emp_cur.ename, emp_cur.job,
      emp_cur.mgr, emp_cur.hiredate, emp_cur.sal, emp_cur.comm, emp_cur.deptno);
  end loop;
  commit;
end;
/
select * from emp;
14 rows selected
132. what is the difference between database server and data dictionary
The database server is the collection of all objects of Oracle; the data dictionary contains the information about all those objects, like when they were created, who created them, etc.
The database server is the server on which the Oracle instance runs, whereas the data dictionary is the collection of information about all those objects, like tables, indexes, views and triggers, in a database.
134. Mention the differences between aggregate functions and analytical functions clearly with examples?
Aggregate functions are sum(), count(), avg(), max(), min(), like:
select sum(sal), count(*), avg(sal), max(sal), min(sal) from emp;
Analytic functions differ from aggregate functions in that an aggregate returns one row per group, while an analytic function returns a value for every row, computed over a window of rows. Some examples:
SELECT ename "Ename", deptno "Deptno", sal "Sal",
       SUM(sal) OVER (ORDER BY deptno, ename) "Running Total",
       SUM(sal) OVER (PARTITION BY deptno ORDER BY ename) "Dept Total",
       ROW_NUMBER() OVER (PARTITION BY deptno ORDER BY ename) "Seq"
FROM emp
ORDER BY deptno, ename;
SELECT *
FROM (SELECT deptno, ename, sal,
             ROW_NUMBER() OVER (PARTITION BY deptno ORDER BY sal DESC) Top3
      FROM emp)
WHERE Top3 <= 3;
136. What are the advantages & disadvantages of packages ?
Advantages: modularity, easier application design, information hiding, added functionality, better performance.
Disadvantages of packages: more memory may be required on the Oracle database server when using PL/SQL packages, as the whole package is loaded into memory as soon as any object in the package is accessed.
Also, updating one of the functions/procedures invalidates other objects which use different functions/procedures, since the whole package needs to be recompiled.
We can't pass parameters to packages themselves (only to their subprograms).
137. What is a NOCOPY parameter? Where it is used?
NOCOPY Parameter Option
Prior to Oracle 8i there were three types of parameter-passing options to procedures and functions: IN, OUT and IN OUT. With the new NOCOPY option, OUT and IN OUT parameters are passed by reference, which avoids copy overhead. However, no copy of the parameter set is created, so in case of an exception a rollback of the parameter values cannot be performed and the original values of the parameters cannot be restored.
Here is an example of using the NOCOPY parameter option:
TYPE Note IS RECORD(
  Title VARCHAR2(15),
  Created_By VARCHAR2(20),
  Created_When DATE,
  Memo VARCHAR2(2000));
TYPE Notebook IS VARRAY(2000) OF Note;
CREATE OR REPLACE PROCEDURE Update_Notes(
  Customer_Notes IN OUT NOCOPY Notebook) IS
BEGIN
  ...
END;
NOCOPY is a hint given to the compiler, indicating that the parameter is passed as a reference, and hence the actual value should not be copied into the block and back out again. The processing is done accessing data from the original variable. (Otherwise, Oracle copies the data from the parameter variable into the block and then copies it back to the variable after processing. This puts an extra burden on the server if the parameters are large collections.)
For a better understanding of the NOCOPY parameter, run the following code and see the result:
DECLARE
  n NUMBER := 10;
  PROCEDURE do_something (
    n1 IN NUMBER,
    n2 IN OUT NUMBER,
    n3 IN OUT NOCOPY NUMBER) IS
  BEGIN
    n2 := 20;
    DBMS_OUTPUT.PUT_LINE(n1);  -- prints 10
    n3 := 30;
    DBMS_OUTPUT.PUT_LINE(n1);  -- prints 30
  END;
BEGIN
  do_something(n, n, n);
  DBMS_OUTPUT.PUT_LINE(n);  -- prints 20
END;
138. How to get the 25th row of a table.
select * from emp where rownum <= 25
minus
select * from emp where rownum <= 24;
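Using the ROW_NUMBER() analytic shown earlier in this document, the same idea can be sketched as follows (the ordering column is an assumption; ROWNUM alone has no guaranteed order):

```sql
SELECT empno, ename
FROM (SELECT empno, ename,
             ROW_NUMBER() OVER (ORDER BY empno) rn
      FROM emp)
WHERE rn = 25;
```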
142. How can I see the time of execution of a SQL statement?
In SQL*Plus:
SQL> SET TIMING ON
(SET TIME ON merely displays the current clock time in the prompt; SET TIMING ON displays the
elapsed time of each statement.)
144. What happens when COMMIT is given in the executable section and an error occurs?
Everything committed before the error stays committed, because the COMMIT ends that
transaction. Changes made after the COMMIT and before the exception are rolled back if the
exception goes unhandled.
145. Is a cursor a pointer or a reference?
A cursor is basically a pointer: it is like an address of the work area that holds the results
of the SQL query, and that memory is freed after the values have been fetched from it.
146. What will happen to an anonymous block if there is no statement inside the block? e.g. declare begin end
We cannot have declare begin end; there must be at least one statement between the BEGIN and
END keywords, otherwise a compilation error is raised.
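A minimal legal block uses NULL; as a do-nothing placeholder statement:

```sql
BEGIN
   NULL;  -- does nothing, but satisfies the at-least-one-statement rule
END;
/
```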
147. Can we have the same trigger with different names for a table? e.g.
create trigger trig1 after insert on tab1;
and
create trigger trig2 after insert on tab1;
If yes, which trigger executes first?
The triggers are fired on the basis of the timestamp of their creation in the data dictionary.
The trigger with the latest timestamp is fired last.
148.creating a table, what is the difference between VARCHAR2(80) and VARCHAR2(80 BYTE)?
Historically database columns which hold alphanumeric data have been defined using the number of
bytes they store. This approach was fine as the number of bytes equated to the number of
characters when using single-byte character sets. With the increasing use of multibyte character
sets to support globalized databases comes the problem of bytes no longer equating to
characters.Suppose we had a requirement for a table with an id and description column, where the
description must hold up to a maximum of 20 characters.We then decide to make a multilingual
version of our application and use the same table definition in a new instance with a multibyte
character set. Everything works fine until we try to fill the column with 20 two-byte characters. All of a
sudden the column is trying to store twice the data it was before and we have a problem.Oracle9i
has solved this problem with the introduction of character and byte length semantics. When defining
an alphanumeric column it is now possible to specify the length in 3 different ways: 1.
VARCHAR2(20) 2. VARCHAR2(20 BYTE) 3. VARCHAR2(20 CHAR)Option 1 uses the default
length semantics defined by the NLS_LENGTH_SEMANTICS parameter which defaults to BYTE.
Option 2 allows only the specified number of bytes to be stored in the column, regardless of how
many characters this represents. Option 3 allows the specified number of characters to be stored in
the column regardless of the number of bytes this equates to.
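The three declarations can be sketched side by side (hypothetical table name t):

```sql
CREATE TABLE t (
   a VARCHAR2(20),       -- default semantics, per NLS_LENGTH_SEMANTICS (BYTE by default)
   b VARCHAR2(20 BYTE),  -- at most 20 bytes, however many characters that is
   c VARCHAR2(20 CHAR)   -- at most 20 characters, however many bytes they take
);
```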
151. How to insert a music file into the database?
LOB datatypes can be used to store blocks of unstructured data like graphic images, video,
audio, etc.
152. What is the difference between strong and weak REF CURSORs?
A strong REF CURSOR type definition specifies a return type, but a weak definition does not.
DECLARE
   TYPE EmpCurTyp IS REF CURSOR RETURN emp%ROWTYPE;  -- strong
   TYPE GenericCurTyp IS REF CURSOR;                 -- weak
In a strong cursor the structure is predetermined, so we cannot use a query whose structure
differs from emp%ROWTYPE. In a weak cursor the structure is not predetermined, so we can open
it for a query of any structure. The strong REF CURSOR type is less error prone, because Oracle
already knows what type you are going to return, as compared to the weak type.
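A short sketch of reusing one weak REF CURSOR variable for queries of different shapes (assuming the standard emp and dept demo tables):

```sql
DECLARE
   TYPE GenericCurTyp IS REF CURSOR;  -- weak: no fixed return type
   c GenericCurTyp;
BEGIN
   OPEN c FOR SELECT * FROM emp;      -- one row structure
   CLOSE c;
   OPEN c FOR SELECT * FROM dept;     -- reused with a different structure
   CLOSE c;
END;
/
```

A strong REF CURSOR declared with RETURN emp%ROWTYPE would reject the second OPEN at compile time.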
154. Explain: is it possible to have the same name for a package and a procedure in that package?
Yes, it is possible for a procedure in a package to have the same name as the package itself.
159. Without closing a cursor, if you open it again, what will happen? If it is an error, which error?
If you reopen a cursor without closing it first, PL/SQL raises the predefined exception
CURSOR_ALREADY_OPEN.
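A defensive sketch that guards the OPEN with the %ISOPEN attribute (hypothetical cursor emp_cur over the standard emp table):

```sql
DECLARE
   CURSOR emp_cur IS SELECT * FROM emp;
BEGIN
   OPEN emp_cur;
   IF NOT emp_cur%ISOPEN THEN
      OPEN emp_cur;   -- skipped here; an unguarded second OPEN would raise CURSOR_ALREADY_OPEN
   END IF;
   CLOSE emp_cur;
END;
/
```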
161. What is PRAGMA RESTRICT_REFERENCES?
By using PRAGMA RESTRICT_REFERENCES we can assert purity levels for functions, such as
WNDS (writes no database state), RNDS (reads no database state), WNPS (writes no package
state), and RNPS (reads no package state).
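A sketch of asserting purity on a packaged function (the package name pkg and function f are hypothetical):

```sql
CREATE OR REPLACE PACKAGE pkg IS
   FUNCTION f (p NUMBER) RETURN NUMBER;
   -- assert that f neither writes database state nor writes package state
   PRAGMA RESTRICT_REFERENCES (f, WNDS, WNPS);
END pkg;
/
```

If the body of f later violates an asserted purity level, the package body fails to compile.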
164. What is the difference between PL/SQL tables and arrays?
An array is a set of values of the same datatype with a fixed upper bound. A PL/SQL (index-by)
table has no declared upper limit, can be sparse, and its elements can be records holding
fields of different datatypes.
168. How do you set a table for read-only access?
Use SELECT ... FOR UPDATE: while you hold the lock, nobody else can update or delete the rows
you selected, because Oracle locks those rows until you commit or roll back.
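Two hedged sketches (assuming the standard emp demo table): row locking with FOR UPDATE, and, from Oracle 11g onward, making the whole table read-only:

```sql
-- Lock the selected rows until COMMIT or ROLLBACK
SELECT * FROM emp WHERE deptno = 10 FOR UPDATE;

-- 11g and later: reject all DML on the table until it is switched back
ALTER TABLE emp READ ONLY;
ALTER TABLE emp READ WRITE;
```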
This query will return the first row for each unique id in the table. This query could be used as part of
a delete statement to remove duplicates if needed.
SELECT id
  FROM func t1
 WHERE ROWID = (SELECT MIN (ROWID)
                  FROM func
                 WHERE id = t1.id);
Also: you can use GROUP BY without a summary function:
SELECT id
  FROM func t1
GROUP BY id
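The MIN(ROWID) idea can be turned into the delete mentioned above (a sketch against the same func table):

```sql
-- Keep the row with the smallest ROWID per id; delete the rest
DELETE FROM func t1
 WHERE ROWID > (SELECT MIN(ROWID)
                  FROM func t2
                 WHERE t2.id = t1.id);
```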
173. Why do we use an INSTEAD OF trigger? What is the basic structure of an INSTEAD OF trigger? Explain with a specific example.
Conceptually, INSTEAD OF triggers are very simple. You write code that the Oracle server will execute
when a program performs a DML operation on the view. Unlike a conventional BEFORE or AFTER
trigger, an INSTEAD OF trigger takes the place of, rather than supplements, Oracle's usual DML
behavior. (And in case you're wondering, you cannot use BEFORE/AFTER triggers on any type of view,
even
if
you
have
defined
an
INSTEAD
OF
trigger
on
the
view.)
CREATE OR REPLACE TRIGGER images_v_insert
INSTEAD OF INSERT ON images_v
FOR EACH ROW
BEGIN
   /* This will fail with DUP_VAL_ON_INDEX if the images table
   || already contains a record with the new image_id.
   */
   INSERT INTO images
   VALUES (:NEW.image_id, :NEW.file_name, :NEW.file_type, :NEW.bytes);

   IF :NEW.keywords IS NOT NULL THEN
      DECLARE
         /* Note: an apparent bug prevents use of :NEW.keywords.LAST.
         || The workaround is to store :NEW.keywords as a local
         || variable (in this case keywords_holder.)
         */
         keywords_holder Keyword_tab_t := :NEW.keywords;
      BEGIN
         FOR the_keyword IN 1..keywords_holder.LAST
         LOOP
            INSERT INTO keywords
            VALUES (:NEW.image_id, keywords_holder(the_keyword));
         END LOOP;
      END;
   END IF;
END;
Once we've created this INSTEAD OF trigger, we can insert a record into this object view (and hence
into both underlying tables) quite easily using:
INSERT INTO images_v
VALUES (Image_t(41265, 'pigpic.jpg', 'JPG', 824,
        Keyword_tab_t('PIG', 'BOVINE', 'FARM ANIMAL')));
This statement causes the INSTEAD OF trigger to fire, and as long as the primary key value (image_id
= 41265) does not already exist, the trigger will insert the data into the appropriate tables.
Similarly, we can write additional triggers that handle updates and deletes. These triggers use the
predictable clauses INSTEAD OF UPDATE and INSTEAD OF DELETE.
180. What is the difference between a database trigger and an application trigger?
Database triggers are back-end triggers and fire when an event occurs at the database level
(e.g. INSERT, UPDATE, DELETE), whereas application triggers are front-end triggers and fire
when an event is taken at the application level (e.g. Button Pressed, New Form Instance).
185. Compare EXISTS and IN usage with advantages and disadvantages.
EXISTS is usually faster than IN. EXISTS only checks for the existence of matching records
(True/False) and can stop at the first match, whereas with IN the whole subquery result set is
built and each and every record is checked. Performance-wise, use EXISTS whenever possible.
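The two forms side by side (a sketch on the standard emp/dept demo tables; the optimizer may rewrite either form):

```sql
-- IN: the subquery result set is produced and then compared
SELECT * FROM emp
 WHERE deptno IN (SELECT deptno FROM dept WHERE loc = 'DALLAS');

-- EXISTS: probes per outer row and stops at the first match
SELECT * FROM emp e
 WHERE EXISTS (SELECT 1 FROM dept d
                WHERE d.deptno = e.deptno
                  AND d.loc = 'DALLAS');
```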
189. Which type of binding does PL/SQL use?
PL/SQL uses early (static) binding: references are resolved at compile time, which is why DDL
statements cannot be used directly in PL/SQL. Dynamic SQL, which uses late binding, is needed
for that.
191. Why is the DUAL table not visible?
Because it's a dummy table.
How do you identify existing rows of data in the target table using
lookup transformation?
Scenario:
How do you identify existing rows of data in the target table using lookup transformation?
Solution:
There are two ways to lookup the target table to verify a row exists or not:
1. Use a connected dynamic cache lookup and then check the value of the NewLookupRow output port to
decide whether the incoming record already exists in the table / cache or not.
2. Use Unconnected lookup and call it from an expression transformation and check the Lookup
condition port value (Null/ Not Null) to decide whether the incoming record already exists in the
table or not.
Posted 19th December 2011 by Prafull Dangore
Solution:
While running a Workflow, the PowerCenter Server uses the Load Manager process and the
Data Transformation Manager Process (DTM) to run the workflow and carry out workflow tasks.
When the PowerCenter Server runs a workflow,
The Load Manager performs the following tasks:
1. Locks the workflow and reads workflow properties.
2. Reads the parameter file and expands workflow variables.
3. Creates the workflow log file.
4. Runs workflow tasks.
5. Distributes sessions to worker servers.
6. Starts the DTM to run sessions.
7. Runs sessions from master servers.
8. Sends post-session email if the DTM terminates abnormally.
When the PowerCenter Server runs a session, the DTM performs the following tasks:
1. Fetches session and mapping metadata from the repository.
2. Creates and expands session variables.
3. Creates the session log file.
4. Validates session code pages if data code page validation is enabled. Checks query
Conversions if data code page validation is disabled.
5. Verifies connection object permissions.
6. Runs pre-session shell commands.
7. Runs pre-session stored procedures and SQL.
8. Creates and runs mapping, reader, writer, and transformation threads to extract, transform,
and load data.
9. Runs post-session stored procedures and SQL.
10. Runs post-session shell commands.
11. Sends post-session email.
Solution:
1. When our data comes through an Update Strategy transformation, in other words after an Update
Strategy, we cannot add a Joiner transformation.
2. We cannot connect a Sequence Generator transformation directly before the Joiner
transformation.
The Joiner transformation does not match null values. For example if both EMP_ID1 and EMP_ID2
from the example above contain a row with a null value the PowerCenter Server does not consider
them a match and does not join the two rows. To join rows with null values you can replace null
input with default values and then join on the default values.
If you use a Filter transformation in the mapping place the transformation before the Aggregator
transformation to reduce unnecessary aggregation.
How can you recognize whether or not the newly added rows in the
source get inserted in the target?
Scenario:
How can you recognize whether or not the newly added rows in the source get inserted in the
target?
Solution:
In a Type 2 mapping we have three options to recognize the newly added rows:
Version number
Flag value
Effective date range
If it is a Type 2 dimension the above answer is fine, but if you want to get the info of all the insert
statements and updates you need to use the session log file, configured to verbose tracing.
You will get the complete set of data showing which record was inserted and which was not.
What is update strategy and what are the options for update strategy?
Scenario:
What is update strategy and what are the options for update strategy?
Solution:
Informatica processes the source data row-by-row. By default every row is marked to be
inserted in the target table. If the row has to be updated/inserted based on some logic Update
Strategy transformation is used. The condition can be specified in Update Strategy to mark the
processed row for update or insert.
Solution:
Shortcuts
We can create shortcuts to objects in shared folders. Shortcuts provide the easiest way to reuse
objects. We use a shortcut as if it were the actual object, and when we make a change to the original
object, all shortcuts inherit the change.
Shortcuts to folders in the same repository are known as local shortcuts. Shortcuts to the global
repository are called global shortcuts.
We use the Designer to create shortcuts.
Sessions and Batches
Sessions and batches store information about how and when the Informatica Server moves data
through mappings. You create a session for each mapping you want to run. You can group several
sessions together in a batch. Use the Server Manager to create sessions and batches.
Mapplets
You can design a mapplet to contain sets of transformation logic to be reused in multiple mappings
within a folder, a repository, or a domain. Rather than recreate the same set of transformations
each time, you can create a mapplet containing the transformations, then add instances of the
mapplet to individual mappings. Use the Mapplet Designer tool in the Designer to create mapplets.
Mappings
A mapping specifies how to move and transform data from sources to targets. Mappings include
source and target definitions and transformations. Transformations describe how the Informatica
Server transforms data. Mappings can also include shortcuts, reusable transformations, and
mapplets. Use the Mapping Designer tool in the Designer to create mappings.
session
A session is a set of instructions to move data from sources to targets.
workflow
A workflow is a set of instructions that tells the Informatica server how to execute the tasks.
Worklet
Worklet is an object that represents a set of tasks.
The PowerCenter domain is the fundamental administrative unit in PowerCenter. The domain supports the
administration of the distributed services. A domain is a collection of nodes and services that you can group
in folders based on administration ownership.
A node is the logical representation of a machine in a domain. One node in the domain acts as a gateway to
receive service requests from clients and route them to the appropriate service and node. Services and
processes run on nodes in a domain. The availability of a service or process on a node depends on how you
configure the service and the node.
Services for the domain include the Service Manager and a set of application services:
Service Manager. A service that manages all domain operations. It runs the application services and performs
domain functions on each node in the domain. Some domain functions include authentication, authorization,
and logging. For more information about the Service Manager, see Service Manager.
Application services. Services that represent PowerCenter server-based functionality, such as the Repository
Service and the Integration Service. The application services that run on a node depend on the way you
configure the services.
Solution:
A Star Schema is composed of 2 kinds of tables, one Fact Table and multiple Dimension Tables.
It is called a star schema because the entity-relationship diagram between dimensions
and fact tables resembles a star where one fact table is connected to multiple dimensions. The
center of the star schema consists of a large fact table and it points towards the dimension tables.
The advantage of star schema are slicing down, performance increase and easy understanding of
data.
Fact Table contains the actual transactions or values that are being analyzed.
Dimension Tables contain descriptive information about those transactions or values.
In Star Schemas, Dimension Tables are denormalized tables and Fact Tables are highly
normalized.
Star Schema
Star Schema is preferable because fewer joins result in better performance.
Because Dimension Tables are denormalized, there is no need to perform joins all the time.
Steps in designing Star Schema
Identify a business process for analysis(like sales).
Identify measures or facts (sales dollar).
Identify dimensions for facts(product dimension, location dimension, time dimension, organization
dimension).
List the columns that describe each dimension.(region name, branch name, region name).
Determine the lowest level of summary in a fact table(sales dollar).
Important aspects of Star Schema & Snow Flake Schema
In a star schema every dimension will have a primary key.
In a star schema, a dimension table will not have any parent table.
Whereas in a snow flake schema, a dimension table will have one or more parent tables.
Hierarchies for the dimensions are stored in the dimensional table itself in a star schema,
whereas hierarchies are broken into separate tables in a snowflake schema. These hierarchies help to
drill down the data from the topmost level to the lowermost levels.
2.
3. The ____ manages the connections to the repository from the Informatica client application
a. Repository Server
b. Informatica Server
c. Informatica Repository Manager
d. Both a & b
4. During development phase, it's best to use what type of tracing levels to debug errors
a. Terse tracing
b. Verbose tracing
c. Verbose data tracing
d. Normal tracing
5.
6. There is a requirement to concatenate the first name and last name from a flat file and use this
concatenated value at 2 locations in the target table. The best way to achieve this functionality is by
using the
a. Expression transformation
b. Filter transformation
c. Aggregator transformation
d. Using the character transformation
7. The workflow monitor does not allow the user to edit workflows.
a. True
b. False
8. There is a requirement to increment a batch number by one for every 5000 records that are
loaded. The best way to achieve this is:
a. Use Mapping parameter in the session
b. Use Mapping variable in the session
c. Store the batch information in the workflow manager
d. Write code in a transformation to update values as required
9. There is a requirement to reuse some complex logic across 3 mappings. The best way to achieve
this is:
a. Create a mapplet to encapsulate the reusable functionality and call this in the 3 mappings
b. Create a worklet and reuse this at the session level during execution of the mapping
c. Cut and paste the code across the 3 mappings
d. Keep this functionality as a separate mapping and call this mapping in 3 different mappings; this
would make the code modular and reusable
10. You imported a delimited flat file ABC.TXT from you workstation into the Source qualifier in
Informatica client. You then proceeded with developing a mapping and validated it for correctness
using the Validate function. You then set it up for execution in the workflow manager. When you
execute the mapping, you get an error stating that the file was not found. The most probable cause
of this error is:
a. Your mapping is not correct and the file is not being parsed correctly by the source qualifier
b. The file cannot be loaded from your workstation, it has to be on the server
c. Informatica did not have access to the NT directory on your workstation where the file is stored
d. You forgot to mention the location of the file in the workflow properties and hence the error
11. Various administrative functions such as folder creation and user access control are done using:
a. Informatica Administration console
b. Repository Manager
c. Informatica Server
d. Repository Server
12. You created a mapping a few months back which is now invalid because the database schema
underwent updates in the form of new column extensions. In order to fix the problem, you would:
a. Re-import the table definitions from the database
b. Make the updates to the table structure manually in the mapping
c. Informatica detects updates to table structures automatically. All you have to do is click on
Validate option for the mapping
d. None of the above. The mapping has to be scrapped and a new one needs to be created
13. When using the debugger function, you can stop execution at the following:
a. Errors or breakpoints
b. Errors only
c. Breakpoints only
d. First breakpoint after the error occurs
14.
15.
16. There is a requirement to selectively update or insert values in the target table based on the value
of a field in the source table. This can be achieved using:
a. Update Strategy transformation
b. Aggregator transformation
c. Router transformation
d. Use the Expression transformation to write code for this logic
17. A mapping can contain more than one source qualifier, one for each source that is imported.
a. True
b. False
18. To create a valid mapping in Informatica, at a minimum, the following entities are required:
a. Source, Source Qualifier, Transformation, Target
b. Source Qualifier, Transformation, Target
c. Source and Target
d. Source, Transformation, Target
19.
20. When one imports a relational database table using the Source Analyzer, it always creates the
following in the mapping:
a. An instance of the table with a source qualifier with a one to one mapping for each field
b. Source sorter with one to one mapping for each field
c. None of the above
Name:
Score:
Pass / Fail:
Ans:
1. a 2. b 3. a 4. c 5. a 6. a 7. a 8. b 9. a 10. b 11. b 12. a,b 13. a 14. c 15. a 16. a 17. b 18. a 19. a 20. a
Posted 16th December 2011 by Prafull Dangore
Database connection session parameters can be created for all input fields of connection
objects, for example username, password, etc.
Is it possible to have multiple parameters at a time? Yes.
The order of execution is workflow, session, mapping (wf/s/m).
A parameter file supplies the values to session-level variables and mapping-level variables.
Variables are of two types:
Session-level variables
Mapping-level variables
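A minimal parameter file sketch (the folder, workflow, and session names here are hypothetical; $$-prefixed names are mapping-level parameters, single-$ names are session/service variables):

```
[MyFolder.WF:wf_load_sales.ST:s_m_load_sales]
$$LOAD_DATE=2011-12-16
$DBConnection_Src=ORA_SOURCE
$PMSessionLogFile=s_m_load_sales.log
```

The file path is supplied in the session or workflow properties so the Integration Service can expand these values at run time.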
What is a Worklet?
Scenario:
What is a Worklet?
Solution:
A worklet is a set of reusable sessions. We cannot run a worklet without a workflow.
If we want to run 2 workflows one after another:
1. If both workflows exist in the same folder, we can create 2 worklets rather than creating 2 workflows.
2. We can set the dependency between these two workflows using a shell script as one approach;
the other approach is event wait and event raise.
Lookup
A lookup will return either the first record, the last record, any value, or an error value,
depending on how it is configured. A lookup can also be configured to use a persistent cache,
shared cache, uncached, or dynamic cache.
2. Transformation ports
A. Know the rules for linking transformation ports together.
B. Know the rules for using and converting the PowerCenter datatypes.
C. Know what types of transformation ports are supported and the uses for each.
D. Be familiar with the types of data operations that can be performed at the port level.
A. Be familiar with all transformation language functions and key words.
B. Know how the Integration Service evaluates expressions.
C. Be able to predict the output or result of a given expression.
6. Source Qualifier transformation
A. Understand how the Source Qualifier transformation handles datatypes.
B. Know how the default query is generated and the rules for modifying it.
C. Understand how to use the Source Qualifier transformation to perform various types of
joins.
7. Aggregator transformation
A. Know how to use PowerCenter aggregate functions.
B. Understand how to be able to use a variable port in an Aggregator transformation.
C. Be able to predict the output of a given Aggregator transformation.
D. Know the rules associated with defining and using aggregate caches.
B. Be familiar with the properties of the Repository Service and the Integration Service.
C. Know the meaning of the terms used to describe development and maintenance
operations.
D. Know how to work with repository variables.
E. Understand the relationships between all PowerCenter object types.
F. Know which tools are used to create and modify all objects.
3. Repository Service
A. Know how each client and service component communicates with relational databases.
B. Be familiar with the connectivity options that are available for the different tools.
C. Understand how the client and service tools access flat files, COBOL files, and XML
Files.
D. Know the requirements for using various types of ODBC drivers with the client tools.
E. Know the meaning of all database connection properties.
F. Be familiar with the sequence of events involving starting the Repository Service.
G. Know which repository operations can be performed from the command line.
H. Know how local and global repositories interact.
4. Installation
A. Understand the basic procedure for installing the client and service software.
B. Know what non-Informatica hardware and software is required for installation.
C. Be familiar with network related requirements and limitations.
D. Know which components are needed to perform a repository upgrade.
E. Be familiar with the data movement mode options.
5. Security
A. Be familiar with the security permissions for application users.
B. Be familiar with the meaning of the various user types for an Informatica system.
C. Know the basic steps for creating and configuring application users.
D. Understand how user security affects folder operations.
E. Know which passwords and other key information are needed to install and connect new
client software to a service environment.
6. Object sharing
A. Understand the differences between copies and shortcuts.
B. Know which object properties are inherited in shortcuts.
C. Know the rules associated with transferring and sharing objects between folders.
D. Know the rules associated with transferring and sharing objects between repositories.
7. Repository organization and migration
A. Understand the various options for organizing a repository.
B. Be familiar with how a repository stores information about its own properties.
C. Be familiar with metadata extensions.
D. Know the capabilities and limitations of folders and other repository objects.
E. Know what type of information is stored in the repository.
8. Database connections
A. Understand the purpose and relationships between the different types of code pages.
B. Know the differences between using native and ODBC database connections in the
Integration Service.
C. Understand how and why the client tools use database connectivity.
D. Know the differences between client and service connectivity.
A. Know how to abort or stop a workflow, session, or task.
B. Know how to work with workflow and session log files.
C. Understand how to work with reject files.
D. Know how to use the Workflow Monitor to quickly determine the status of any workflow or task.
5. User-defined functions
A. Know how to create user-defined functions.
B. Know the scope of user-defined functions.
C.Know how to use and manage user-defined functions.
D. Understand the different properties for user-defined functions.
E. Know how to create expressions with user-defined functions.
6. Normalizer transformation
A. Be familiar with the possible uses of the Normalizer transformation.
B. Understand how to read a COBOL data source in a mapping.
C. Be familiar with the rules regarding reusable Normalizer transformations.
D. Know how the OCCURS and REDEFINES COBOL keywords affect the
Normalizer transformation.
Identifying Target Bottlenecks
------------------------------
The most common performance bottleneck occurs when the session writes to a target. If the
session performance increases significantly when you write to a flat file target instead, you
have a target bottleneck.
Consider performing the following tasks to increase performance:
* Drop indexes and key constraints.
* Increase checkpoint intervals.
* Use bulk loading.
* Use external loading.
* Increase database network packet size.
* Optimize target databases.
Identifying Source Bottlenecks
------------------------------
If the session reads from a relational source, you can use a filter transformation, a read test
mapping, or a database query to identify source bottlenecks:
* Filter Transformation - measure the time taken to process a given amount of data, then add an always
false filter transformation in the mapping after each source qualifier so that no data is processed past the
filter transformation. You have a source bottleneck if the new session runs in about the same time.
* Read Test Session - compare the time taken to process a given set of data using the session with that
for a session based on a copy of the mapping with all transformations after the source qualifier removed
with the source qualifiers connected to file targets. You have a source bottleneck if the new session runs
in about the same time.
* Extract the query from the session log and run it in a query tool. Measure the time taken to return the
first row and the time to return all rows. If there is a significant difference in time, you can use an
optimizer hint to eliminate the source bottleneck.
Consider performing the following tasks to increase performance:
* Optimize the query.
* Use conditional filters.
* Increase database network packet size.
* Connect to Oracle databases using IPC protocol.
Identifying Mapping Bottlenecks
-------------------------------
If you determine that you do not have a source bottleneck, add an Always False filter transformation in
the mapping before each target definition so that no data is loaded into the target tables. If the time it
takes to run the new session is the same as the original session, you have a mapping bottleneck.
You can also identify mapping bottlenecks by examining performance counters.
Optimizing lookups
Cache lookups if:
o the number of rows in the lookup table is significantly less than the typical number of source rows
o un-cached lookups perform poorly (e.g. they are based on a complex view or an unindexed table)
Optimize cached lookups:
o Use a persistent cache if the lookup data is static
o Share caches if several lookups are based on the same data set
o Reduce the number of cached rows using a SQL override with a restriction
o Index the columns in the lookup ORDER BY
o Reduce the number of co
Informatica OPB table which have gives source table and the
mappings and folders using an sql query
Scenario:
Informatica OPB table which have gives source table and the mappings and folders using an sql
query
Solution:
-
SQL query
select OPB_SUBJECT.SUBJ_NAME,
OPB_MAPPING.MAPPING_NAME,
OPB_SRC.source_name
from opb_mapping, opb_subject, opb_src, opb_widget_inst
where opb_subject.SUBJ_ID = opb_mapping.SUBJECT_ID
and OPB_MAPPING.MAPPING_ID = OPB_WIDGET_INST.MAPPING_ID
and OPB_WIDGET_Inst.WIDGET_ID = OPB_SRC.SRC_ID
and OPB_widget_inst.widget_type=1;
The process of pushing transformation logic to the source or target database by Informatica Integration
service is known as Pushdown Optimization. When a session is configured to run for Pushdown Optimization,
the Integration Service translates the transformation logic into SQL queries and sends the SQL queries to the
database. The Source or Target Database executes the SQL queries to process the transformations.
There is no memory or disk space required to manage the cache in the Informatica server for Aggregator,
Lookup, Sorter and Joiner Transformation, as the transformation logic is pushed to database.
SQL generated by the Informatica Integration Service can be viewed before running the session through
the Pushdown Optimization Viewer, making it easier to debug.
When inserting into targets, the Integration Service does row-by-row processing using bind variables
(each row incurs only a soft parse, so there is processing time but no hard-parse time per row). In
case of Pushdown Optimization, the statement is executed once.
There are cases where the Integration Service and Pushdown Optimization can produce different result sets
for the same transformation logic. This can happen during data type conversion, handling null values, case
sensitivity, sequence generation, and sorting of data.
The database and Integration Service produce different output when the following settings and conversions
are different:
Nulls treated as the highest or lowest value: While sorting the data, the Integration Service can
treat null values as the lowest, but the database may treat null values as the highest value in the sort order.
SYSDATE built-in variable: Built-in Variable SYSDATE in the Integration Service returns the current
date and time for the node running the service process. However, in the database, the SYSDATE returns the
current date and time for the machine hosting the database. If the time zone of the machine hosting the
database is not the same as the time zone of the machine running the Integration Service process, the results
can vary.
Date Conversion: The Integration Service converts all dates before pushing transformations to the
database and if the format is not supported by the database, the session fails.
Logging: When the Integration Service pushes transformation logic to the database, it cannot trace all the
events that occur inside the database server. The statistics the Integration Service can trace depend on the
type of pushdown optimization. When the Integration Service runs a session configured for full pushdown
optimization and an error occurs, the database handles the errors. When the database handles errors, the
Integration Service does not write reject rows to the reject file.
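The null-ordering difference described above can be sketched in a few lines of Python (the sample data and function names are invented for illustration; this is not Informatica code):

```python
# Hypothetical sketch: the same ORDER BY can produce different row orders
# depending on whether NULL sorts lowest (Integration Service) or highest
# (some databases).
def sort_nulls_low(values):
    # NULL (None) treated as the lowest value in the sort order
    return sorted(values, key=lambda v: (v is not None, 0 if v is None else v))

def sort_nulls_high(values):
    # NULL (None) treated as the highest value in the sort order
    return sorted(values, key=lambda v: (v is None, 0 if v is None else v))

rows = [5, None, 2, 9]
low_first = sort_nulls_low(rows)    # None comes first
high_first = sort_nulls_high(rows)  # None comes last
```

The two results differ only in where the null row lands, which is exactly the discrepancy that can appear between a pushed-down sort and an Integration Service sort.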
16.
DEC
14
What are the components of Informatica? And what is the purpose of each?
Ans: Informatica Designer, Server Manager & Repository Manager. The Designer is for creating source & target
definitions, and creating mapplets and mappings etc. The Server Manager is for creating sessions & batches, scheduling the
sessions & batches, monitoring the triggered sessions and batches, giving post- and pre-session commands, creating
database connections to various instances etc. The Repository Manager is for creating and adding repositories, creating &
editing folders within a repository, establishing users, groups, privileges & folder permissions, copying, deleting and backing up a
repository, viewing the history of sessions, viewing the locks on various objects and removing those locks etc.
2. What is a repository?
Ans: It's a location where all the mappings and sessions related information is stored. Basically it's a database where
the metadata resides. We can add a repository through the Repository Manager.
3.
Name at least 5 different types of transformations used in mapping design and state the use of each.
Ans: Source Qualifier - represents all data queried from the source,
Expression - performs simple calculations,
Filter - serves as a conditional filter,
Lookup - looks up values and passes them to other objects,
Aggregator - performs aggregate calculations.
5.
How are the sources and targets definitions imported in informatica designer? How to create Target
definition for flat files?
Ans: When you are in the Source Analyzer there is an option in the main menu to import the source from Database, Flat File,
COBOL File & XML file; by selecting any one of them you can import a source definition. When you are in the Warehouse
Designer there is an option in the main menu to import the target from Database, XML from File and XML from Sources;
you can select any one of these.
There is no way to import a target definition as a file in Informatica Designer. So while creating the target definition for a
file in the Warehouse Designer it is created considering it as a table, and then in the session properties of that
mapping it is specified as a file.
9. Where can the source flat files be kept while running the session?
Ans: The source flat files can be kept in some folder on the Informatica server, or on any other machine which is in its
domain.
13. What are the oracle DML commands possible through an update strategy?
Ans: dd_insert, dd_update, dd_delete & dd_reject.
14. How to update or delete the rows in a target, which do not have key fields?
Ans: To update a table that does not have any keys, we can do a SQL override on the target transformation by
specifying the WHERE conditions explicitly. Delete cannot be done this way; in this case you have to explicitly
mention the key in the target table definition in the Warehouse Designer and delete the
row using an Update Strategy transformation.
15. What is option by which we can run all the sessions in a batch simultaneously?
Ans: In the batch edit box there is an option called concurrent. By checking that all the sessions in that Batch will run
concurrently.
16. Informatica settings are available in which file?
Ans: Informatica settings are available in a file pmdesign.ini in Windows folder.
17. How can we join the records from two heterogeneous sources in a mapping?
Ans: By using a joiner.
18. Difference between Connected & Unconnected look-up.
Ans: An unconnected Lookup transformation exists separate from the pipeline in the mapping. You write an
expression using the :LKP reference qualifier to call the lookup within another transformation. While the connected
lookup forms a part of the whole flow of the mapping.
19. Difference between Lookup Transformation & Unconnected Stored Procedure Transformation Which one
is faster ?
Ans: There is an option to run the stored procedure before starting to load the rows.
A view contains a query; whenever you execute the view, it reads from the base table,
whereas for a materialized view the loading or replication takes place only once, which gives you better query performance.
Materialized views are refreshed 1. on commit or 2. on demand
(complete, never, fast, force).
2. What is a bitmap index and why is it used for DWH?
A bitmap for each key value replaces a list of rowids. Bitmap indexes are more efficient for data warehousing because of
low cardinality and low update rates, and they are very efficient for WHERE clauses.
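As an illustration of the idea only (a toy sketch, not Oracle's actual implementation), the "bitmap per key value" structure and the bitwise AND that answers a multi-column WHERE clause can be shown in Python; the table data and column names are invented:

```python
def build_bitmap_index(rows, col):
    """One integer bitmap per distinct key value; bit i set means rowid i matches."""
    index = {}
    for rowid, row in enumerate(rows):
        index[row[col]] = index.get(row[col], 0) | (1 << rowid)
    return index

# Hypothetical low-cardinality table: (gender, state)
rows = [("M", "NY"), ("F", "CA"), ("M", "CA"), ("F", "NY")]
gender_idx = build_bitmap_index(rows, 0)
state_idx = build_bitmap_index(rows, 1)

# WHERE gender = 'M' AND state = 'CA' becomes a single bitwise AND of bitmaps
match = gender_idx["M"] & state_idx["CA"]
matching_rowids = [i for i in range(len(rows)) if match >> i & 1]
```

This is why low cardinality matters: each distinct value costs one bitmap, so columns with few distinct values index compactly and combine cheaply.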
3. What is a star schema? And what is a snowflake schema?
The center of the star consists of a large fact table and the points of the star are the dimension tables.
Snowflake schemas normalize the dimension tables to eliminate redundancy; that is, the
dimension data is grouped into multiple tables instead of one large table.
A star schema contains denormalized dimension tables and a fact table; each primary key value in a dimension table is
associated with a foreign key of the fact table.
Here a fact table contains all business measures (normally numeric data) and foreign key values, and the dimension
tables hold details about the subject area.
A snowflake schema is basically normalized dimension tables to reduce redundancy in the dimension tables.
A staging area is needed to clean operational data before loading it into the data warehouse.
Cleaning in the sense of merging data which comes from different sources.
Create the OS service, create the init file, start the database in NOMOUNT stage, then give the CREATE DATABASE command.
An OLTP system is basically application-oriented (e.g. a purchase order is functionality of an application),
whereas the DWH concern is subject-oriented (subject in the sense of customer, product, item, time).
OLTP
Application Oriented
Used to run business
Detailed data
Current up to date
Isolated Data
Repetitive access
Clerical User
Performance Sensitive
Few Records accessed at a time (tens)
Read/Update Access
No data redundancy
Database Size 100MB-100 GB
DWH
Subject Oriented
Used to analyze business
Summarized and refined
Snapshot data
Integrated Data
Ad-hoc access
Knowledge User
Performance relaxed
Large volumes accessed at a time(millions)
Mostly Read (Batch Update)
Redundancy present
Database Size 100 GB - few terabytes
A single, complete and consistent store of data obtained from a variety of different sources, made available to end
users in a way they can understand and use in a business context.
A process of transforming data into information and making it available to users in a timely enough manner to make a
difference.
A technique for assembling and managing data from various sources for the purpose of answering business questions,
thus making decisions that were not previously possible.
A data mart is designed for a particular line of business, such as sales, marketing, or finance,
whereas a data warehouse is enterprise-wide/organizational.
The data flow of a data warehouse depends on the approach.
10. What is a slowly changing dimension? What kind of SCD was used in your project?
Dimension attribute values may change over time. (Say for example the customer dimension has
customer_id, name, and address; a customer's address may change over time.)
How will you handle this situation?
There are 3 types: Type 1 - overwrite the existing record; Type 2 - create an additional new record at the time
of change with the new attribute values; Type 3 - create a new field to keep the new values in the original dimension table.
12. What are the types of indexes? And what type of index was used in your project?
Bitmap index, B-tree index, Function based index, reverse key and composite index.
We used Bitmap index in our project for better performance.
13.How is your DWH data modeling(Details about star schema)?
14. A table has 3 partitions but I want to update only the 3rd partition. How will you do it?
Specify the partition name in the update statement, for example:
UPDATE employee PARTITION (partition_name) a SET a.empno = 10 WHERE a.ename = 'Ashok';
15. When you give an update statement, how does the memory flow happen and how does Oracle allocate memory for
that?
Oracle first checks the shared SQL area to see whether the same SQL statement is available; if it is there, it reuses it.
Otherwise it allocates memory in the shared SQL area and then creates run-time memory in the private SQL area to build the
parse tree and execution plan. Once completed, these are stored in the shared SQL area in the previously allocated memory.
16. Write a query to find out the 5th max salary (in Oracle, DB2, SQL Server).
In Oracle:
SELECT MIN(salary)
FROM (SELECT DISTINCT salary FROM employee ORDER BY salary DESC)
WHERE ROWNUM <= 5;
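The same logic can be checked with a quick Python sketch: take the distinct salaries, sort descending, and pick the 5th one (the salary figures here are assumed sample data, not from the original):

```python
# Assumed sample data; 2000 appears twice to show why DISTINCT matters.
salaries = [1000, 2000, 2000, 3000, 4000, 5000, 6000]

# Distinct salaries, highest first; index 4 is the 5th-highest distinct salary.
fifth_max = sorted(set(salaries), reverse=True)[4]
```

Note that a plain "first 5 rows of ORDER BY salary" would return the lowest salaries; the descending sort (and taking the minimum of the top 5) is what makes the query return the 5th maximum.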
17. When you give an update statement, how do the undo/rollback segments work? What are the steps?
Oracle keeps the old values in the undo segment and the new values in the redo entries. When you say rollback, it restores the
old values from the undo segment. When you say commit, it erases the undo segment values and keeps the new values permanently.
Informatica Administration
22.What is a folder?
A folder contains repository objects such as sources, targets, mappings and transformations, which help logically
organize our data warehouse.
Not possible
24. What are shortcuts? Where can they be used? What are the advantages?
There are 2 kinds of shortcuts (local and global): local shortcuts are used in a local repository and global shortcuts in a
global repository. The advantage is reusing an object without creating multiple copies. Say for example you want to use a
source definition in 10 mappings in 10 different folders: without creating 10 copies of the source, you create 10 shortcuts.
Use single-pass reading (use one source qualifier instead of multiple SQs for the same table)
Minimize data type conversions (e.g. Integer to Decimal and back to Integer)
Optimize transformations (when you use Lookup, Aggregator, Filter, Rank and Joiner)
Use caches for lookups
Aggregator - use sorted input, increase the cache size, and minimize input/output ports as much as possible
Use a Filter wherever possible to avoid unnecessary data flow
26. Explain the Informatica architecture?
Informatica consists of client and server. Client tools are Repository Manager, Designer, and Server Manager.
The repository database contains metadata; it is read by the Informatica server, which uses it to read data from the source,
transform it, and load it into the target.
27.How will you do sessions partitions?
Transformation
31.What are the port available for update strategy , sequence generator, Lookup, stored procedure transformation?
Transformation - Ports
Update Strategy - Input, Output
Sequence Generator - Output only
Lookup - Input, Output, Lookup, Return
Stored Procedure - Input, Output
32. Why did you use a connected stored procedure? Why not use an unconnected stored procedure?
33. What are active and passive transformations?
An active transformation can change the number of records passed to the target (example: Filter),
whereas a passive transformation does not change the number of records (example: Expression).
34. What are the tracing levels?
Normal - contains session initialization details and transformation details: number of records rejected, applied
Terse - only initialization details will be there
Verbose Initialization - Normal setting information plus detailed information about the transformation
Verbose Data - Verbose Initialization settings plus all information about the session
Copy all the mappings from the development repository and paste them into the production repository; while pasting it will
prompt whether you want to replace/rename. If you say replace, Informatica replaces all the objects in the production repository.
38. What is the difference between Aggregator and Expression?
Aggregator is an active transformation and Expression is a passive transformation.
An Aggregator transformation is used to perform aggregate calculations on a group of records,
whereas an Expression is used to perform calculations on a single record.
39. Can you use a mapping without a source qualifier?
Not possible. If the source is RDBMS/DBMS/flat file use a Source Qualifier, or use a Normalizer if the source is a COBOL feed.
40. When do you use a Normalizer?
A Normalizer is used when the source is a COBOL feed, or to normalize records: it converts a single input row with
repeating columns into multiple output rows.
45.Can you use one mapping to populate two tables in different schemas?
Yes we can use
46. Explain lookup cache and the various caches?
A Lookup transformation is used to look up values in the source or target tables (primary key values).
Various caches:
Persistent cache (we can save the lookup cache files and reuse them the next time the lookup transformation is
processed)
Re-cache from database (if the persistent cache is not synchronized with the lookup table, you can configure the lookup
transformation to rebuild the lookup cache)
Static cache (when the lookup condition is true, the Informatica server returns a value from the lookup cache; it does
not update the cache while it processes the lookup transformation)
Dynamic cache (the Informatica server dynamically inserts new rows or updates existing rows in the cache and the
target; if we want to look up a target table we can use a dynamic cache)
Shared cache (we can share a lookup cache between multiple transformations in a mapping; 2 lookups in a
mapping can share a single lookup cache)
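A rough Python sketch of the static-vs-dynamic cache behaviour (the function names, row values and return flags are invented for illustration; real PowerCenter uses the NewLookupRow port rather than return strings):

```python
def lookup_static(cache, key):
    # Static cache: return the cached value; the cache is never modified
    # while the lookup is being processed.
    return cache.get(key)

def lookup_dynamic(cache, key, value):
    # Dynamic cache: insert new rows (and flag them for the target) or
    # update existing ones as rows arrive.
    if key not in cache:
        cache[key] = value
        return "insert"
    cache[key] = value
    return "update"

cache = {}
lookup_dynamic(cache, 101, "row-a")   # first sighting: inserted
lookup_dynamic(cache, 101, "row-b")   # seen again: updated
```

This mirrors why a dynamic cache suits target-table lookups: the cache stays in step with the rows being written, so a key inserted earlier in the same run is found on the next lookup.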
47.Which path will the cache be created?
User specified directory. If we say c:\ all the cache files created in this directory.
48.Where do you specify all the parameters for lookup caches?
Lookup property sheet/tab.
49. How do you remove the cache files after the transformation?
After the session completes, the DTM releases the cache memory and deletes the cache files.
If using a persistent cache or incremental aggregation, the cache files will be saved.
50.What is the use of aggregator transformation?
To perform Aggregate calculation
Use a conditional clause to filter data in the expression: SUM(commission, commission > 2000)
Use non-aggregate functions: IIF(MAX(quantity) > 0, MAX(quantity), 0)
51.What are the contents of index and cache files?
Index cache files hold unique group values as determined by the group-by ports in the transformation.
Data cache files hold row data until the necessary calculations are performed.
52.How do you call a store procedure within a transformation?
In an Expression transformation, create a new output port and in the expression write :SP.stored_procedure_name(arguments).
53.Is there any performance issue in connected & unconnected lookup? If yes, How?
Yes.
An unconnected lookup is much faster than a connected lookup because it is not connected to any other
transformation; we call it from another transformation only when needed, so it minimizes the lookup cache values,
whereas a connected lookup is connected to other transformations and keeps all values in the lookup cache.
54.What is dynamic lookup?
When we use a target lookup table, the Informatica server dynamically inserts new values, or updates them if they
already exist, and passes them to the target table.
55. How does Informatica read data if the source has one relational file and one flat file?
Use a Joiner transformation after the source qualifiers, before other transformations.
56. How will you load unique records into a target flat file when the source flat files have duplicate data?
There are 2 ways we can do this: either we can use a Rank transformation or an Oracle external table.
In the Rank transformation, use the group-by port (group the records) and then set the number of ranks to 1. The Rank
transformation returns one value from each group, so the values will be unique.
No, We cant
58.Can you use flat file for lookup table?
No, We cant
59.Without Source Qualifier and joiner how will you join tables?
At the session level we have an option, user-defined join, where we can write the join condition.
60. Update strategy is set to DD_UPDATE but the session level has Insert. What will happen?
Insert takes place, because the session-level option overrides the mapping-level option.
When we want to apply multiple conditions to filter data, we go for a Router. (Say for example out of 50 source records
a filter condition matches 10 records; the remaining 40 records get filtered out, but we still want to apply a few more
filter conditions to those remaining 40 records.)
63.How did you schedule sessions in your project?
Run once (set 2 parameter date and time when session should start)
Run Every (Informatica server run session at regular interval as we configured, parameter Days, hour, minutes, end
on, end after, forever)
Customized repeat (Repeat every 2 days, daily frequency hr, min, every week, every month)
Run only on demand (manually run); this is not session scheduling.
64. How do you use the pre-session and post-session commands in the session wizard? What are they used for?
Post-session is used for the email option: when the session succeeds/fails, send an email. For that we should configure:
Step 1. Have an Informatica startup account and create an Outlook profile for that user.
Step 2. Configure the Microsoft Exchange server in the Mail applet (Control Panel).
Step 3. Configure the Informatica server: the Miscellaneous tab has an option called MS Exchange profile where we
have to specify the Outlook profile name.
Pre-session is used for event scheduling. (Say for example we don't know whether the source file is available or not in a
particular directory; for that we write one DOS command to move the file from directory to destination and set the
event-based scheduling option in the session property sheet: 'Indicator file to wait for'.)
65.What are different types of batches. What are the advantages and dis-advantages of a concurrent batch?
Sequential(Run the sessions one by one)
Concurrent (Run the sessions simultaneously)
Advantage of concurrent batch:
It uses the Informatica server resources concurrently and reduces the time compared with running the sessions separately.
Use this feature when we have multiple sources that process large amounts of data in one session: split into separate
sessions and put them into one concurrent batch to complete quickly.
Disadvantage
Eliminate transformation errors and use a lower tracing level. (Say for example a mapping has 50 transformations; when
a transformation error occurs the Informatica server has to write to the session log file, which affects session performance.)
68.Explain incremental aggregation. Will that increase the performance? How?
Incremental aggregation captures whatever changes are made in the source and uses them for the aggregate calculation
in a session, rather than processing the entire source and recalculating the same calculations each time the session runs.
Therefore it improves session performance.
Only use incremental aggregation in the following situations:
The mapping has aggregate calculations
The source table changes incrementally
You can filter the incremental source data by timestamp
Before aggregation you have to do the following steps:
Use a Filter transformation to remove pre-existing records
Reinitialize the aggregate cache when the source table completely changes. For example, incremental changes happen
daily and complete changes happen monthly; so when the source table completely changes we have to reinitialize the
aggregate cache and truncate the target table, then use the new source table. Choose Reinitialize cache in the
aggregation behavior in the Transformations tab.
69. A batch has 3 sessions, each set to run if the previous completes, but the 2nd fails. What will happen to
the batch?
The batch will fail.
General Project
70. How many mapping, dimension tables, Fact tables and any complex mapping you did? And what is your
database size, how frequently loading to DWH?
I did 22 mappings, 4 dimension tables and one fact table. One complex mapping I did for a slowly changing dimension
table. The database size is 9GB, loading data every day.
71. What are the different transformations used in your project?
Aggregator, Expression, Filter, Sequence generator, Update Strategy, Lookup, Stored Procedure, Joiner, Rank,
Source Qualifier.
72. How did you populate the dimensions tables?
73. What are the sources you worked on?
Oracle
74. How many mappings have you developed on your whole dwh project?
45 mappings
75. What is OS used your project?
Windows NT
76. Explain your project (Fact table, dimensions, and database size)
Fact table contains all business measures (numeric values) and foreign key values, Dimension table contains details
about subject area like customer, product
77.What is difference between Informatica power mart and power center?
Using PowerCenter we can create a global repository;
PowerMart is used to create a local repository.
A global repository can be configured with multiple servers to balance the session load;
a local repository can be configured with only a single server.
78.Have you done any complex mapping?
Developed one mapping to handle slowly changing dimension table.
79.Explain details about DTM?
Once the session starts, the Load Manager starts the DTM, which allocates session shared memory and contains the reader
and writer. The reader reads source data from the source qualifier using an SQL statement and moves the data to the DTM;
the DTM passes the data from transformation to transformation on a row-by-row basis and finally moves it to the writer,
which writes the data into the target using SQL statements.
80.What are the key you used other than primary key and foreign key?
Used a surrogate key to maintain uniqueness and to overcome duplicate values in the primary key.
DWH is a basic architecture (OLTP to data warehouse; from the DWH, OLAP analysis and report building).
82. Difference between PowerMart and PowerCenter?
Using PowerCenter we can create a global repository;
PowerMart is used to create a local repository.
A global repository can be configured with multiple servers to balance the session load;
a local repository can be configured with only a single server.
83.What are the batches and its details?
Sequential(Run the sessions one by one)
Concurrent (Run the sessions simultaneously)
Advantage of concurrent batch:
It uses the Informatica server resources concurrently and reduces the time compared with running the sessions separately.
Use this feature when we have multiple sources that process large amounts of data in one session: split into separate
sessions and put them into one concurrent batch to complete quickly.
Disadvantage
95. Can an unconnected lookup do everything a connected lookup transformation can do?
No. We can't call a connected lookup from within another transformation; the rest of the things are possible.
96. In 5.x can we copy part of mapping and paste it in other mapping?
I think its possible
97. What option do you select for a sessions in batch, so that the sessions run one
after the other?
We have to select an option called 'Run if previous completed'.
98. How do you really know that paging to disk is happening while you are using a lookup transformation?
Assume you have access to server?
We have to collect performance data first, then look at the counter parameter Lookup_readtodisk; if it is greater than 0
then the lookup is reading from disk.
Step 1. Choose the option Collect Performance Data in the General tab of the session property
sheet.
Step 2. Monitor the server, then click Server Requests - Session Performance Details.
Step 3. Locate the performance details file, named session_name.perf, in the session
log file directory.
Step 4. Find the counter parameter Lookup_readtodisk; if it is greater than 0 then the Informatica server
is reading lookup table values from disk. To find out how many rows are in the cache, see
Lookup_rowsincache.
99. List three option available in informatica to tune aggregator transformation?
Use Sorted Input to sort data before aggregation
Use Filter transformation before aggregator
Increase Aggregator cache size
100. Assume there is a text file as source having a binary field. What native data type will Informatica convert this
binary field to in the source qualifier?
Binary data type for a relational source; for a flat file ?
101. Variable v1 has values set as 5 in the designer (default), 10 in the parameter file, 15 in the
repository. While running the session, which value will Informatica read?
Informatica reads the value 10 from the parameter file: the parameter file takes precedence over the repository value,
which takes precedence over the designer default.
102. Joiner transformation is joining two tables s1 and s2. s1 has 10,000 rows and s2 has 1000 rows . Which
table you will set master for better performance of joiner
transformation? Why?
Set table S2 as the master table, because the Informatica server has to keep the master table in the cache; with 1000 rows
in the cache we get better performance than with 10000 rows in the cache.
103. Source table has 5 rows. Rank in rank transformation is set to 10. How many rows the rank
transformation will output?
5 rows; the Rank transformation cannot output more rows than it receives.
104. How to capture performance statistics of individual transformation in the mapping and explain some
important statistics that can be captured?
Use tracing level Verbose data
105. Give a way in which you can implement a real time scenario where data in a table is changing and you
need to look up data from it. How will you configure the lookup transformation for this purpose?
In slowly changing dimension table use type 2 and model 1
106. What is DTM process? How many threads it creates to process data, explain each
thread in brief?
The DTM receives data from the reader and moves it from transformation to transformation on a row-by-row basis. It creates
two main threads: one is the reader and another one is the writer.
107. Suppose session is configured with commit interval of 10,000 rows and source has 50,000 rows explain
the commit points for source based commit & target based commit. Assume appropriate value wherever
required?
Target-based commit (first time when the buffer fills at 7500, next at 15000):
commits at 15000, 22500, 30000, 40000, 50000
Source-based commit (does not affect rows held in the buffer):
commits at 10000, 20000, 30000, 40000, 50000
108.What does first column of bad file (rejected rows) indicates?
First column - row indicator (0, 1, 2, 3)
Second column - column indicator (D, O, N, T)
109. What is the formula for calculation rank data caches? And also Aggregator, data, index caches?
Index cache size = total no. of rows * size of the columns in the group-by or lookup condition (e.g. 50 * 4)
Aggregator/Rank transformation data cache size = (total no. of rows * size of the columns in the condition) +
(total no. of rows * size of the connected output ports)
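A worked example of the formulas above, with assumed figures (50 rows and a 4-byte key column as in the text; the 16-byte output-port size is invented):

```python
rows = 50                # total number of rows (from the "50 * 4" example)
key_col_size = 4         # bytes of the column in the condition
output_ports_size = 16   # assumed total size of the connected output ports per row

# Index cache: one entry per row over the condition columns
index_cache = rows * key_col_size

# Data cache: condition columns plus the connected output ports
data_cache = rows * key_col_size + rows * output_ports_size
```

With these figures the index cache comes to 200 bytes and the data cache to 1000 bytes; real sizes would use the actual column widths of the mapping.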
110. Can unconnected lookup return more than 1 value? No
INFORMATICA TRANSFORMATIONS
Aggregator
Expression
External Procedure
Advanced External Procedure
Filter
Joiner
Lookup
Normalizer
Rank
Router
Sequence Generator
Stored Procedure
Source Qualifier
Update Strategy
XML source qualifier
Expression Transformation
-
You can use the ET to calculate values in a single row before you write to the target.
You can use the ET to perform any non-aggregate calculation.
To perform calculations involving multiple rows, such as sums or averages, use the Aggregator. Unlike the ET, the
Aggregator transformation allows you to group and sort data.
Calculation
-
To use the Expression transformation to calculate values for a single row, you must include the following ports:
an input port for each value used in the calculation,
an output port for the expression.
NOTE
You can enter multiple expressions in a single ET. As long as you enter only one expression for each port, you can
create any number of output ports in the Expression Transformation. In this way, you can use one expression
transformation rather than creating separate transformations for each calculation that requires the same set of data.
Sequence Generator Transformation
-
Create keys
Replace missing values
The Sequence Generator contains two output ports that you can connect to one or more transformations. The server
generates a value each time a row enters a connected transformation, even if that value is not used.
The two ports are NEXTVAL and CURRVAL.
The SGT can be reusable.
You cannot edit the default ports (NEXTVAL, CURRVAL).
SGT Properties
-
Start value
Increment By
End value
Current value
Cycle
(If selected, the server cycles through the sequence range; otherwise it stops at the configured end value)
Reset
No of cached values
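The properties above can be sketched as a small Python class (the class and method names are assumptions for illustration, not the server's actual implementation; only NEXTVAL generation with the Cycle option is modelled):

```python
class SequenceGenerator:
    """Toy sketch of NEXTVAL semantics with Start value, Increment By,
    End value and Cycle properties."""
    def __init__(self, start=1, increment=1, end=None, cycle=False):
        self.start, self.increment = start, increment
        self.end, self.cycle = end, cycle
        self.currval = None       # last value handed out
        self._next = start

    def nextval(self):
        value = self._next
        self.currval = value
        self._next += self.increment
        if self.end is not None and self._next > self.end:
            if self.cycle:
                self._next = self.start  # cycle through the sequence range again
            # otherwise the server stops at the configured end value
        return value
```

For example, a generator with start 1, end 3 and Cycle selected yields 1, 2, 3, 1, 2, ... instead of stopping at 3.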
NOTE
Difference between Aggregator and Expression Transformation:
We can use the Aggregator to perform calculations on groups, whereas the Expression transformation permits
you to do calculations on a row-by-row basis only.
The server performs aggregate calculations as it reads, and stores the necessary group and row data in an
aggregator cache.
When incremental aggregation occurs, the server passes new source data through the mapping and uses historical
cache data to perform the new calculations incrementally.
Components
-
Aggregate Expression
Group by port
Aggregate cache
When a session is being run using aggregator transformation, the server creates Index and data caches in memory to
process the transformation. If the server requires more space, it stores overflow values in cache files.
NOTE
The performance of aggregator transformation can be improved by using Sorted Input option. When this is selected,
the server assumes all data is sorted by group.
Incremental Aggregation
-
Using this, you apply captured changes in the source to the aggregate calculations in a session. If the source changes
only incrementally and you can capture the changes, you can configure the session to process only those changes.
This allows the server to update the target incrementally, rather than forcing it to process the entire source and
recalculate the same calculations each time you run the session.
Steps:
The first time you run a session with incremental aggregation enabled, the server processes the entire source.
At the end of the session, the server stores the aggregate data from that session run in two files, the index file and the
data file. The server creates the files in a local directory.
The second time you run the session, use only the changes in the source as the source data for the session. The server
then performs the following actions:
For each input record, the session checks the historical information in the index file for a corresponding group, then:
If it finds a corresponding group
The server performs the aggregate operation incrementally, using the aggregate data for that group, and
saves the incremental changes.
Else
Server create a new group and saves the record data
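The steps above can be sketched as a small Python function, with a plain dict standing in for the persisted index/data cache (the function name and sample rows are invented; a running SUM per group is assumed as the aggregate):

```python
def incremental_aggregate(cache, new_rows):
    """cache maps group key -> running total (the persisted index/data cache);
    new_rows is the incremental source data for this session run."""
    for key, value in new_rows:
        if key in cache:
            cache[key] += value   # corresponding group found: aggregate incrementally
        else:
            cache[key] = value    # else: create a new group and save the record data
    return cache

cache = incremental_aggregate({}, [("A", 10), ("B", 5)])    # first run: entire source
cache = incremental_aggregate(cache, [("A", 3), ("C", 7)])  # second run: changes only
```

The second run touches only the changed rows, yet the totals come out the same as reprocessing everything — which is the whole point of keeping the historical cache.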
17.
DEC
13
Scenario:
The Router transformation is active, but some people say it is sometimes passive. What is the reason behind that?
Solution:
The Router transformation has a special feature: the Default group. With the Default group every input row is passed on
to some group, so the row count is preserved and it behaves passively. If we avoid the Default group through the
transformation settings, non-matching rows are dropped, and it is active.
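A toy Python sketch of the row-count argument (the group names and conditions are invented): with a default group every row lands somewhere, so the total row count is preserved; without it, non-matching rows are dropped.

```python
def route(rows, conditions, use_default=True):
    """Each row goes to every group whose condition it satisfies; with a
    default group, non-matching rows land there instead of being dropped."""
    groups = {name: [] for name in conditions}
    if use_default:
        groups["DEFAULT"] = []
    for row in rows:
        matched = False
        for name, cond in conditions.items():
            if cond(row):
                groups[name].append(row)
                matched = True
        if use_default and not matched:
            groups["DEFAULT"].append(row)
    return groups

groups = route([1, 2, 3, 4, 5], {"EVEN": lambda r: r % 2 == 0})
```

Here 5 rows in yields 5 rows out across the groups (passive behaviour); calling `route` with `use_default=False` would emit only the 2 matching rows (active behaviour).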
2.
DEC
13
I have a flat file, in which I have two fields emp_id, emp_name. The data is like this: emp_id emp_name -
101 soha, 101 ali, 101 kahn, 102 Siva, 102 shanker, 102 Reddy. How to merge the names so that my
output is like this: 101 Soha ali Kahn, 102 Siva shanker Reddy
Scenario:
I have a flat file, in which I have two fields emp_id, emp_name. The data is like this:
emp_id  emp_name
101     soha
101     ali
101     kahn
102     Siva
102     shanker
102     Reddy
How to merge the names so that my output is like this:
Emp_id  Emp_name
101     Soha ali Kahn
102     Siva shanker Reddy
Solution:
Follow the below steps:
1. Use a Sorter transformation and sort the data by emp_id.
2. In an Expression transformation, create a variable port V_emp_name that concatenates the names for each emp_id
(something like V_emp_name || ' ' || emp_name while the emp_id stays the same), plus a counter port that increments
within each group.
3. Send emp_id, the concatenated name and the counter forward.
4. Send emp_id and counter to an Aggregator, where you take the max counter for each id, so the output will be:
Emp_id  Counter
101     3
102     6
5. Join the output of steps 3 and 4; you will get the desired output:
Emp_id Emp_name
103
Soha ali Kahn
104
Siva shanker Reddy
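The sorter/expression/aggregator steps above amount to a sort followed by a per-id concatenation; a plain-Python sketch (illustrative only, not Informatica syntax):

```python
from itertools import groupby

rows = [(101, "soha"), (101, "ali"), (101, "kahn"),
        (102, "Siva"), (102, "shanker"), (102, "Reddy")]

rows.sort(key=lambda r: r[0])  # step 1: sort by emp_id
# steps 2-5: accumulate the names within each emp_id and keep the final value
merged = {emp_id: " ".join(name for _, name in grp)
          for emp_id, grp in groupby(rows, key=lambda r: r[0])}
print(merged)  # {101: 'soha ali kahn', 102: 'Siva shanker Reddy'}
```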
Add a comment
3.
DEC
12
Add a comment
4.
5.
DEC
12
Add a comment
6.
DEC
12
Add a comment
7.
DEC
12
Scenario:
Differences between ROWID and ROWNUM
Solution:
Rowid
Rowid is an Oracle-internal ID that is allocated every time a new record is inserted in a table. This ID is unique and cannot be changed by the user.
Rowid is permanent.
Rowid is a globally unique identifier for a row in a database. It is created at the time the row is inserted into the table, and destroyed when it is removed from the table.
Rownum
Rownum is a row number returned by a select statement.
Rownum is temporary.
The ROWNUM pseudocolumn returns a number indicating the order in which Oracle selects the row from a table or set of joined rows.
Add a comment
8.
9.
DEC
12
Stored Procedures vs. Functions
A function should return at least one output value, and can return more than one using OUT arguments.
A stored procedure can be used to implement business logic, whereas a function is used for calculations.
A stored procedure is a pre-compiled statement, but a function is not a pre-compiled statement.
A stored procedure accepts more than one argument, whereas a function does not accept arguments.
Add a comment
10.
DEC
12
Add a comment
11.
DEC
12
View vs. Materialized View
A view has a logical existence; it does not contain data. A materialized view has a physical existence.
A view is not a database object. A materialized view is a database object.
We cannot perform DML operations on a view. We can perform DML operations on a materialized view.
When we do select * from a view, it fetches the data from the base table. When we do select * from a materialized view, it fetches the data from the materialized view itself.
A view cannot be scheduled to refresh. A materialized view can be scheduled to refresh.
We can keep aggregated data in a materialized view, and a materialized view can be created based on multiple tables.
Materialized View
A materialized view is very useful for reporting. If we don't have the materialized view, the report will fetch the data directly from the dimensions and facts, which is very slow since it involves multiple joins. If we put the same report logic into a materialized view, we can fetch the data directly from it for reporting purposes, so we avoid the multiple joins at report run time.
It is always necessary to refresh the materialized view; the report can then simply perform a select statement on the materialized view.
Posted 12th December 2011 by Prafull Dangore
Add a comment
12.
13.
DEC
12
Scenario:
SQL command to kill a session/sid
Solution:
ALTER SYSTEM KILL SESSION 'sid,serial#';
Query to find SID :
select module, a.sid,machine, b.SQL_TEXT,piece
from v$session a,v$sqltext b
where status='ACTIVE'
and a.SQL_ADDRESS=b.ADDRESS
--and a.USERNAME='NAME'
and sid=95
order by sid,piece;
Query to find serial#
select * from v$session where type = 'USER' and status = 'ACTIVE'; -- to get serial#
Add a comment
14.
DEC
12
Add a comment
15.
DEC
12
Add a comment
16.
17.
DEC
Design a mapping to load the first record from a flat file into one table
A, the last record from a flat file into table B and the remaining records
into table C?
Scenario:
Design a mapping to load the first record from a flat file into one table A, the last record from a flat
file into table B and the remaining records into table C?
Solution:
1. After the source qualifier, pass the rows to an Expression transformation (exp1).
2. In exp1, create an output port O_row_number that increments by +1 for each row, so the rows are numbered 1, 2, 3, 4, 5, ...
3. Table A - In one pipeline, send data from the exp1 transformation to a filter, where you filter the first row with O_row_number = 1 into table A.
4. Table B - Now again, there are two ways to identify the last record:
1. Pass all rows from the exp1 transformation to an Aggregator and don't select any column as a group-by port; it will send the last record to table B.
2. By using MAX in the Aggregator.
5. Table C - Now send the output of step 4 to an Expression (exp2), where you will get O_row_number = 5; add a dummy port with value 1 into the same expression, then join this exp2 with the very first exp1 so that you get output like below:
Input, O_row_number, O_last_row_number
a, 1, 5
b, 2, 5
c, 3, 5
d, 4, 5
e, 5, 5
Now pass the data to a filter and add the condition O_row_number <> 1 AND O_row_number <> O_last_row_number.
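Outside Informatica, the same first/last/middle routing can be sketched in a few lines of Python (a hypothetical helper, for illustration only):

```python
def split_rows(rows):
    """Route the first row to table A, the last to table B, the rest to table C."""
    table_a = rows[:1]                            # O_row_number = 1
    table_b = rows[-1:] if len(rows) > 1 else []  # O_row_number = O_last_row_number
    table_c = rows[1:-1]                          # everything in between
    return table_a, table_b, table_c

a, b, c = split_rows(["a", "b", "c", "d", "e"])
print(a, b, c)  # ['a'] ['e'] ['b', 'c', 'd']
```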
Add a comment
18.
DEC
Add a comment
19.
DEC
There is target table contain only 1 column Col. Design a mapping so that the target table contains 3
rows as follows:
Col
a
b
c
Without using Normaliser transformation.
Solution:
Please follow the below steps
1. After the source qualifier, send data to three different Expression transformations: pass Col1 to Exp1, Col2 to Exp2 and Col3 to Exp3.
2. Then pass the data from Exp1, Exp2 & Exp3 to 3 instances of the same target table.
Posted 7th December 2011 by Prafull Dangore
Add a comment
20.
21.
DEC
Note: You can depend on operating-system native schedulers (Windows Scheduler on Windows, crontab on Unix) or any third-party scheduling tool, which gives more flexibility in setting times and more control over running the job.
Posted 7th December 2011 by Prafull Dangore
Add a comment
22.
DEC
Add a comment
23.
DEC
Run from a Windows command window:
IF exist E:\softs\Informatica\server\infa_shared\SrcFiles\FILE_NAME*.csv pmcmd startworkflow -sv service -d Dom -u userid -p password wf_workflow_name
Posted 6th December 2011 by Prafull Dangore
Add a comment
24.
25.
DEC
server:
pmcmd
Informatica Logic Building - select all the distinct regions and apply it
to 'ALL'
Scenario:
I have a task for which I am not able to find a logic. It is exception handling.
I have a column 'region' in table 'user'. 1 user can belong to more than 1 region. Total I have 10
regions. Exception is 1 user has 'ALL' in the region column. I have to select all the distinct regions
and apply it to 'ALL'. the output should have 10 records of the user corresponding to each region.
How can I equal 'ALL' to 10 regions and get 10 records into the target?
Solution:
Please follow the steps below:
1. Use two flows in your mapping; in the first flow, pass all data with Region != 'ALL'.
2. In the second flow, pass the data with Region = 'ALL' to an Expression, where you create 10 output ports with the values of the 10 regions.
3. Then pass all columns to a Normalizer; in the Normalizer, create an output port with the occurrence for the Region port set to 10.
4. Pass the data to the target table.
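A plain-Python sketch of the fan-out (the region names below are made up for illustration):

```python
REGIONS = ["R1", "R2", "R3", "R4", "R5", "R6", "R7", "R8", "R9", "R10"]

def expand(user_rows):
    """user_rows: (user, region) pairs; 'ALL' fans out to one row per region."""
    out = []
    for user, region in user_rows:
        if region == "ALL":
            # Second flow: expression ports + normalizer with occurrence 10.
            out.extend((user, r) for r in REGIONS)
        else:
            # First flow: Region != 'ALL' passes through unchanged.
            out.append((user, region))
    return out

print(len(expand([("u1", "R1"), ("u2", "ALL")])))  # 11
```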
Posted 6th December 2011 by Prafull Dangore
Add a comment
26.
DEC
Concatenate the Data of Just the First Column of a Table in One Single Row
Solution:
Step 1: Pass Emp_Number to an Expression transformation.
Step 2: In the Expression transformation, use variable ports var1, var2 and var3, with the accumulating port defined as:
var3 : IIF(ISNULL(var1), Emp_Number, var3 || ' ' || Emp_Number)
Step 3: In an output port, set out_Emp_Number : var3.
Step 4: Pass this port through an Aggregator transformation. Don't do any group-by or aggregation; the Aggregator then returns the last row, which carries the fully concatenated value.
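The accumulate-then-keep-the-last-row trick above is essentially a fold; a plain-Python sketch for illustration:

```python
from functools import reduce

emp_numbers = ["E01", "E02", "E03"]

# Mirrors var3 : IIF(ISNULL(var1), Emp_Number, var3 || ' ' || Emp_Number):
# start empty, append each value, and the final accumulator is the answer.
concat = reduce(lambda acc, x: x if acc is None else acc + " " + x,
                emp_numbers, None)
print(concat)  # E01 E02 E03
```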
View comments
27.
NOV
29
The parameter should be explicitly declared as an OUT parameter.
3) IN OUT Parameter:
The IN OUT parameter allows us to pass values into a procedure and get output values from the
procedure. This parameter is used if the value of the IN parameter can be changed in the calling
program.
By using IN OUT parameter we can pass values into a parameter and return a value to the calling
program using the same parameter. But this is possible only if the value passed to the procedure
and output value have a same datatype. This parameter is used if the value of the parameter will be
changed
in
the
procedure.
The General syntax to create an IN OUT parameter is
CREATE [OR REPLACE] PROCEDURE proc3 (param_name IN OUT datatype)
The below examples show how to create stored procedures using the above three types of
parameters.
Example 1: Using IN and OUT parameters:
Let's create a procedure which gets the name of the employee when the employee id is passed.
1> CREATE OR REPLACE PROCEDURE emp_name (id IN NUMBER, emp_name OUT VARCHAR2)
2> IS
3> BEGIN
4>    SELECT first_name INTO emp_name
5>    FROM emp_tbl WHERE empID = id;
6> END;
7> /
We can call the procedure emp_name in this way from a PL/SQL Block.
1> DECLARE
2> empName varchar(20);
3> CURSOR id_cur IS SELECT id FROM emp_ids;
4> BEGIN
5> FOR emp_rec in id_cur
6> LOOP
7>    emp_name(emp_rec.id, empName);
8>    dbms_output.put_line('The employee ' || empName || ' has id ' || emp_rec.id);
9> END LOOP;
10> END;
11> /
In the above PL/SQL block:
In line no 3, we are creating a cursor id_cur which contains the employee ids.
In line no 7, we are calling the procedure emp_name, passing the id as the IN parameter and empName as the OUT parameter.
In line no 8, we are displaying the id and the employee name which we got from the procedure emp_name.
Example 2: Using an IN OUT parameter in procedures:
1> CREATE OR REPLACE PROCEDURE emp_salary_increase
2> (emp_id IN emp_tbl.empID%type, salary_inout IN OUT emp_tbl.salary%type)
3> IS
4>    tmp_sal number;
5> BEGIN
6>    SELECT salary
7>    INTO tmp_sal
8>    FROM emp_tbl
9>    WHERE empID = emp_id;
10>   IF tmp_sal between 10000 and 20000 THEN
11>      salary_inout := tmp_sal * 1.2;
12>   ELSIF tmp_sal between 20000 and 30000 THEN
13>      salary_inout := tmp_sal * 1.3;
14>   ELSIF tmp_sal > 30000 THEN
15>      salary_inout := tmp_sal * 1.4;
16>   END IF;
17> END;
18> /
The below PL/SQL block shows how to execute the above 'emp_salary_increase' procedure.
1> DECLARE
2>    CURSOR updated_sal is
3>    SELECT empID, salary
4>    FROM emp_tbl;
5>    pre_sal number;
6> BEGIN
7>    FOR emp_rec IN updated_sal LOOP
8>       pre_sal := emp_rec.salary;
9>       emp_salary_increase(emp_rec.empID, emp_rec.salary);
10>      dbms_output.put_line('The salary of ' || emp_rec.empID ||
11>         ' increased from ' || pre_sal || ' to ' || emp_rec.salary);
12>   END LOOP;
13> END;
14> /
Add a comment
28.
29.
NOV
29
Explicit Cursors
An explicit cursor is defined in the declaration section of the PL/SQL Block. It is created on a
SELECT Statement which returns more than one row. We can provide a suitable name for the
cursor.
The General Syntax for creating a cursor is as given below:
CURSOR cursor_name IS select_statement;
There are four steps in using an Explicit Cursor.
In the above example we are creating a cursor emp_cur on a query which returns the records of
the
employees with salary greater than 5000. Here emp_tbl in the table which contains records of
all
the
employees.
2) Accessing the records in the cursor:
Once the cursor is created in the declaration section, we can access it in the execution section of the PL/SQL program, fetching the records one at a time.
OPEN cursor_name;
FETCH cursor_name INTO record_name;
OR
FETCH cursor_name INTO variable_list;
When a cursor is opened, the first row becomes the current row. When the data is fetched it is
copied to the record or variables and the logical pointer moves to the next row and it becomes the
current row. On every fetch statement, the pointer moves to the next row. If you want to fetch after
the last row, the program will throw an error. When there is more than one row in a cursor we can
use loops along with explicit cursor attributes to fetch all the records.
Points to remember while fetching a row:
We can fetch the rows in a cursor to a PL/SQL Record or a list of variables created in the PL/SQL
Block.
If you are fetching a cursor to a PL/SQL Record, the record should have the same structure as the
cursor.
If you are fetching a cursor to a list of variables, the variables should be listed in the same order in
the fetch statement as the columns are present in the cursor.
General Form of using an explicit cursor is:
DECLARE
variables;
records;
create a cursor;
BEGIN
OPEN cursor;
FETCH cursor;
process the records;
CLOSE cursor;
END;
Let's look at the example below.
Example 1:
1> DECLARE
2>    emp_rec emp_tbl%rowtype;
3>    CURSOR emp_cur IS
4>    SELECT *
5>    FROM emp_tbl
6>    WHERE salary > 10;
7> BEGIN
8>    OPEN emp_cur;
9>    FETCH emp_cur INTO emp_rec;
10>   dbms_output.put_line(emp_rec.first_name || ' ' || emp_rec.last_name);
11>   CLOSE emp_cur;
12> END;
In the above example, first we are creating a record emp_rec of the same structure as of table
emp_tbl in line no 2. We can also create a record with a cursor by replacing the table name with the
cursor name. Second, we are declaring a cursor emp_cur from a select query in line no 3 - 6. Third,
we are opening the cursor in the execution section in line no 8. Fourth, we are fetching the cursor to
the record in line no 9. Fifth, we are displaying the first_name and last_name of the employee in the
record emp_rec in line no 10. Sixth, we are closing the cursor in line no 11.
Attribute: Return values (Example)
%FOUND: TRUE if the fetch statement returns at least one row; FALSE if it doesn't return a row. (cursor_name%FOUND)
%NOTFOUND: TRUE if the fetch statement doesn't return a row; FALSE if it returns at least one row. (cursor_name%NOTFOUND)
%ROWCOUNT: The number of rows fetched by the fetch statement so far; 0 if no row has been fetched. (cursor_name%ROWCOUNT)
%ISOPEN: TRUE if the cursor is open; FALSE if it is closed. (cursor_name%ISOPEN)
In the above example we are using two cursor attributes %ISOPEN and %NOTFOUND.
In line no 6, we are using the cursor attribute %ISOPEN to check if the cursor is open, if the
condition is true the program does not open the cursor again, it directly moves to line no 9.
In line no 11, we are using the cursor attribute %NOTFOUND to check whether the fetch returned any row. If no row is found the program exits, a condition which occurs when you fetch the cursor after the last row; if a row is found, the program continues.
We can use %FOUND in place of %NOTFOUND and vice versa. If we do so, we need to reverse the
logic of the program. So use these attributes in appropriate instances.
Cursor with a While Loop:
Lets modify the above program to use while loop.
1> DECLARE
2> CURSOR emp_cur IS
3> SELECT first_name, last_name, salary FROM emp_tbl;
4> emp_rec emp_cur%rowtype;
5> BEGIN
6>   IF NOT emp_cur%ISOPEN THEN
7>      OPEN emp_cur;
8>   END IF;
9>   FETCH emp_cur INTO emp_rec;
10>  WHILE emp_cur%FOUND
11>  LOOP
12>     dbms_output.put_line(emp_rec.first_name || ' ' || emp_rec.last_name
13>        || ' ' || emp_rec.salary);
15>     FETCH emp_cur INTO emp_rec;
16>  END LOOP;
17> END;
18> /
In the above example, in line no 10 we are using %FOUND to evaluate if the first fetch statement in
line no 9 returned a row, if true the program moves into the while loop. In the loop we use fetch
statement again (line no 15) to process the next row. If the fetch statement is not executed once
before the while loop the while condition will return false in the first instance and the while loop is
skipped. In the loop, before fetching the record again, always process the record retrieved by the
first fetch statement, else you will skip the first row.
Cursor with a FOR Loop:
When using FOR LOOP you need not declare a record or variables to store the cursor values, need
not open, fetch and close the cursor. These functions are accomplished by the FOR LOOP
automatically.
General Syntax for using FOR LOOP:
FOR record_name IN cursor_name
LOOP
process the row...
END LOOP;
Lets use the above example to learn how to use for loops in cursors.
1> DECLARE
2> CURSOR emp_cur IS
3> SELECT first_name, last_name, salary FROM emp_tbl;
4> emp_rec emp_cur%rowtype;
5> BEGIN
6> FOR emp_rec in emp_cur
7> LOOP
8>    dbms_output.put_line(emp_rec.first_name || ' ' || emp_rec.last_name
9>       || ' ' || emp_rec.salary);
10> END LOOP;
11> END;
12> /
In the above example, when the FOR loop is processed, a record emp_rec of structure emp_cur gets created, the cursor is opened, the rows are fetched into the record emp_rec, and the cursor is closed after the last row is processed. By using a FOR loop in your program, you can reduce the number of lines in the program.
NOTE: In the examples given above, we are using the backward slash / at the end of the program. This indicates to the Oracle engine that the PL/SQL program has ended and it can begin processing the statements.
Posted 29th November 2011 by Prafull Dangore
Add a comment
30.
NOV
24
Scenario: How to check a table's size in Oracle 9i?
Solution:
select segment_name table_name,
       sum(bytes)/(1024*1024) table_size_meg
from user_extents
where segment_type = 'TABLE'
  and segment_name = 'TABLE_NAME'
group by segment_name;
Add a comment
31.
NOV
Step 2: Create a mapping variable, $$MappingDateVariable and hard code the value from which
date you need to extract.
Step 3: In the mapping, use the variable function to set the variable value to increment one day each
time the session runs.
Let's say you set the initial value of $$MappingDateVariable to 11/16/2010. The first time the Integration Service runs the session, it reads only rows dated 11/16/2010 and sets $$MappingDateVariable to 11/17/2010, saving 11/17/2010 to the repository at the end of the session. The next time it runs the session, it reads only rows from 11/17/2010.
Add a comment
32.
33.
NOV
Add a comment
34.
NOV
I get the output file's first field like #id,e_ID,pt_Status, but I don't want the #
Scenario:
My source is a .csv, staging is Oracle, and the target is a .csv. I get the output's first field with # along with the column name, and I want to delete the dummy files; my server is Windows.
Solution:
It is not a problem. You need to provide the target file path and the name: in the input filename and output filename you can provide the file location and the name you want to have in the target file (final file).
Ex:
Oracle_emp(source)--> SQ-->Logic-->TGT(emp.txt)(Flatfile)
In the post-session success command:
sed 's/^#//g' d:\Informaticaroot\TGTfiles\ emp.txt > d:\Informaticaroot\TGTfiles\
Add a comment
35.
OCT
21
Can anyone help me to do this below in Informatica? From the source, the data is coming as:
OLD_ID   NEW_ID
101      102
102      103
103      104
105      106
106      108
The output should be as below:
OLD_ID   NEW_ID
101      104
102      104
103      104
105      108
106      108
Solution:
Mapping:
Sq def --> Exp1 --> Exp2 --> Jnr --> TGT, with an Agg branch from Exp2 feeding the second input of the Jnr.
Explanation:
In Exp1, add two variable ports as shown below:
OLD_ID   NEW_ID   Diff_of_rows      Group_id
101      102      1                 1
102      103      1 (102-101)       1
103      104      1 (103-102)       1
105      106      2 (105-103)       2
106      108      1 (106-105)       2
Diff_of_rows - maintain the OLD_ID of the previous row in an expression variable, then subtract it from the current row's OLD_ID.
Group_id - starting with 1; whenever Diff_of_rows is greater than 1 (i.e. the OLD_IDs are no longer consecutive), increment Group_id by 1.
Then send the rows to the Agg, grouping by Group_id and keeping the last NEW_ID of each group, so the Agg o/p is:
Group_id   NEW_ID
1          104
2          108
Then join the Exp2 o/p with the Agg o/p on the Group_id column, so you get the required o/p:
OLD_ID   NEW_ID
101      104
102      104
103      104
105      108
106      108
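The gap-detection grouping above can be sketched in plain Python (illustrative only; Group_id plays the role of the expression variable):

```python
rows = [(101, 102), (102, 103), (103, 104), (105, 106), (106, 108)]  # (OLD_ID, NEW_ID)

groups, prev_old, gid = [], None, 0
for old_id, new_id in rows:
    if prev_old is None or old_id - prev_old > 1:  # Diff_of_rows > 1: new group
        gid += 1
    groups.append((old_id, new_id, gid))
    prev_old = old_id

final = {g: new for _, new, g in groups}            # Agg: last NEW_ID per group wins
result = [(old, final[g]) for old, _, g in groups]  # Jnr: join back on Group_id
print(result)  # [(101, 104), (102, 104), (103, 104), (105, 108), (106, 108)]
```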
Add a comment
36.
37.
OCT
21
I am trying to abort a session in the Workflow Monitor by using the 'Abort' option.
But the status of the session is still being shown as 'Aborting' and has remained the same for the past 4 days. Finally I had to request the UNIX team to kill the process.
Could anybody let me know the reason behind this, as I couldn't find any info in the log file as well.
Solution:
- If the session you want to stop is part of a batch, you must stop the batch. If the batch is part of a nested batch, stop the outermost batch.
- When you issue the stop command, the server stops reading data. It continues processing, writing data and committing data to targets.
- If the server cannot finish processing and committing data, you can issue the ABORT command. It is similar to the stop command, except it has a 60-second timeout. If the server cannot finish processing and committing data within 60 seconds, you need to kill the DTM process to terminate the session.
As you said, to kill the process we need to contact the UNIX admin. But last time I coordinated with the Oracle team and updated the OPB table info related to the workflow status.
Add a comment
38.
OCT
20
LENGTH(LTRIM(RTRIM(column_name))) <> 0 -- in a Filter transformation
OR
IIF(ISNULL(column_name) OR LTRIM(RTRIM(column_name)) = '', 0, 1) -- do this in an Expression transformation and use this flag in the Filter.
Posted 20th October 2011 by Prafull Dangore
0
Add a comment
39.
OCT
20
I have my source data like below:
ID   Line-no  Text
529  3        DI-9001
529  4        DI-9003
840  2        PR-031
840  2        DI-9001
616  1        PR-029
874  2        DI-9003
874  1        PR-031
959  1        PR-019
Now I want my target to be:
ID   Line-no  Text
529  3        DI-9001
529  4        DI-9003
840  2        PR-031&DI-9001
616  1        PR-029
874  2        DI-9003
874  1        PR-031
959  1        PR-019
It means if both the ID and the LINE_NO are the same, then the TEXT should concatenate; else no change.
Solution:
The mapping flow is like this:
source-->sq-->srttrans-->exptrans--->aggtrans--->target
srttrans ---> sort by ID and Line_no, ASC.
exp --> use variable ports as below:
ID (i/o)
Line_no (i/o)
Text (i)
text_v : IIF(ID = pre_id AND Line_no = pre_line_no, text_v || '&' || Text, Text)
pre_id (v) : ID
pre_line_no (v) : Line_no
Text_op : text_v
aggtrans --> group by ID and Line_no. It will return the last row of each group, which holds the full concatenation.
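The same sort-then-concatenate-within-(ID, Line_no) logic, sketched in plain Python for illustration:

```python
from itertools import groupby

rows = [(529, 3, "DI-9001"), (529, 4, "DI-9003"), (840, 2, "PR-031"),
        (840, 2, "DI-9001"), (616, 1, "PR-029")]

rows.sort(key=lambda r: (r[0], r[1]))  # sorter: by ID, Line_no ascending
# expression + aggregator: concatenate Text within each (ID, Line_no) group
target = [(i, ln, "&".join(t for _, _, t in grp))
          for (i, ln), grp in groupby(rows, key=lambda r: (r[0], r[1]))]
print(target)
```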
Add a comment
40.
41.
OCT
20
I have a record like this (field1 is a text field):
field1 = "id='102',name='yty,wskjd',city='eytw'"
Note: sometimes the data might come as [id,name,city] and sometimes as [code,name,id]; the order varies.
I need to store the value of field1 into different fields:
value1 = id='102'
value2 = name='yty,wskjd'
value3 = city='eytw'
If I split the record on a comma (,) then the result won't come as expected, as there is a comma (,) in the value of name.
Is there a way to achieve the solution more easily, i.e. if a comma comes between two single quotes then we have to suppress the comma (,)? I gave a try with different in-built functions but couldn't make it work. Is there a way to read the data between two single quotes?
Solution:
Please try the solution below; it may help to some extent.
Field1 = "id='102',name='yty,wskjd',city='eytw'"
Steps:
1. v_1 = Replace(field1, '"', '')  -- i.e. no double quotes
2. v_2 = substring-after(v_1, id=)    -- O/P: '102',name='yty,wskjd',city='eytw'
3. v_3 = substring-after(v_1, name=)  -- O/P: 'yty,wskjd',city='eytw'
4. v_4 = substring-after(v_1, city=)  -- O/P: 'eytw'
5. v_5 = substring-before(v_2, name)  -- O/P: '102',
6. v_6 = substring-before(v_3, city)  -- O/P: 'yty,wskjd',
7. value1 = replace(v_5, ',', '')  -- O/P: '102'
8. value2 = strip the trailing comma from v_6  -- O/P: 'yty,wskjd' (note: replacing every comma would also remove the comma inside the name, so remove only the trailing one)
9. value3 = v_4  -- O/P: 'eytw'
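Outside Informatica's expression language, the "read the data between two single quotes" idea maps naturally to a regular expression; a Python sketch:

```python
import re

field1 = "id='102',name='yty,wskjd',city='eytw'"

# Match key='value' pairs directly, so commas inside the quotes never split a value.
pairs = re.findall(r"(\w+)='([^']*)'", field1)
print(pairs)  # [('id', '102'), ('name', 'yty,wskjd'), ('city', 'eytw')]
```

Because the pattern matches each pair independently, it works regardless of whether the record comes as [id,name,city] or [code,name,id].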
Add a comment
42.
OCT
20
Scenario:
How to load only new data from one table to another.
For example: I have a mapping from a source table (which contains bank details) to a target table. The first time, I load all the data from source to target. If I run the mapping the next day, I need to get only the data that was newly entered in the source table. So the first run has to load all the data from source to target; for the second or third run, if there is any new record in the source table, only that record must be loaded to the target, by comparing both the source and the target.
How to use the Lookup transformation for this issue?
Solution:
1) In mapping, create a lookup on target table and select dynamic lookup cache in property tab, once
you check it you can see NewLookupRow column in lookup port through which you can identify
whether incoming rows are new or existing. So after lookup you can use router to insert or update it
in target table.
Also, in the lookup ports you can use an associated port to compare specific/all columns of the target lookup table with the source columns. It is a connected lookup, where you send source rows to the lookup as input and/or output ports and use the lookup ports as output and lookup.
OR
2) If there is a primary key column in the target table, then we can create a lookup on the target table and match the TGT primary key with the source primary key. If the lookup finds a match, ignore those records; if there is no match, insert those records into the target.
The logic should be as below:
SQ--> LKP--> FILTER-->TGT
In the lookup, match the ID column from the source with the ID column in the target. The lookup will return the IDs if they are available in the target, else it will return a null value.
In the filter, allow only the null ID values returned from the lookup.
OR
3) If you have a datestamp in the source table, then you can pull only the newly inserted records from the source table based on the timestamp (this approach is applicable only if the source table has a last-modified-date column).
Add a comment
43.
OCT
20
Add a comment
44.
45.
OCT
19
Lookup performance
Unwanted columns:
By default, when you create a lookup on a table, PowerCenter gives you all the columns in
the table, but be sure to delete the unwanted columns from the lookup as they affect the
lookup cache very much. You only need columns that are to be used in lookup condition and
the ones that have to be returned from the lookup.
SQL query:
We will start from the database. Find the execution plan of the SQL override and see if you
can add some indexes or hints to the query to make it fetch data faster. You may have to
take the help of a database developer to accomplish this if you, yourself are not an SQLer.
Size of the source versus size of the lookup:
Let us say, you have 10 rows in the source and one of the columns has to be checked
against a big table (1 million rows). Then PowerCenter builds the cache for the lookup table
and then checks the 10 source rows against the cache. It takes more time to build the
cache of 1 million rows than going to the database 10 times and lookup against the table
directly.
Use an uncached lookup instead of building the static cache, as the number of source rows is quite less than that of the lookup.
Conditional call of lookup:
Instead of going for connected lookups with filters for a conditional lookup call, go for
unconnected lookup. Is the single column return bothering for this? Go ahead and change
the SQL override to concatenate the required columns into one big column. Break them at
the calling side into individual columns again.
JOIN instead of Lookup:
In the same context as above, if the Lookup transformation is after the source qualifier and
there is no active transformation in-between, you can as well go for the SQL over ride of
source qualifier and join traditionally to the lookup table using database joins, if both the
tables are in the same database and schema.
Increase cache:
If none of the above seems to be working, then the problem is certainly with the cache. The
cache that you assigned for the lookup is not sufficient to hold the data or index of the
lookup. Whatever data that doesn't fit into the cache is spilt into the cache files designated
in $PMCacheDir. When PowerCenter doesn't find the data you are looking up in the cache, it swaps the data from the file to the cache and keeps doing this until it finds the data. This is quite expensive, for obvious reasons, being an I/O operation. Increase the cache so that the whole data resides in memory.
What if your data is huge and your whole system cache is less than that? Don't promise
PowerCenter the amount of cache that it can't be allotted during the runtime. If you
promise 10 MB and during runtime, your system on which flow is running runs out of cache
and can only assign 5MB. Then PowerCenter fails the session with an error.
Cache-file file-system:
In many cases, if you have cache directory in a different file-system than that of the hosting
server, the cache file piling up may take time and result in latency. So with the help of your
system administrator, try to look into this aspect as well.
Useful cache utilities:
If the same lookup SQL is being used in some other lookup, then go for a shared cache or reuse the lookup. Also, if you have a table that doesn't get data updated or inserted often, use the persistent cache, because then consecutive runs of the flow don't have to rebuild the cache and waste time.
Add a comment
46.
OCT
19
A repository is the highest physical entity of a project in PowerCenter.
that schema. It also tells about the target flat file, and in which physical location the file is going to get created.
A transformation is a sub-program that performs a specific task with the input it gets
and returns some output. It can be assumed as a stored procedure in any database.
Typical examples of transformations are Filter, Lookup, Aggregator, Sorter etc.
A set of transformations, that are reusable can be built into something called mapplet.
A mapplet is a set of transformations aligned in a specific order of execution.
As with any other tool or programming language, PowerCenter also allows parameters to be passed to have flexibility built into the flow. Parameters are always passed to PowerCenter as data in flat files, and that file is called the parameter file.
Posted 19th October 2011 by Prafull Dangore
Add a comment
47.
OCT
19
Parameter file format for PowerCenter:
For a workflow parameter, which can be used by any session in the workflow, below is the format in which the parameter file has to be created.
[Folder_name:WF.Workflow_Name]
$$parameter_name1=value
$$parameter_name2=value
For a session parameter, which can be used by that particular session, below is the format in which the parameter file has to be created.
[Folder_name:WF.Workflow_Name:ST.Session_Name]
$$parameter_name1=value
$$parameter_name2=value
3. Parameter handling in the data model:
To have flexibility in maintaining the parameter files, to reduce the overhead for the support team of changing the parameter file every time a parameter value changes, and to ease deployment, all the parameters have to be maintained in Oracle (or any database) tables, and a PowerCenter session is created to generate the parameter file in the required format automatically.
For this, tables are created in the database:
1. FOLDER table will have entries for each folder.
2. WORKFLOWS table will have the list of each workflow, but with a reference to the FOLDERS table to say which folder this workflow is created in.
3. PARAMETERS table will hold all the parameter names irrespective of folder/workflow.
4. PARAMETER_VALUES table will hold the parameter of each session with references to
PARMETERS table for parameter name and WORKFLOWS table for the workflow name. When the
session name is NULL, that means the parameter is a workflow variable which can be used across
all the sessions in the workflow.
To get the actual names because PARAMETER_VALUES table holds only ID columns of workflow
and parameter, we create a view that gets all the names for us in the required format of the
parameter file. Below is the DDL for the view.
c. WORKFLOWS table
ID (NUMBER)
WORKFLOW_NAME (varchar50)
FOLDER_ID (NUMBER) Foreign Key to FOLDER.ID
DESCRIPTION (varchar50)
d. PARAMETERS table
ID (NUMBER)
PARAMETER_NAME (varchar50)
DESCRIPTION (varchar50)
e. PARAMETER_VALUES table
ID (NUMBER)
WF_ID (NUMBER)
PMR_ID (NUMBER)
LOGICAL_NAME (varchar50)
VALUE (varchar50)
SESSION_NAME (varchar50)
LOGICAL_NAME is a normalization initiative in the above parameter logic. For example, in a
mapping if we need to use $$SOURCE_FX as a parameter and also $$SOURCE_TRANS as
another mapping parameter, instead of creating 2 different parameters in the PARAMETERS table,
we create one parameter $$SOURCE. Then FX and TRANS will be two LOGICAL_NAME records of
the PARAMETER_VALUES table.
m_PARAMETER_FILE is the mapping that creates the parameter file in the desired format and the
corresponding session name is s_m_PARAMETER_FILE.
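What s_m_PARAMETER_FILE produces can be sketched as a small generator over rows coming from the view (the sample folder, workflow and parameter names here are made up for illustration):

```python
rows = [  # (folder, workflow, session, parameter, value) rows from the view
    ("FIN", "wf_load", None, "$$SOURCE_FX", "FX_TABLE"),
    ("FIN", "wf_load", "s_m_trans", "$$SOURCE_TRANS", "TRANS_TABLE"),
]

lines = []
for folder, wf, session, param, value in rows:
    if session is None:
        lines.append(f"[{folder}:WF.{wf}]")               # workflow-level header
    else:
        lines.append(f"[{folder}:WF.{wf}:ST.{session}]")  # session-level header
    lines.append(f"{param}={value}")

print("\n".join(lines))
```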
Add a comment
48.
49.
OCT
19
Once this is done, the job is done. When you want to create the file name with a timestamp attached to it, just use a port from an Expression transformation before the target, passing a value in an output port with the expression $$FILE_NAME || to_char(SESSSTARTTIME, 'YYYYMMDDHH24:MISS') || '.csv'.
Please note that $$FILE_NAME is a parameter to the mapping, and I've used SESSSTARTTIME because it will be constant throughout the session run.
If you use SYSDATE instead, it will change during the run: if you have 100s of millions of records and the session runs for an hour, each second a new file will get created.
Please note that a new file gets created with the current value of the port when the port value which
maps to the FileName changes.
We'll come to the mapping again. This mapping generates two files. One is a dummy file with zero
bytes size and the file name is what is given in the Session properties under 'Mappings' tab for target
file name. The other file is the actual file created with the desired file name and data.
Posted 19th October 2011 by Prafull Dangore
Add a comment
50.
OCT
18
my table structure
Solution:
You can get this information from the Informatica repository (metadata) tables.
51.
OCT
18
is
If you are using a Sequence Generator and an Aggregator, then the mapping flow should be like below:

SRC-->SQ-->EXP-->AGG-->
                       JNR-->RTR-->TGT1
           SEQ-------->           -->TGT2
                                  -->TGT3

In the Router: if the sequence value = 1, that record goes to target1; if the sequence value
equals the Aggregator count output, that is the last record, so it goes to target3; all the
remaining records pass to target2.
For SQL queries to get the first, last, and remaining records, try the below.

For the first record:
select * from emp where rownum=1;

For the last record:
select * from (select * from (select empno,ename,sal,job,mgr,rownum rn from emp) order by rn DESC) where rownum=1;

For the remaining records you can use the MINUS operator with the above outputs.
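As a sketch, the MINUS step could look like this (the column list is trimmed for illustration; adapt it to the columns you actually select):

```sql
-- Remaining records = all rows minus the first and the last (by ROWNUM)
SELECT empno, ename FROM emp
MINUS
SELECT empno, ename FROM emp WHERE ROWNUM = 1
MINUS
SELECT empno, ename
FROM (SELECT empno, ename, ROWNUM rn FROM emp ORDER BY rn DESC)
WHERE ROWNUM = 1;
```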
52.
53.
OCT
18
Create a workflow variable $$Datestamp of datetime datatype. In an Assignment task, assign
SYSDATE to that variable, then use $$Datestamp in the email subject; it will send the timestamp
in the subject.
Posted 18th October 2011 by Prafull Dangore
54.
OCT
18
should contain the following output
Solution:
Try the following approach.

SRC-->SQ----------------->
                          JNR-->RTR-->TGT1
          -->AGG--------->           -->TGT2

From the source, pass all the data to an Aggregator and group by the source column, with one
output port count(column). So from the Aggregator you have two ports, and the output of the
Joiner will be like below:

COLUMN,COUNT
A,1
B,2
C,2
D,1

In the Router create two groups, one for unique and another for duplicate rows:
Unique=(count=1)
Duplicate=(count>1)
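The same unique/duplicate split can be sketched in plain SQL (table and column names are placeholders):

```sql
-- Values that occur exactly once (the "unique" group)
SELECT col FROM src GROUP BY col HAVING COUNT(*) = 1;

-- Values that occur more than once (the "duplicate" group)
SELECT col FROM src GROUP BY col HAVING COUNT(*) > 1;
```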
55.
OCT
18
I have 3 tables:

Table 1:
ENAM    HIREDATE
XXX     MAY/25/2009
JJJJ    OCT/12/2010
KKK     JAN/02/2011
HJJH    AUG/12/2012

Table 2:    Table 3:
S-ID        V-ID
OO          DD
OO          DD
OO          DD

Using an Informatica Source Qualifier or other transformations I should be able to club the above
tables in such a way that if HIREDATE > JAN/01/2011 then ENO should select V-ID, and if
HIREDATE < JAN/01/2011 then ENO should select S-ID, making a target table that leaves the other
ID column blank: based on the condition it should have either S-ID or V-ID but not both.

Target layout:
ENO  ENAM  HIREDATE  S-ID  V-ID

Please give me the best advice for this situation.
Solution:
Better do it in the Source Qualifier SQL query with a CASE statement:

select a.eno, a.enam, a.hiredate,
       CASE WHEN a.hiredate < TO_DATE('01/01/2011','MM/DD/YYYY')
            THEN b.s_id
            ELSE c.v_id
       END
from table1 a, table2 b, table3 c
where a.eno = b.eno
and b.eno = c.eno;

OR

You can use lookups: the second table can be used in one lookup and the third table in another.
In an expression:
s_id = IIF(HIREDATE < TO_DATE('01/01/2011','MM/DD/YYYY'), lkp_2nd_tbl, NULL)
v_id = IIF(HIREDATE > TO_DATE('01/01/2011','MM/DD/YYYY'), lkp_3rd_table, NULL)
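If the target really must carry both S-ID and V-ID columns with exactly one populated, the Source Qualifier override could use two CASE expressions instead of one. This is a sketch; table, column, and date-format names are assumptions:

```sql
SELECT a.eno, a.enam, a.hiredate,
       CASE WHEN a.hiredate <  TO_DATE('01/01/2011','MM/DD/YYYY')
            THEN b.s_id ELSE NULL END AS s_id,
       CASE WHEN a.hiredate >= TO_DATE('01/01/2011','MM/DD/YYYY')
            THEN c.v_id ELSE NULL END AS v_id
FROM   table1 a, table2 b, table3 c
WHERE  a.eno = b.eno
AND    b.eno = c.eno;
```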
56.
57.
OCT
18
I have 2 ports going through a dynamic lookup, and then to a router. In the router it is a
simple
case of inserting new target rows (NewRowLookup=1) or rejecting existing rows
(NewRowLookup=0).
However, when I run the session I'm getting the error:
"CMN_1650 A duplicate row was attempted to be inserted into a dynamic lookup cache
Dynamic lookup error. The dynamic lookup cache only supports unique condition keys."
I thought that I was bringing through duplicate values so I put a distinct on the SQ.
There is also a not-null filter on both ports.
However, whilst investigating the initial error that is logged for a specific pair of values
from the source, there is only 1 set of them (no duplicates). The pair exists on the target
so surely should just return from the dynamic lookup newrowlookup=0.
Is this some kind of persistent data in the cache that is causing this to think that it is
duplicate data? I haven't got the persistent cache or recache from database flags
checked.
Solution:
This occurs when the table on which the lookup is built has duplicate rows. Since a dynamic
cached lookup cannot be created with duplicate rows, the session fails with this error.
Make sure there are no duplicate rows in the table before starting the session, or do a
SELECT DISTINCT in the lookup cache SQL.
OR
Make sure the data types of the source and lookup fields match and extra spaces are trimmed; it
looks like the match is failing between source and lookup, so the lookup is trying to insert the
row into the cache even though it is already present.
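To confirm whether the lookup table really contains duplicate condition keys, a check along these lines helps (table and key column names are placeholders):

```sql
-- Duplicate condition-key combinations in the lookup source table
SELECT key_col1, key_col2, COUNT(*) AS cnt
FROM   lookup_table
GROUP  BY key_col1, key_col2
HAVING COUNT(*) > 1;
```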
58.
OCT
18
target has rows like this:

Col1  Col2  Col3
----  ----  ------
1     A     value1
2     A     value2
3     B     value3
I want to delete the record from the target which has the combination (Col1="2" and Col2="A"). Will
linking the fields Col1 and Col2 from the Update Strategy transformation to the Target serve the
purpose?
Solution:
Define both columns as primary keys in the target definition and link only Col1 and Col2 in the
mapping. This will serve your purpose.
BTW, if you only perform deletes, then an Update Strategy is not required at all.
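Under the hood, the session then issues the equivalent of the following (the target table name is a placeholder):

```sql
-- Delete the row(s) matching the key combination from the target
DELETE FROM target_table
WHERE  col1 = 2
AND    col2 = 'A';
```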
Posted 18th October 2011 by Prafull Dangore
59.
OCT
18
Solution:
In slowly growing targets (delta loads) the target is loaded incrementally, so you need to know
whether a particular record already exists in the target.
A Lookup is used to cache the target records and compare the incoming records with the records in
the target. If an incoming record is new, it is inserted into the target; otherwise it is not.
An Expression is used to flag each record as new or existing. A new record is flagged as 'I', in
the sense of Insert.
In Slowly Changing Dimensions (SCD), the history of the dimension needs to be maintained. Hence,
if a record exists in the target and needs to be updated, it is flagged as 'U', in the sense of
Update.
60.
61.
OCT
17
Please advise.
Solutions:
Hello,
If you make a few changes to your mapping you can achieve it:
1. First, delete the records not used in the last 10 days in pre-SQL, instead of deleting them
at the end.
2. Load all the data, old and new, into a temp table.
3. Now load all the data into the target table with a Sequence Generator; in the Sequence
Generator, change the settings so that its value resets to 0 for every new run.
OR
62.
OCT
17
Surrogate Key
Scenario:
What is a surrogate key and where do you use it?
Solution:
A surrogate key is a substitute for the natural primary key.
It is just a unique identifier or number for each row that can be used as the primary key of the table.
The only requirement for a surrogate primary key is that it is unique for each row in the table.
Data warehouses typically use a surrogate key (also known as an artificial or identity key) for the
dimension tables' primary keys. It can be generated with an Informatica Sequence Generator, an
Oracle sequence, or SQL Server Identity values.
It is useful because the natural primary key (i.e. Customer Number in the Customer table) can change,
and this makes updates more difficult.
Some tables have columns such as AIRPORT_NAME or CITY_NAME which are stated as the
primary keys (according to the business users), but not only can these change, indexing on a
numerical value is probably better, so you could consider creating a surrogate key called, say,
AIRPORT_ID. This would be internal to the system, and as far as the client is concerned you may
display only the AIRPORT_NAME.
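A minimal Oracle sketch of generating such a key with a sequence (all names here are placeholders):

```sql
-- AIRPORT_ID is the internal surrogate key; AIRPORT_NAME is what users see.
CREATE SEQUENCE airport_seq START WITH 1 INCREMENT BY 1;

CREATE TABLE airport_dim (
  airport_id   NUMBER PRIMARY KEY,
  airport_name VARCHAR2(100)
);

INSERT INTO airport_dim (airport_id, airport_name)
VALUES (airport_seq.NEXTVAL, 'Heathrow');
```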
63.
OCT
14
unique constraint (INF_PRACTICE1.SYS_C00163872) violated errors occurred:

Database driver error.
Function Name : Execute
SQL Stmt : INSERT INTO D_CLAIM_INJURY_SAMPLEE(CK_SUM,DM_ROW_PRCS_DT,DM_ROW_PRCS_UPDT_DT,CLAIM_INJRY_SID,DM_CRRNT_ROW_IND,INCDT_ID,ENAME,JOB,FIRSTNAME,LASTNAME) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
Database driver error...
Function Name : Execute Multiple
SQL Stmt : INSERT INTO D_CLAIM_INJURY_SAMPLEE(CK_SUM,DM_ROW_PRCS_DT,DM_ROW_PRCS_UPDT_DT,CLAIM_INJRY_SID,DM_CRRNT_ROW_IND,INCDT_ID,ENAME,JOB,FIRSTNAME,LASTNAME) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
Solution:
Check the definition of the unique index columns, then run the query below on the source to find
the duplicate rows.

If the index definition is like:
create index on targettable(col1,col2,col3);

then run:
select col1,col2,col3,count(1)
from sourcetable
group by col1,col2,col3
having count(1)>1;

Either delete those records from the source or use an Aggregator in the Informatica mapping.
Posted 14th October 2011 by Prafull Dangore
64.
65.
OCT
13
66.
OCT
13
there are a few values 1, 2, 3 for ABC.
Then we can have a Filter in the mapping, having the source table with column ABC.
Filter the records with the conditions ABC=1, ABC=2, ABC=3 and load the target tables in three
different mappings.
Create three different sessions and then use a Decision task at the workflow level, as:
If TgtSuccessRows = 1 for session1, then run worklet1.
If TgtSuccessRows = 2 for session2, then run worklet2.
If TgtSuccessRows = 2 for session3, then run worklet3.
Posted 13th October 2011 by Prafull Dangore
67.
OCT
13
Under the Repository database, there must be folders that you have created. Open that folder,
right-click, and go to Versioning -> Find Checkouts -> All Users.
This will show the history of changes made and saved on that particular code. It will show you details
such as last check-out, last-saved time, saved-by, etc.
Posted 13th October 2011 by Prafull Dangore
68.
69.
OCT
13
70.
OCT
13
Compare the Total Number of Rows in a Flat File with the Footer of
the Flat File
Scenario : I have a requirement where I need to find the number of rows in the flat file and then
compare the row count with the row count mentioned in the footer of the flat file.
Solution :
Using Infomratica:
I believe you can identify the data records from the trailer record. You can use the following
method to identify the count of the records:
1. Use a Router to create two data streams: one for the data records and the other for the
trailer record.
2. Use an Aggregator (without defining any group key) and use the count() aggregate function;
now both data streams will have a single record.
3. Use a Joiner to get one record from these two data streams; it will give you the two
different count ports in a single record.
4. Use an Expression to compare the counts and proceed as per your rules.
Using UNIX :
If you are on Unix, then go for a couple of lines of script or commands:
Count the number of lines in the file with wc -l. Assign the count minus the footer record to a
variable: x = (wc -l) - 1.
Grep the number of records from the footer using grep/sed and assign it to variable y.
Now compare these two variables and take the decision.
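A minimal shell sketch of the same check (the footer format "TRAILER|&lt;count&gt;" and the file name are assumptions):

```shell
# Build a sample file: two data rows plus a footer that claims 2 rows.
printf 'row1\nrow2\nTRAILER|2\n' > data.txt

total=$(wc -l < data.txt)                # all lines, including the footer
x=$((total - 1))                         # data rows only
y=$(tail -n 1 data.txt | cut -d'|' -f2)  # count claimed by the footer

if [ "$x" -eq "$y" ]; then
  echo "counts match"
else
  echo "counts differ"
fi
```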
71.
OCT
11
Use Indirect option in session properties and give file_list name. In the file list you can have actual
file names with complete path.
Ex: In Session Properties SourceFileType --- Indirect and File Name ABC.txt
ABC.txt will contain all the input file names with complete path.
like
/home/.../...filename.dat
/home/.../...filename1.dat
72.
73.
OCT
11
74.
OCT
11
Solution 1: Edit the source definition by removing the CurrentlyProcessedFileName port and adding
it again; this should solve your problem.
Posted 11th October 2011 by Prafull Dangore
75.
APR
19
END IF;
EXCEPTION
WHEN INVALID_NUMBER THEN
DBMS_OUTPUT.PUT_LINE('HANDLING INVALID INPUT BY ROLLING BACK.');
ROLLBACK;
END;
/
How PL/SQL Exceptions Propagate
When an exception is raised, if PL/SQL cannot find a handler for it in the current block or subprogram, the
exception propagates. That is, the exception reproduces itself in successive enclosing blocks until a handler is
found or there are no more blocks to search. If no handler is found, PL/SQL returns an unhandled exception
error to the host environment.
Exceptions cannot propagate across remote procedure calls done through database links. A PL/SQL block
cannot catch an exception raised by a remote subprogram. For a workaround, see "Defining Your Own Error
Messages: Procedure RAISE_APPLICATION_ERROR".
Figure 10-1, Figure 10-2, and Figure 10-3 illustrate the basic propagation rules.
Figure 10-1 Propagation Rules: Example 1
EXCEPTION
WHEN exception1 THEN -- handler for exception1
sequence_of_statements1
WHEN exception2 THEN -- another handler for exception2
sequence_of_statements2
...
WHEN OTHERS THEN -- optional handler for all other errors
sequence_of_statements3
END;
To catch raised exceptions, you write exception handlers. Each handler consists of a WHEN clause, which
specifies an exception, followed by a sequence of statements to be executed when that exception is raised.
These statements complete execution of the block or subprogram; control does not return to where the
exception was raised. In other words, you cannot resume processing where you left off.
The optional OTHERS exception handler, which is always the last handler in a block or subprogram, acts as
the handler for all exceptions not named specifically. Thus, a block or subprogram can have only
one OTHERS handler. Use of the OTHERS handler guarantees that no exception will go unhandled.
If you want two or more exceptions to execute the same sequence of statements, list the exception names in
the WHEN clause, separating them by the keyword OR, as follows:
EXCEPTION
WHEN over_limit OR under_limit OR VALUE_ERROR THEN
-- handle the error
If any of the exceptions in the list is raised, the associated sequence of statements is executed. The
keyword OTHERS cannot appear in the list of exception names; it must appear by itself. You can have any
number of exception handlers, and each handler can associate a list of exceptions with a sequence of
statements. However, an exception name can appear only once in the exception-handling part of a PL/SQL
block or subprogram.
The usual scoping rules for PL/SQL variables apply, so you can reference local and global variables in an
exception handler. However, when an exception is raised inside a cursor FOR loop, the cursor is closed
implicitly before the handler is invoked. Therefore, the values of explicit cursor attributes are not available in
the handler.
Exceptions Raised in Declarations
Exceptions can be raised in declarations by faulty initialization expressions. For example, the following
declaration raises an exception because the constant credit_limit cannot store numbers larger than 999:
Example 10-10 Raising an Exception in a Declaration
DECLARE
credit_limit CONSTANT NUMBER(3) := 5000; -- raises an error
BEGIN
NULL;
EXCEPTION
WHEN OTHERS THEN
-- Cannot catch the exception. This handler is never called.
DBMS_OUTPUT.PUT_LINE('Can''t handle an exception in a declaration.');
END;
/
Handlers in the current block cannot catch the raised exception because an exception raised in a declaration
propagates immediately to the enclosing block.
Handling Exceptions Raised in Handlers
When an exception occurs within an exception handler, that same handler cannot catch the exception. An
exception raised inside a handler propagates immediately to the enclosing block, which is searched to find a
handler for this new exception. From there on, the exception propagates normally. For example:
EXCEPTION
WHEN INVALID_NUMBER THEN
INSERT INTO ... -- might raise DUP_VAL_ON_INDEX
WHEN DUP_VAL_ON_INDEX THEN ... -- cannot catch the exception
END;
Branching to or from an Exception Handler
A GOTO statement can branch from an exception handler into an enclosing block.
A GOTO statement cannot branch into an exception handler, or from an exception handler into the current
block.
Retrieving the Error Code and Error Message: SQLCODE and SQLERRM
In an exception handler, you can use the built-in functions SQLCODE and SQLERRM to find out which error
occurred and to get the associated error message. For internal exceptions, SQLCODE returns the number of
the Oracle error. The number that SQLCODE returns is negative unless the Oracle error is no data found, in
which case SQLCODE returns +100. SQLERRM returns the corresponding error message. The message begins
with the Oracle error code.
For user-defined exceptions, SQLCODE returns +1 and SQLERRM returns the message User-Defined
Exception unless you used the pragma EXCEPTION_INIT to associate the exception name with an Oracle error
number, in which case SQLCODE returns that error number and SQLERRM returns the corresponding error
message. The maximum length of an Oracle error message is 512 characters including the error code, nested
messages, and message inserts such as table and column names.
If no exception has been raised, SQLCODE returns zero and SQLERRM returns the message: ORA-0000:
normal, successful completion.
You can pass an error number to SQLERRM, in which case SQLERRM returns the message associated with
that error number. Make sure you pass negative error numbers to SQLERRM.
Passing a positive number to SQLERRM always returns the message user-defined exception unless you
pass +100, in which case SQLERRM returns the message no data found. Passing a zero to SQLERRM always
returns the message normal, successful completion.
You cannot use SQLCODE or SQLERRM directly in a SQL statement. Instead, you must assign their values to
local variables, then use the variables in the SQL statement, as shown in Example 10-11.
Example 10-11 Displaying SQLCODE and SQLERRM
CREATE TABLE errors (code NUMBER, message VARCHAR2(64), happened TIMESTAMP);
DECLARE
name employees.last_name%TYPE;
v_code NUMBER;
v_errm VARCHAR2(64);
BEGIN
SELECT last_name INTO name FROM employees WHERE employee_id = -1;
EXCEPTION
WHEN OTHERS THEN
v_code := SQLCODE;
v_errm := SUBSTR(SQLERRM, 1 , 64);
DBMS_OUTPUT.PUT_LINE('Error code ' || v_code || ': ' || v_errm);
-- Normally we would call another procedure, declared with PRAGMA
-- AUTONOMOUS_TRANSACTION, to insert information about errors.
END;
/
To continue executing past an exception, place the statement that might raise it inside a sub-block
with its own handler to catch the exception. When the sub-block ends, the enclosing block continues
to execute at the point where the sub-block ends, as shown in Example 10-12.
Example 10-12 Continuing After an Exception
DECLARE
sal_calc NUMBER(8,2);
BEGIN
INSERT INTO employees_temp VALUES (303, 2500, 0);
BEGIN -- sub-block begins
SELECT salary / commission_pct INTO sal_calc FROM employees_temp
WHERE employee_id = 301;
EXCEPTION
WHEN ZERO_DIVIDE THEN
sal_calc := 2500;
END; -- sub-block ends
INSERT INTO employees_temp VALUES (304, sal_calc/100, .1);
EXCEPTION
WHEN ZERO_DIVIDE THEN
NULL;
END;
/
In this example, if the SELECT INTO statement raises a ZERO_DIVIDE exception, the local handler catches it
and sets sal_calc to 2500. Execution of the handler is complete, so the sub-block terminates, and execution
continues with the INSERT statement. See also Example 5-38, "Collection Exceptions".
You can also perform a sequence of DML operations where some might fail, and process the exceptions only
after the entire operation is complete, as described in "Handling FORALL Exceptions with the
%BULK_EXCEPTIONS Attribute".
Retrying a Transaction
After an exception is raised, rather than abandon your transaction, you might want to retry it.
The technique is:
1. Encase the transaction in a sub-block.
2. Place the sub-block inside a loop that repeats the transaction.
3. Before starting the transaction, mark a savepoint. If the transaction succeeds, commit, then
exit from the loop. If the transaction fails, control transfers to the exception handler, where
you roll back to the savepoint, undoing any changes, then try to fix the problem.
In Example 10-13, the INSERT statement might raise an exception because of a duplicate value in a unique
column. In that case, we change the value that needs to be unique and continue with the next loop iteration. If
the INSERT succeeds, we exit from the loop immediately. With this technique, you should use
a FOR or WHILE loop to limit the number of attempts.
Example 10-13 Retrying a Transaction After an Exception
CREATE TABLE results ( res_name VARCHAR(20), res_answer VARCHAR2(3) );
CREATE UNIQUE INDEX res_name_ix ON results (res_name);
INSERT INTO results VALUES ('SMYTHE', 'YES');
INSERT INTO results VALUES ('JONES', 'NO');
DECLARE
name VARCHAR2(20) := 'SMYTHE';
answer VARCHAR2(3) := 'NO';
suffix NUMBER := 1;
BEGIN
FOR i IN 1..5 LOOP -- try 5 times
PL/SQL warning messages are divided into categories, so that you can suppress or display groups of similar
warnings during compilation. The categories are:
SEVERE: Messages for conditions that might cause unexpected behavior or wrong results, such as aliasing
problems with parameters.
PERFORMANCE: Messages for conditions that might cause performance problems, such as passing
a VARCHAR2 value to a NUMBER column in an INSERT statement.
INFORMATIONAL: Messages for conditions that do not have an effect on performance or correctness, but
that you might want to change to make the code more maintainable, such as unreachable code that can never
be executed.
The keyword All is a shorthand way to refer to all warning messages.
You can also treat particular messages as errors instead of warnings. For example, if you know that the
warning message PLW-05003 represents a serious problem in your code, including 'ERROR:05003' in
the PLSQL_WARNINGS setting makes that condition trigger an error message (PLS_05003) instead of a
warning message. An error message causes the compilation to fail.
Controlling PL/SQL Warning Messages
To let the database issue warning messages during PL/SQL compilation, you set the initialization
parameter PLSQL_WARNINGS. You can enable and disable entire categories of warnings
(ALL, SEVERE, INFORMATIONAL, PERFORMANCE), enable and disable specific message numbers, and make
the database treat certain warnings as compilation errors so that those conditions must be corrected.
This parameter can be set at the system level or the session level. You can also set it for a single compilation
by including it as part of the ALTER PROCEDURE ... COMPILE statement. You might turn on all warnings
during development, turn off all warnings when deploying for production, or turn on some warnings when
working on a particular subprogram where you are concerned with some aspect, such as unnecessary code or
performance.
Example 10-15 Controlling the Display of PL/SQL Warnings
-- To focus on one aspect
ALTER SESSION SET PLSQL_WARNINGS='ENABLE:PERFORMANCE';
-- Recompile with extra checking
ALTER PROCEDURE loc_var COMPILE PLSQL_WARNINGS='ENABLE:PERFORMANCE'
REUSE SETTINGS;
-- To turn off all warnings
ALTER SESSION SET PLSQL_WARNINGS='DISABLE:ALL';
-- Display 'severe' warnings, don't want 'performance' warnings, and
-- want PLW-06002 warnings to produce errors that halt compilation
ALTER SESSION SET PLSQL_WARNINGS='ENABLE:SEVERE', 'DISABLE:PERFORMANCE',
'ERROR:06002';
-- For debugging during development
ALTER SESSION SET PLSQL_WARNINGS='ENABLE:ALL';
Warning messages can be issued during compilation of PL/SQL subprograms; anonymous blocks do not
produce any warnings.
The settings for the PLSQL_WARNINGS parameter are stored along with each compiled subprogram. If you
recompile the subprogram with a CREATE OR REPLACE statement, the current settings for that session are
used. If you recompile the subprogram with an ALTER ...COMPILE statement, the current session setting
might be used, or the original setting that was stored with the subprogram, depending on whether you
include the REUSE SETTINGS clause in the statement. For more information,
see ALTER FUNCTION, ALTER PACKAGE, and ALTER PROCEDURE in Oracle Database SQL Reference.
To see any warnings generated during compilation, you use the SQL*Plus SHOW ERRORS command or query
the USER_ERRORS data dictionary view. PL/SQL warning messages all use the prefix PLW.
76.
77.
APR
19
What are the different types of pragma and where can we use them?
========================================================================
Pragma is a keyword in Oracle PL/SQL that is used to provide an instruction to the compiler.
The syntax for a pragma is as follows:
PRAGMA <instruction>;
The instruction is a statement that provides some instructions to the compiler.
Pragmas are defined in the declarative section in PL/SQL.
The following pragmas are available:
AUTONOMOUS_TRANSACTION:
Prior to Oracle 8.1, each Oracle session in PL/SQL could have at most one active transaction at a given time; in
other words, changes were all or nothing. Oracle8i PL/SQL addresses that shortcoming with the
AUTONOMOUS_TRANSACTION pragma. This pragma lets a PL/SQL block between a BEGIN and END statement
perform an autonomous transaction without affecting the enclosing transaction. For instance, if a
rollback or commit needs to take place within the block without affecting the transaction outside the block,
this type of pragma can be used.
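As a brief sketch of the pragma in use (the app_log table and procedure name are hypothetical), a logging procedure can commit its own insert independently of the caller's transaction:

```sql
CREATE OR REPLACE PROCEDURE log_msg (p_msg VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- runs in its own transaction
BEGIN
  INSERT INTO app_log (msg, logged_at) VALUES (p_msg, SYSTIMESTAMP);
  COMMIT;  -- commits only this autonomous transaction,
           -- not the caller's pending work
END;
/
```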
EXCEPTION_INIT:
The most commonly used pragma, this is used to bind a user defined exception to a particular error number.
For example:
DECLARE
  I_GIVE_UP EXCEPTION;
  PRAGMA EXCEPTION_INIT(I_give_up, -20000);
BEGIN
  ...
EXCEPTION
  WHEN I_GIVE_UP THEN
    -- do something
END;
RESTRICT_REFERENCES:
Defines the purity level of a packaged program. This is not required starting with Oracle8i.
Prior to Oracle8i if you were to invoke a function within a package specification from a SQL statement, you
would have to provide a RESTRICT_REFERENCE directive to the PL/SQL engine for that function.
A calling application can map specific error numbers returned by raise_application_error to
exceptions of its own, as the following Pro*C example shows:
EXEC SQL EXECUTE
/* Execute embedded PL/SQL block using host
variables v_emp_id and v_amount, which were
assigned values in the host environment. */
DECLARE
null_salary EXCEPTION;
/* Map error number returned by raise_application_error
to user-defined exception. */
PRAGMA EXCEPTION_INIT(null_salary, -20101);
BEGIN
raise_salary(:v_emp_id, :v_amount);
EXCEPTION
WHEN null_salary THEN
INSERT INTO emp_audit VALUES (:v_emp_id, ...);
END;
END-EXEC;
This technique allows the calling application to handle error conditions in specific exception handlers.
Redeclaring Predefined Exceptions
Remember, PL/SQL declares predefined exceptions globally in package STANDARD, so you need not declare
them yourself. Redeclaring predefined exceptions is error prone because your local declaration overrides the
global declaration. For example, if you declare an exception named invalid_number and then PL/SQL raises
the predefined exception INVALID_NUMBER internally, a handler written for INVALID_NUMBER will not
catch the internal exception. In such cases, you must use dot notation to specify the predefined exception, as
follows:
EXCEPTION
WHEN invalid_number OR STANDARD.INVALID_NUMBER THEN
-- handle the error
END;
===========================================
=============================
Posted 19th April 2011 by Prafull Dangore
78.
APR
13
TIMESTAMP datatype
One of the main problems with the DATE datatype was its inability to be granular enough to determine
which event might have happened first in relation to another event. Oracle has expanded on the DATE
datatype and has given us the TIMESTAMP datatype which stores all the information that the DATE datatype
stores, but also includes fractional seconds. If you want to convert a DATE datatype to a TIMESTAMP datatype
format, just use the CAST function as I do in Listing C. As you can see, there is a fractional seconds part of
'.000000' on the end of this conversion. This is only because when converting from the DATE datatype that
does not have the fractional seconds it defaults to zeros and the display is defaulted to the default timestamp
format (NLS_TIMESTAMP_FORMAT). If you are moving a DATE datatype column from one table to a
TIMESTAMP datatype column of another table, all you need to do is a straight INSERT INTO ... SELECT FROM and
Oracle will do the conversion for you. Look at Listing D for a formatting of the new TIMESTAMP datatype
where everything is the same as formatting the DATE datatype as we did in Listing A. Beware: while the
TO_CHAR function works with both datatypes, the TRUNC function will not work with a datatype of
TIMESTAMP. This is a clear indication that the TIMESTAMP datatype should be used explicitly for dates
and times where a difference in time is of utmost importance, such that Oracle won't even let you compare
like values. If you wanted to show the fractional seconds within a TIMESTAMP datatype, look at Listing E. In
Listing E, we are only showing 3 place holders for the fractional seconds.
LISTING C:
Convert DATE datatype to TIMESTAMP datatype
SQL> SELECT CAST(date1 AS TIMESTAMP) "Date" FROM t;
Date
---------------------------------
20-JUN-03 04.55.14.000000 PM
26-JUN-03 11.16.36.000000 AM
LISTING D:
Formatting of the TIMESTAMP datatype
1 SELECT TO_CHAR(time1,'MM/DD/YYYY HH24:MI:SS') "Date" FROM date_table
Date
-------------------
06/20/2003 16:55:14
06/26/2003 11:16:36
LISTING E:
Formatting of the TIMESTAMP datatype with fractional seconds
1 SELECT TO_CHAR(time1,'MM/DD/YYYY HH24:MI:SS:FF3') "Date" FROM date_table
Date
-----------------------
06/20/2003 16:55:14:000
06/26/2003 11:16:36:000
Calculating the time difference between two TIMESTAMP datatypes is much easier than with the DATE
datatype. Look at what happens when you just do straight subtraction of the columns in Listing F.
As you can see, the results are much easier to recognize: 17 days, 18 hours, 27 minutes, and 43
seconds for the first row of output. This means no more worries about how many seconds are in a
day, and all those cumbersome calculations. The calculations for getting the weeks, days, hours,
minutes, and seconds become a matter of picking out the number by using the SUBSTR function, as
can be seen in Listing G.
LISTING F:
Straight subtraction of two TIMESTAMP datatypes
1 SELECT time1, time2, (time2-time1)
2* FROM date_table
TIME1                      TIME2                      (TIME2-TIME1)
-------------------------- -------------------------- ------------------------
06/20/2003:16:55:14:000000 07/08/2003:11:22:57:000000 +000000017 18:27:43.000000
06/26/2003:11:16:36:000000 07/08/2003:11:22:57:000000 +000000012 00:06:21.000000
LISTING G:
Determine the interval breakdown between two dates for a TIMESTAMP datatype
SELECT time1,
       time2,
       substr((time2-time1),instr((time2-time1),' ')+7,2) seconds,
       substr((time2-time1),instr((time2-time1),' ')+4,2) minutes,
       substr((time2-time1),instr((time2-time1),' ')+1,2) hours,
       trunc(to_number(substr((time2-time1),1,instr(time2-time1,' ')))) days,
       trunc(to_number(substr((time2-time1),1,instr(time2-time1,' ')))/7) weeks
FROM date_table;

TIME1                      TIME2                      SECONDS MINUTES HOURS DAYS WEEKS
-------------------------- -------------------------- ------- ------- ----- ---- -----
06/20/2003:16:55:14:000000 07/08/2003:11:22:57:000000 43      27      18    17   2
06/26/2003:11:16:36:000000 07/08/2003:11:22:57:000000 21      06      00    12   1
System Date and Time
In order to get the system date and time returned in a DATE datatype, you can use the SYSDATE function such
as :
SQL> SELECT SYSDATE FROM DUAL;
In order to get the system date and time returned in a TIMESTAMP datatype, you can use the SYSTIMESTAMP
function such as:
SQL> SELECT SYSTIMESTAMP FROM DUAL;
You can set the initialization parameter FIXED_DATE to return a constant value for what is returned from the
SYSDATE function. This is a great tool for testing date and time sensitive code. Just beware that this
parameter has no effect on the SYSTIMESTAMP function. This can be seen in Listing H.
LISTING H:
Setting FIXED_DATE and effects on SYSDATE and SYSTIMESTAMP
SQL> ALTER SYSTEM SET fixed_date = '2003-01-01-10:00:00';
System altered.
SQL> select sysdate from dual;
SYSDATE
---------
01-JAN-03
SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------
09-JUL-03 11.05.02.519000 AM -06:00
When working with date and time, the options are clear. You have at your disposal the DATE and TIMESTAMP
datatypes. Just be aware, while there are similarities, there are also differences that could create havoc if you
try to convert to the more powerful TIMESTAMP datatype. Each of the two has strengths in simplicity and
granularity. Choose wisely.
===========================================
=======================
Posted 13th April 2011 by Prafull Dangore
79.
APR
13
===========================================================================
80.
81.
APR
13
We often come across situations where the Data Transformation Manager (DTM) takes more time to read from a Source or to write to a Target. The following standards/guidelines can improve overall performance.
Use a single Source Qualifier if the Source tables reside in the same schema.
Make use of the Source Qualifier filter property if the Source type is Relational.
If subsequent sessions do a lookup on the same table, use a persistent cache in the first session. Data remains in the cache and is available for the subsequent sessions to use.
Use flags as integers, as integer comparison is faster than string comparison.
Use the table with the lesser number of records as the master table for joins.
While reading from flat files, define the appropriate data type instead of reading as String and converting.
Connect only the ports that are required by subsequent transformations; otherwise check whether these ports can be removed.
Suppress the generated ORDER BY by appending a comment (--) at the end of the SQL override in Lookup transformations.
Minimize the number of Update Strategy transformations.
Group by simple columns in transformations such as Aggregator and Source Qualifier.
Use a Router transformation in place of multiple Filter transformations.
Turn off verbose logging while moving the mappings to UAT/Production environments.
For large volumes of data, drop indexes before loading and recreate them after the load.
For large volumes of records, use Bulk load and increase the commit interval to a higher value.
Set Commit on Target in the sessions.
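The drop/recreate index guideline above can be sketched in Oracle SQL; the table and index names here are illustrative, not from the original post:

```sql
-- Drop the index before the bulk load so each insert skips index maintenance.
DROP INDEX emp_name_idx;

-- ... run the bulk-load session here ...

-- Recreate the index in a single pass once the load completes.
CREATE INDEX emp_name_idx ON employees (emp_name);
```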
===========================================================================
The process of pushing transformation logic to the source or target database by Informatica Integration service is known
as Pushdown Optimization. When a session is configured to run for Pushdown Optimization, the Integration Service
translates the transformation logic into SQL queries and sends the SQL queries to the database. The Source or Target
Database executes the SQL queries to process the transformations.
How does Pushdown Optimization (PO) Works?
The Integration Service generates SQL statements when native database driver is used. In case of ODBC drivers, the
Integration Service cannot detect the database type and generates ANSI SQL. The Integration Service can usually push
more transformation logic to a database if a native driver is used, instead of an ODBC driver.
For any SQL override, the Integration Service creates a view (PM_*) in the database while executing the session task and drops the view after the task completes. Similarly, it also creates sequences (PM_*) in the database.
The database schema (SQ connection, LKP connection) should have the CREATE VIEW / CREATE SEQUENCE privilege, else the session will fail.
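A minimal sketch of granting those privileges, assuming a hypothetical schema name infa_user:

```sql
-- Both privileges are needed because the Integration Service creates
-- PM_* views (for SQL overrides) and PM_* sequences at session run time.
GRANT CREATE VIEW TO infa_user;
GRANT CREATE SEQUENCE TO infa_user;
```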
Few benefits of using PO
No memory or disk space is required to manage the cache on the Informatica server for the Aggregator, Lookup, Sorter and Joiner transformations, as the transformation logic is pushed to the database.
The SQL generated by the Integration Service can be viewed before running the session through the Pushdown Optimization Viewer, making it easier to debug.
When inserting into targets, the Integration Service does row-by-row processing using bind variables (only a soft parse, so little parsing time). In case of Pushdown Optimization, the statement is executed only once.
Without Using Pushdown optimization:
INSERT INTO EMPLOYEES(ID_EMPLOYEE, EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL,
PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT,
MANAGER_ID,MANAGER_NAME,
DEPARTMENT_ID) VALUES (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13) executes 7012352 times
With Using Pushdown optimization
INSERT INTO EMPLOYEES (ID_EMPLOYEE, EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER,
  HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, MANAGER_NAME, DEPARTMENT_ID)
SELECT CAST(PM_SJEAIJTJRNWT45X3OO5ZZLJYJRY.NEXTVAL AS NUMBER(15, 2)),
  EMPLOYEES_SRC.EMPLOYEE_ID, EMPLOYEES_SRC.FIRST_NAME, EMPLOYEES_SRC.LAST_NAME,
  CAST((EMPLOYEES_SRC.EMAIL || '@gmail.com') AS VARCHAR2(25)),
  EMPLOYEES_SRC.PHONE_NUMBER, CAST(EMPLOYEES_SRC.HIRE_DATE AS date),
  EMPLOYEES_SRC.JOB_ID, EMPLOYEES_SRC.SALARY, EMPLOYEES_SRC.COMMISSION_PCT,
  EMPLOYEES_SRC.MANAGER_ID, NULL, EMPLOYEES_SRC.DEPARTMENT_ID
FROM (EMPLOYEES_SRC LEFT OUTER JOIN EMPLOYEES PM_Alkp_emp_mgr_1
  ON (PM_Alkp_emp_mgr_1.EMPLOYEE_ID = EMPLOYEES_SRC.MANAGER_ID))
WHERE ((EMPLOYEES_SRC.MANAGER_ID =
  (SELECT PM_Alkp_emp_mgr_1.EMPLOYEE_ID
   FROM EMPLOYEES PM_Alkp_emp_mgr_1
   WHERE (PM_Alkp_emp_mgr_1.EMPLOYEE_ID = EMPLOYEES_SRC.MANAGER_ID))) OR (0=0))
executes 1 time
Things to note when using PO
There are cases where the Integration Service and Pushdown Optimization can produce different result sets for the same
transformation logic. This can happen during data type conversion, handling null values, case sensitivity, sequence
generation, and sorting of data.
The database and Integration Service produce different output when the following settings and conversions are different:
Nulls treated as the highest or lowest value: While sorting data, the Integration Service can treat null values as the lowest, but the database treats null values as the highest value in the sort order.
SYSDATE built-in variable: Built-in Variable SYSDATE in the Integration Service returns the current date and time for
the node running the service process. However, in the database, the SYSDATE returns the current date and time for the
machine hosting the database. If the time zone of the machine hosting the database is not the same as the time zone of the
machine running the Integration Service process, the results can vary.
Date Conversion: The Integration Service converts all dates before pushing transformations to the database and if the
format is not supported by the database, the session fails.
Logging: When the Integration Service pushes transformation logic to the database, it cannot trace all the events that
occur inside the database server. The statistics the Integration Service can trace depend on the type of pushdown
optimization. When the Integration Service runs a session configured for full pushdown optimization and an error occurs,
the database handles the errors. When the database handles errors, the Integration Service does not write reject rows to
the reject file.
=================================================================================
The Informatica repository OPB tables can give you the source tables along with the mappings and folders they are used in, via an SQL query.
SQL query:
select OPB_SUBJECT.SUBJ_NAME, OPB_MAPPING.MAPPING_NAME,OPB_SRC.source_name
from opb_mapping, opb_subject, opb_src, opb_widget_inst
where opb_subject.SUBJ_ID = opb_mapping.SUBJECT_ID
and OPB_MAPPING.MAPPING_ID = OPB_WIDGET_INST.MAPPING_ID
and OPB_WIDGET_Inst.WIDGET_ID = OPB_SRC.SRC_ID
and OPB_widget_inst.widget_type=1;
Posted 13th April 2011 by Prafull Dangore
#PC766#
#PC921#
#PC1020
#PC1071
#PC1092
#PC1221
I want to remove those special characters and load only the following into the target:
Prod_Code
---------
PC9
PC98
PC99
PC125
PC156
Ans:
In an Expression transformation, use the REPLACECHR function and replace # with NULL, e.g. REPLACECHR(0, PROD_CODE, '#', NULL).
REPLACECHR
Availability:
Designer
Workflow Manager
Replaces characters in a string with a single character or no character. REPLACECHR searches the input
string for the characters you specify and replaces all occurrences of all characters with the new
character you specify.
Syntax
REPLACECHR( CaseFlag, InputString, OldCharSet, NewChar )
OldCharSet: Required
NewChar: Required
Return Value
String.
Empty string if REPLACECHR removes all characters in InputString.
NULL if InputString is NULL.
InputString if OldCharSet is NULL or empty.
Examples
The following expression removes the double quotes from web log data for each row in the WEBLOG
port:
REPLACECHR( 0, WEBLOG, '"', NULL )

WEBLOG                          RETURN VALUE

The following expression removes multiple characters for each row in the WEBLOG port:

REPLACECHR ( 1, WEBLOG, ']["', NULL )

WEBLOG                          RETURN VALUE
[29/Oct/2001:14:13:50 -0700]    29/Oct/2001:14:13:50 -0700
NULL                            NULL
The following expression changes part of the value of the customer code for each row in the
CUSTOMER_CODE port:
REPLACECHR ( 1, CUSTOMER_CODE, 'A', 'M' )

CUSTOMER_CODE   RETURN VALUE
ABA             MBM
abA             abM
BBC             BBC
ACC             MCC
NULL            NULL
The following expression changes part of the value of the customer code for each row in the
CUSTOMER_CODE port:
REPLACECHR ( 0, CUSTOMER_CODE, 'A', 'M' )

CUSTOMER_CODE   RETURN VALUE
ABA             MBM
abA             MbM
BBC             BBC
ACC             MCC
The following expression removes the character 'A' from the customer code for each row in the
CUSTOMER_CODE port:
REPLACECHR ( 1, CUSTOMER_CODE, 'A', NULL )

CUSTOMER_CODE   RETURN VALUE
BBC             BBC
ACC             CC
AAA             [empty string]
aaa             aaa
NULL            NULL
The following expression removes multiple numbers for each row in the INPUT port:
REPLACECHR ( 1, INPUT, '14', NULL )

INPUT    RETURN VALUE
12345    235
4141     NULL
111115   5
NULL     NULL
When you want to use a single quote (') in either OldCharSet or NewChar, you must use the CHR
function. The single quote is the only character that cannot be used inside a string literal.
The following expression removes multiple characters, including the single quote, for each row in the
INPUT port:
REPLACECHR ( 1, INPUT, CHR(39), NULL )

INPUT    RETURN VALUE
NULL     NULL
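As a sketch only (this is not Informatica's implementation), the REPLACECHR semantics described above can be mimicked in Python:

```python
def replacechr(case_flag, input_string, old_char_set, new_char):
    """Mimic Informatica's REPLACECHR: replace every occurrence of every
    character in old_char_set with new_char (or remove it when new_char is NULL).
    None stands in for Informatica's NULL."""
    if input_string is None:
        return None                      # NULL if InputString is NULL
    if not old_char_set:
        return input_string              # InputString if OldCharSet is NULL or empty
    if case_flag == 0:                   # 0 = case-insensitive match
        targets = set(old_char_set.lower()) | set(old_char_set.upper())
    else:                                # non-zero = case-sensitive match
        targets = set(old_char_set)
    repl = new_char if new_char is not None else ""
    return "".join(repl if ch in targets else ch for ch in input_string)
```

For example, replacechr(1, "ABA", "A", "M") gives "MBM" and replacechr(1, "12345", "14", None) gives "235", matching the tables above.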
Delta checks can be done in a number of ways; different logics can accomplish this. One
way is to check whether the record exists by doing a lookup on the keys. If the keys
don't exist, the record is inserted as new; if the record exists, compare the hash value
of the non-key attributes of the table which are candidates for change. If the hash
values are different, they are updated records. (For hash values you can use the MD5
function in Informatica.) If you are keeping full history for the table, it adds a
little more complexity, in the sense that you have to update the old record and insert a
new record for changed data. This can also be done with two separate tables, one holding
the current version and another the history version.
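A minimal sketch of this hash-based delta check; the column names are hypothetical, and Python's hashlib.md5 stands in for Informatica's MD5() function:

```python
import hashlib

def row_hash(row, attrs):
    """Concatenate the non-key attributes and hash them,
    mimicking MD5(col1 || col2 || ...) on the change-candidate columns."""
    concat = "|".join(str(row[a]) for a in attrs)
    return hashlib.md5(concat.encode()).hexdigest()

def delta(source_rows, target_index, key, attrs):
    """Classify each source row against the target lookup:
    key absent -> INSERT; hash differs -> UPDATE; else UNCHANGED."""
    actions = []
    for row in source_rows:
        existing = target_index.get(row[key])
        if existing is None:
            actions.append((row[key], "INSERT"))
        elif row_hash(row, attrs) != row_hash(existing, attrs):
            actions.append((row[key], "UPDATE"))
        else:
            actions.append((row[key], "UNCHANGED"))
    return actions
```

In the full-history variant, the "UPDATE" branch would instead close out the old row and insert a new version.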
Posted 25th March 2011 by Prafull Dangore
Definition:
Surrogate key is a substitution for the natural primary key in Data Warehousing.
It is just a unique identifier or number for each row that can be used for the primary key to the table.
The only requirement for a surrogate primary key is that it is unique for each row in the table.
It is useful because the natural primary key can change and this makes updates more difficult.
Surrogate keys are always integer or numeric.
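A common way to generate such keys in Oracle is a sequence; the table and column names below are illustrative:

```sql
CREATE SEQUENCE customer_sk_seq START WITH 1 INCREMENT BY 1;

-- The surrogate key (customer_sk) is independent of the natural key
-- (customer_natural_id), so the natural key can change without
-- touching any foreign keys that reference the dimension.
INSERT INTO dim_customer (customer_sk, customer_natural_id, customer_name)
VALUES (customer_sk_seq.NEXTVAL, 'CUST-1001', 'John Jones');
```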