
R/3 data extraction

The R/3 system contains many modules, such as:

SD
MM
FICO
HR
PP

We extract only the particular data we need from R/3, not everything.

In R/3 extraction, unlike flat file extraction, the datasource is created on
the R/3 side and then replicated to the BW side.

Datasource = (Extract structure + Transfer structure) of source system
             + Transfer structure of BI (PSA)

For a flat file, the datasource consists only of the transfer structure of
BI (PSA).

EXTRACT structure: a grouping of logically related fields defining the format
in which data has to be extracted from the source system.
TRANSFER structure of source system: a grouping of logically related fields
defining the format in which data has to be transferred to the BI system.
Extract structure - hidden fields = Transfer structure of source system.

Only the step of creating the datasource is done in the R/3 system; all the
remaining steps are done on the BI side.

Step 1: Log in to R/3.

Step 2: Declare the datasource (done in R/3). While declaring the new
datasource we give the table name; here assume the VBAK table.
The VBAK table has 100 fields.

Step 3: Once the datasource is created, an extract structure (100 fields) is
created.

Step 4: Hide 10 fields in the extract structure and replicate it (done from
the BI system).
This replication creates a transfer structure with 90 fields in BI.

Step 5: Activate the transfer structure. This activation leads to the
creation of the transfer structure at the source system level.

The flow: VBAK --> Extract structure --> Transfer structure (source system)
--> Transfer structure (BI side, i.e. PSA)

Step 6: The next step is to create the data target, or use BI content to
activate the required data target.

============================================
TYPES OF R/3 EXTRACTION
============================================

They are categorized based on how we define the datasource.

1) Generic extraction

A) Based on table
R/3 has around 450,000 tables.
This extraction is based on any one table; you cannot use more
than one table.
The table can be customer-defined or SAP-provided.
ex: VBAK etc.

B) Based on view

A view is a virtual (temporary) table defined over database tables;
it stores no data of its own.

Why do we need to extract from a view?
SAP R/3 is normalized, meaning data is split across multiple
tables, whereas BI has a denormalized structure.

If we want to extract sales header and item data, these data are
stored in different tables on the R/3 side:
VBAK and VBAP.

In that case, when we want to extract the data from more than one
table, we go for a view.
To join tables there must be a common field (primary key); if
there is none, the join is not possible.

All tables related to one event have a common key field; that is,
if we are going for sales, all sales tables have a
common field.

Production planning is another event, but sales and production
tables may not have common fields, which makes joining them
impossible.
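
To make this concrete, here is a minimal sketch of the join that a SE11
database view over VBAK and VBAP performs, written as the equivalent ABAP
SELECT. The target type and field list are illustrative assumptions, not
part of these notes.

* Hedged sketch: the join a database view over VBAK/VBAP would do.
TYPES: BEGIN OF ty_sales,
         vbeln TYPE vbak-vbeln,   " sales document number (common key)
         erdat TYPE vbak-erdat,   " creation date (header level)
         posnr TYPE vbap-posnr,   " item number
         matnr TYPE vbap-matnr,   " material (item level)
       END OF ty_sales.
DATA lt_sales TYPE STANDARD TABLE OF ty_sales.

SELECT k~vbeln k~erdat p~posnr p~matnr
  FROM vbak AS k
  INNER JOIN vbap AS p
    ON k~vbeln = p~vbeln           " without this common field, no join
  INTO TABLE lt_sales.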

C) Based on function module

Extraction based on a function module means extraction using an ABAP
program.

The program is written by an ABAPer, or we can write it ourselves if
we know ABAP.

Why is this needed? Assume we are extracting sales order related
data.
There are two tables for that, VBAK and VBAP, but the order status is
stored in VBUK.

My requirement is to extract from VBAK and VBAP only where the order
status is "partially delivered".
You could say "use a view", but if I combine the 3 tables in a view I
get all the records, whereas my requirement is
to extract the data from the 2 tables based on that condition.

This condition is implemented with ABAP code.

Def:
Whenever we want to extract the data by writing some ABAP code (by
implementing logic), we use a function module, as sketched below.
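
A hedged sketch of such an extractor, modeled on the SAP template
function module RSAX_BIW_GET_DATA_SIMPLE (copy it in SE37 and adapt).
The function name, the extract structure ZSOH_EXTRACT and the exact
status value are assumptions for illustration.

FUNCTION z_biw_get_soh_partial.
* Interface as generated from the RSAX_BIW_GET_DATA_SIMPLE template:
*   IMPORTING  i_requnr, i_dsource, i_maxsize, i_initflag ...
*   TABLES     i_t_select, i_t_fields,
*              e_t_data STRUCTURE zsoh_extract   " assumed structure
*   EXCEPTIONS no_more_data, error_passed_to_mess_handler

  STATICS: s_cursor TYPE cursor,
           s_opened TYPE c.

  IF i_initflag = 'X'.        " initialization call: nothing to fetch yet
    EXIT.
  ENDIF.

  IF s_opened IS INITIAL.     " first data call: open the cursor once
    s_opened = 'X'.
*   The condition a plain view cannot express: restrict the VBAK/VBAP
*   join via VBUK, delivery status 'B' = partially delivered.
    OPEN CURSOR WITH HOLD s_cursor FOR
      SELECT k~vbeln k~erdat p~posnr p~matnr
        FROM vbak AS k
        INNER JOIN vbap AS p ON k~vbeln = p~vbeln
        INNER JOIN vbuk AS u ON k~vbeln = u~vbeln
        WHERE u~lfstk = 'B'.
  ENDIF.

  FETCH NEXT CURSOR s_cursor  " hand BI one data package per call
    APPENDING CORRESPONDING FIELDS OF TABLE e_t_data
    PACKAGE SIZE i_maxsize.
  IF sy-subrc <> 0.
    CLOSE CURSOR s_cursor.
    RAISE no_more_data.       " signals that all packages are delivered
  ENDIF.
ENDFUNCTION.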

Generic is called a custom-defined extractor.

Generic is a pull type of extraction. When we trigger the
infopackage we directly access the live source system
tables, which degrades performance.

2) LIS extraction (outdated; even in BW 3.5 it is not used)

LIS stands for Logistics Information Structure.

Using LIS we can extract all logistics-related data.

Logistics means movement of material.
ex: sales, production, materials management, quality management etc.

HR and Finance do not come under logistics.

In the case of LIS, one more table is created: the information
structure table.
If I want to extract 10 fields from VBAK, VBAP, VBRK and VBRP, I take
all these fields into the information structure table.

We generate the datasource, i.e. an extract structure, based on the
information structure table. All of this happens on the R/3 side.

When we pull the data using the infopackage, the information structure
table has no data to give to BI.
For the information structure table to get data we need to run the
statistical setup; when we do that, all the order-related data
and delivery-related data comes and is stored in the information
structure table.

In LIS we can only extract entire event data (sales, orders), not a
particular level.

LIS is a custom generated extractor.

LIS is a push type of extraction.

3) LO extraction

Logistics extraction

Very very important.

In the case of LO extraction the datasource is given ready-made. You need
not create any datasource.

When we say it is given ready-made: as we know, a
datasource is a combination of extract structure + transfer
structure (source system) + transfer structure (BI).
So here the extract structure is given ready-made.

The link to the ready-made extract structure is provided through something
called setup tables.

The setup tables are also given ready-made.

LO is a business content extractor.

LO is also a push type of extraction.

4) CO-PA extraction

Controlling - Profitability Analysis.

Using this we can extract finance-related data.

Finance people use what is called an operating concern.

Unlike the other departments, the finance department has integrations with
all the departments.

Imagine the operating concern as a cube. All the sales-related finance
data, purchase-related finance data, everything comes and is stored in
what is called the operating concern.

It is the topmost legal entity.

The operating concern consists of 4 tables.

When the finance people create an operating concern, 4 tables are generated
in the database.

All the finance data comes and is stored in the operating concern.

We generate the datasource on the operating concern.

CO-PA is a custom generated extractor. It is a pull type of extraction.

==================================================================================
R/3

R/3 is a ready-made application.

R/3 has ready-made tables and fields. R/3 comes with almost 450,000
ready-made tables.
ECC 6 is the later version of R/3.

We BI consultants are given the document describing what data has to be
extracted.

There is something called the BI design team; we usually join as
consultants.
The BI design team members have 6-8 years of experience. They are design
consultants.
They are the ones who gather the reporting requirements from customers
and, based on that, create the documents with the help of the
functional team and their own experience.

Later this document is provided to consultants for development.

KNA1 : customer master data table : 0CUSTOMER (ready-made infoobject which
we can use on the BI side whenever we want to extract data from
the KNA1 table)

Here customer number, name, address etc. are the fields.

This table is updated by the end users, possibly by BPO
employees.

To know whether a table is a master data table or a transaction table,
check its technical settings.

MARA : material master table : 0MATERIAL

LFA1 : vendor master data (the one who supplies the raw material) : 0VENDOR

Transaction tables
------------------

VBAK : Sales order header data. Primary key is VBELN.
VBAP : Sales order item data.
VBEP : Sales order schedule line.
VBUK : Sales order header status.
VBUP : Sales order item status.
VBRK : Billing header data.
VBRP : Billing item data.
EKKO : Purchase order header.
EKPO : Purchase order item.

All sales tables start with VB.

==========================================================================

Generic extraction using table:

We can use any table; we will go with VBAK.

(This particular extraction could also be done using LO.)

Usually you use generic extraction when the ready-made extractors do not
satisfy the customer requirement.

For this we need to create the datasource in R/3.

Let us look at how to log on to the R/3 system remotely.

Go to RSA1 --> Modelling --> Source systems.

Select the interface between BI and R/3.
This connection is created by the basis consultant. The connection
between BI and R/3 can be found below the BI link.

Right-click on the R/3 connection and click on "Customizing
extractors".
It takes you to the tcode SBIW.

Step 1: Create a datasource (in the R/3 system)

Go to SBIW --> Generic datasources --> click on "Maintain generic
datasources"

or

The direct tcode is RSO2.

Select transaction data and give the datasource name. Give a name
which starts with Y or Z.
ex: YDAS_SOH (sales order header), and click on create.

Step 2: Give the application component.

Here it is sales, so select SD.

Step 3: Give the descriptions.

Short, medium and long are all mandatory.

Step 4: Select what you are extracting from:

Extraction from view
Extraction from query
Extraction from function module

When you click on function module the other options are
disabled; the same happens when you click on view.

Select the one you are extracting from.

Step 5: Select the option "extraction from view" and give the table name
VBAK.

Step 6: Now click on generic delta.

Whether we are extracting for the first time or a later time, the data
comes from the VBAK table, i.e. the source table.
The source table does not maintain record versions (modified, deleted
etc.), so we should always update our data
to a DSO first and from there to the cube.

Why do we need to set a pointer?

ex: at 9:00 we have 1000 records in the VBAK table.
When we do the extraction we get 1000 records on the BW side.

By 10:00 we might have new transaction entries in VBAK; say the
new entries total 20,
so the table now holds 1000 + 20 = 1020 records.
If we want to extract only those 20 records, i.e. the delta
records, we need a pointer to know which
records were already extracted and which are new.

There are 3 options for maintaining the pointer.

Generic delta: what kind of pointer has to be maintained.

a) Calendar day

If I select this option, a pointer is kept at table
level, e.g.
20131215.

The next extraction then picks up whatever data has been
added after this date.

The problem is that you can update only once a day, and
you will miss the transactions posted up to midnight, as
the day changes only at midnight.

Using this we can extract the delta once a day,
and only at the end of the day.

b) Time stamp

It adds hh:mm:ss.

This allows you to extract every half an hour or every
hour.

If I trigger at 9 o'clock, a reference like
20131215 09:00:00 is kept for the table VBAK.

If we extract the data at 10:00, it extracts the
new/modified records from 9 till 10
and changes the pointer
from 09:00:00 to 10:00:00 (no timestamp field is stored
in the table itself; the pointer is kept by the extractor).

c) Numeric pointer

When you use this, you need a counter.
ex: the order number (this itself is the
counter); it will extract orders 1 to 1000.

The problem with this is that we can only pick up newly
added records, not changed records.
If the data of order 289 changes, the order number
does not change, so it is not taken as a new record.

We can use any counter; the counter has to be a
number.
Safety interval upper limit

Assume that the last delta ran till 9 and the next will
run at 10.
While the extraction is taking place, assume that
some records are added to the system.
These records would be missed by the next
extraction.
If you give a safety interval of 30 minutes, then at
the time of the second extraction it extracts
as usual from 10 to 11 and also includes the 9
to 9:30 records, if any were added.

In this example the upper limit refers to the 9 to 9:30
window and the lower limit to the 9:30 to 10:00 window.

The chance of missing records is higher at
the upper limit than at the lower limit.

If it is a numeric pointer, the interval is given as a
number of records.
If it is a time stamp, it is given in seconds; the
standard value is 1800 seconds, i.e. 30 minutes.
If it is a calendar day, it is given as a number of
days.

Safety interval lower limit

Give the values and save.
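
A short worked sketch of the resulting selection window. SAP's
documentation describes the two limits as shifting the bounds of the
delta selection; the arithmetic below assumes a timestamp delta, both
intervals set to 1800 seconds, and a custom delta-relevant timestamp
field ZZTSTMP appended to VBAK (field name and literal format are
illustrative assumptions).

* Hedged sketch of the window a timestamp generic delta selects.
* last pointer : 2013-12-15 09:00:00   (previous extraction)
* current time : 2013-12-15 10:00:00
* lower limit 1800 s -> window start = 09:00:00 - 30 min = 08:30:00
* upper limit 1800 s -> window end   = 10:00:00 - 30 min = 09:30:00
* The 08:30-09:00 overlap re-reads late arrivals near the old pointer;
* the 09:30-10:00 remainder is deliberately left for the next run.
DATA lt_delta TYPE STANDARD TABLE OF vbak.

SELECT * FROM vbak INTO TABLE lt_delta
  WHERE zztstmp >= '20131215083000'    " pointer minus lower limit
    AND zztstmp <  '20131215093000'.   " current time minus upper limit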

Step 7:

Once we save, the extract structure is created.
A (development) package is asked for when we try to save, as this is
done in R/3; on the BI side we were never asked for a
package.

Extract structure = same fields as the table, i.e. VBAK.

We can now see the list of fields with the following options:

A) Selection
This selection is not for your transfer structure.
This selection is for selective loading.
In the infopackage there is a tab called data
selection; that is where this helps.
By default, if you don't select any field at the
datasource level, you will not have the option of giving
values at the infopackage level. If you mark a field for
selection, you get the option of giving values.

For a flat file you make that selection on the BI side
itself, where you create the datasource.
For R/3 you have to make the selection on the R/3 side, as
you create the datasource there.

B) Hide fields
Extract structure - hidden fields = transfer structure.

C) Inversion
Inversion is enabled only for key figures.
The value of the before image is multiplied by -1 when
you select this checkbox for the key figure.
The multiplication happens when you transfer the data
from R/3 to BI.

If you don't invert the before image, the value gets
added up on the BI side. For example, if an order value
changes from 100 to 120, the inverted before image (-100)
plus the after image (+120) correctly yields +20 in an
additive target.
You can notice that for generic extraction inversion is
not enabled; it is only enabled for image-based delta
methods (additive image etc.).

D) Field only
This is used by the coding person, e.g. for fields
populated in a customer exit.

Once the settings are done, save the datasource.

NOW COMES STEP 2: from step 2 onwards we work on the BW side.

Now replicate the datasource on the BI side.
RSA1 -> Modelling tab -> Datasources -> in the right frame
select the source system connection between BI and R/3.
It lists all the datasources. Expand your application
component; it is SD, as we created the datasource on the table VBAK.

You will not find your datasource here yet, as it is not
replicated.
So right-click on the application component and click on
"Replicate metadata".

Is replication a one-time task or a continuous task? It is a
one-time task, unless you change your datasource
on the R/3 side.

Now a pop-up appears saying:
"Datasource does not exist in BI system. How do you want to
replicate it?"

A) as datasource
This is for 7.0.

B) as 3.x datasource
If I choose this option, I will not be able
to use the datasource without migrating it to 7.0.
You can also see that this datasource looks
different from 7.0 datasources: it has a small
box in front of it. This datasource is
also called an emulated datasource,
meaning it can be used only with the older
versions.
If it is an emulated datasource you have to perform a
migration.
The steps to migrate are very simple: just
select the datasource, right-click and click on migrate.

It will offer two options:

With export
Use this option if you may want to migrate it
back to 3.5 later.
An entry is maintained in a
table called RSDSEXPORT with a timestamp.
If the same datasource has to be
re-migrated to 3.5, the tcode used is RSDS.
In RSDS give the datasource name
and the source system name, and in the menu
go to Datasource --> click on "Recover 3.x DS"; if
you say yes, it is
recovered, meaning it goes back
to its initial 3.5x state.

W/O export
Use this if you don't want the option of
migrating it back to 3.5.

C) this and following 1 as datasource

D) this and following 1 as 3.x datasource

ONE IMPORTANT POINT: WHETHER YOU ARE EXTRACTING TO A 3.5 OR A 7.0
SYSTEM, WHATEVER YOU DO ON THE R/3 SIDE IS THE SAME.

Step 3: Activate the datasource.

Once we replicate, the transfer structure of BI is formed; once we
activate, the transfer structure of the source system is formed.

Step 4: Install the BI content (how to do this is covered later).

Go to the BI content tab.
First select the grouping option,
then the collection mode.

Step 5: Create a transformation between the datasource and the DSO 0ORDERS.

Drag and drop to match the fields. You might face the issue of not
knowing which field to map to which.
Look at the descriptions of the fields; it makes this easier.

Instead of drag and drop, you can also click on the infoobject on the
right side of the transformation, then
click on add fields; you get a window with the list of all
fields. Select the required fields and click on transfer values.
A line is drawn automatically from the infoobject to the field.

In real time, map all the objects.

Step 6:

Create the infopackage.

In the infopackage we have different options for the update mode:

Full update
Initialize delta process:
    Initialization with data transfer
        We get the data, and the time stamp is also
        maintained at the table level.
    Initialization without data transfer
        No data is transferred; only the time stamp is
        maintained.
    Early delta initialization

In real time we use "initialization with data transfer" for the first
run. The first time we run, we do not see the option of
delta update. Only after we run the infopackage with the above option
do we get the delta update option for the second run.

If you go with "initialization with data transfer" again, all the data
will be loaded again.

The tcode to check the time stamp is RSA7 -- in the source system R/3,
of course.
Find your pointer and click on the status symbol; it shows the current
status, i.e. day.month.year hh.mm.ss.
That is the last time data was extracted from the source system.

========================== LO EXTRACTION =========================================

LO stands for logistics.

Using LO we can extract all logistics-related data, i.e. sales, production,
inventory -- wherever material moves.

Finance and HR do not come under logistics.

IN THE CASE OF LO EXTRACTION THE DATASOURCE IS GIVEN READY-MADE,
so you need not create or generate a datasource.
When we say the datasource is given ready-made, it means the extract
structure is given ready-made.
As we know, datasource = extract structure + transfer structure of the
source system + transfer structure of the BI system.

The link to the ready-made extract structure is provided through setup
tables.

Setup tables are also given ready-made. They are application-specific
tables: for sales there is one setup table, and for
production there is another setup table.

Setup tables have 3 partitions -- header, item and schedule
line; based on what data is required, choose the datasource.

The link is like this:

VBAK -->
         Setup tables --> Extract structure
VBAP -->

When we trigger the infopackage, the data has to be extracted from the
setup tables. But will there be data in the setup tables? No.
Data has to be pushed from the base tables (VBAK, VBAP etc.) into the
setup tables; this is called the statistical setup.

Naming conventions of the ready-made objects

Datasource:
Any LO datasource starts with
2LIS_<application component number>_<event>HDR (each
event has levels like HDR, ITM, SCL)
2LIS_<application component number>_<event>ITM
2LIS_<application component number>_<event>SCL

*Application component number --> for each application,
like sales, production, orders, a particular number is
given.
*Each application has different events, e.g. under sales:
orders, deliveries etc.

Ex: 2LIS_11_VAHDR --> MC11VA0HDR
    2LIS_11_VAITM --> MC11VA0ITM

Extract structure:
Any extract structure starts with MC<application component
number><event>0<HDR or ITM or SCL>
Ex: MC11VA0HDR

Setup tables:
<Name of extract structure>SETUP
Ex: MC11VA0HDRSETUP

STEPS TO CONFIGURE FOR YOUR LO EXTRACTION

STEP 1: To be done in R/3

Install the business content datasources.
This creates a copy from the delivered version into the active version.
The tcode to use is RSA5.

STEP 2: To be done in the R/3 system - LBWE

Log on to the application called LO Cockpit. LO Cockpit is a
centralized application (like RSA1)
where you work with your LO extraction.
The tcode is LBWE.
Why do we have to log in to LO Cockpit?
Ans:

SITUATION 1:
Assume the standard table has 100 fields and the extract
structure given has 10 fields.
I am not satisfied with the number of fields in the extract
structure and want to add a few more fields.
To do that I have to use the option called "maintain
extract structure", and that is done in LO COCKPIT.

SITUATION 2:
As we know, the standard table fields are given ready-made
by SAP. If we feel they are not enough,
we may want to add a few more fields TO THE SAP-GIVEN TABLE
ITSELF:

Assume I am adding two more user-defined fields (not given
by SAP) to the table.

If you use existing fields of the SAP-provided table in
the extract structure, you need not worry about data:
the data comes automatically to the extract structure
even though you added a few fields.

If you add user-defined fields to the table and
use those fields in the extract structure, the extract
structure will not get their data, so what you have to do is
enhance the datasource (the extract structure is part of the
datasource, as we know) using ABAP code. This is what is
called datasource enhancement.

A) First make the datasource inactive so that the enhancement can be
done.
B) Maintain the extract structure.
Add the fields to your ready-made extract structure.

C) Specify the update mode.

The update mode defines where the LUWs have to be updated. This means
where the delta has to come from.
The first time, we do an initial update, i.e. from the setup tables.

The complete flow:

We have 3 tables, VBAK, VBAP and VBEP; these tables get
data when a sales order is created or modified.
We have setup tables; these setup tables have partitions
for header, item and schedule line.
We have an extract structure (ready-made) linked to the
setup tables.
When we trigger our infopackage (initial update), data
comes from the setup tables.
We have something called the communication structure, which
is given by SAP and sits between the base tables and the
setup tables.
When we run the statistical setup, data goes from the base
tables to the setup tables.

Suppose that after the initial update a sales order is
added; it gets added to the live table.
NOW THE QUESTION IS where it has to be pushed from,
i.e. what the update mode is.

In generic extraction, both the initial and the delta data
came from the base tables; here that is not the case, the
initial load comes from the setup tables.

Update mode -- there are 3 different options:

A) Direct delta [DELTA QUEUE]
When the first successful init load of the infopackage
happens, a delta queue is created on the R/3 side.
Whatever records get created are uploaded
from the base tables to the delta queue.

Now when you execute the infopackage, the records are
moved (moved as in cut and paste) from the delta queue to the BI
side.
The delta queue is then empty, ready to receive the next
set of delta records.

Tcode is RSA7.

The flow is from base table to delta queue to BI
side.
Until the delta queue is updated, the BI side has to
wait, which degrades performance.
So we go with this update method only when we
have small loads.

B) Queued delta [EXTRACTOR QUEUE] *****

This is the one SAP recommends.
With this, the LUWs are updated to the extractor queue
in the form of tokens.
Here we also run the V3 job, but it is called a
collective run.
It pushes all the records from the extractor queue to the
delta queue in one run.
Because the push is collective, there is no problem of sort
sequence or missing records:
either it gets everything or nothing.
Whereas in unserialized V3 we were updating LUW1
and after that LUW2.

C) Un-serialized V3 [UPDATE QUEUE]

With this, the LUWs are updated into the update queue in
the form of tokens.
The entire record is not put into the update queue;
only a pointer gets stored.
The LUWs get updated into something called the update
queue, and from there to the delta queue.
The update queue behaves like V2.
From the update queue we run a background job called the
V3 job; according to the pointers that have
been recorded, the data is pushed into the delta
queue in the background.

If we are loading into a DSO we cannot use unserialized
V3.
Ex: assume an order value of 4000 is entered; that
order value is pushed to the setup table initially and
loaded to the DSO. Next the order value is changed to
5000 and then 6000. Now 2 tokens refer to these values:
token1 refers to 5000 and token2 to 6000. If
they are processed in the order t2 then t1, first
4000 is replaced by 6000 (as DSO
functionality is overwriting), and then 6000 is
overwritten with 5000 -- but the actual data at the source
system level is 6000, not 5000.

D) Serialized V3 update (obsolete from ECC 5)

Data is updated to the database in 3 ways:

1) V1 update (synchronous)
Data is updated and you get a status saying the data
has been saved.
Any data that gets updated to the live table is a V1
type of update.
ex: like sending an SMS and getting a delivery report.

If you use V1, the problem is that the record has to be
updated and then you need to get the feedback, which
degrades processing performance.

2) V2 (asynchronous update)
Data gets updated and you do not get any feedback.
Here the update happens in the foreground (not in the
background), so there is a performance issue.

Base tables to delta queue is a V2 update.

3) V3 (asynchronous with background update)

This is the most preferred way of updating.

D) Generate the datasource.

E) Make the datasource active.

STEP 3: Data migration steps - Done in R/3 system

This step is used to bring the data from the database tables to the setup
tables.

A) As a safety measure, delete the contents of the setup tables -
tcode LBWG.
B) Lock the related tcodes (VA01, VA02) - done by the basis team.
C) Run the statistical setup - tcode SBIW (this brings the data
from the database to the setup tables).

STEP 4: Done in BI system

Replicate the datasource in BI

STEP 5: Done in BI system

Install BI content target

STEP 6: Done in BI system

Create transformations

STEP 7: Done in BI system

Create infopackage

STEP 8: Done in BI system

Create DTP

===================================================================================

LO EXTRACTION PRACTICALS

The scenario here is to load the cube that gives a sales overview,
which includes sales header and item,
billing header and item,
delivery header and item.

STEP 1: Go to RSA5, i.e. installation of the business content datasources.

Select all the 6 datasources by clicking on each datasource and then
clicking on the "select subtree" button.
For each application component we have a separate setup table,
meaning:
for sales -- a sales setup table [one setup table for all of 11]
for billing -- a billing setup table etc. [one setup table for
all of 12]

Click on "activate the datasources".

STEP 2 : LBWE

Make the datasource inactive - just click on the active button and it
becomes inactive.

Click on maintenance and select the fields for your extract structure,
if needed.
After this you can see that the extract structure status is red, because
it has been modified and the datasource has not been informed.

The next step is the update mode. Click on the update mode options and
change it to queued delta.

The next step is generating the datasource. Click on the datasource
name; it takes you to a new window with the list of
fields, and you can see that the extract structure is there.
THE MAIN THING TO SEE HERE IS THAT WHATEVER NEW FIELDS I ADDED TO THE
EXTRACT STRUCTURE ARE NOW
hidden (checked as hidden in the hide-fields column).

Also, by default the "field only" column is ticked for those
enhanced fields; SAP assumes you will be adding some code
for those fields. You can uncheck it.

Next comes the main thing, inversion. By default inversion is enabled
for the key figures; if you have added a new
key figure and inversion is not enabled, click on unhide first and
then scroll up and scroll down; then you see the checkbox
allowing you to check it.

This inversion is mainly used to multiply the value of the before image
by -1.

FINAL STEP: SAVE your datasource. Now you can see it has turned from RED
to yellow.

A question might arise of how to add fields to the already
existing database table. That is done using an append structure,
and when you click on the extract structure enhancement, what you find
is not the field names but the append structure name.
Then we write a customer exit to populate the values of those
fields.

STEP 3 : Data migration steps

Check whether the setup tables have values:
SE11 --> setup table name, display, and click on "number of entries".
As a safety measure, empty the setup tables anyway: go to LBWG and give
the application component numbers, which are 11, 12 and 13
here.

Lock VA01 and VA02. Done by the basis team.
The tcode is SM01. Select the tcodes that have to be locked and click
on the lock button.

Run the statistical setup -- SBIW

Settings for application-specific datasources
Logistics
Managing extract structures
Initialization
Filling in the setup tables
Application-specific setup of statistical data
SD-Sales-Orders - Perform setup -- click on execute
(this is for sales; do the same for deliveries or billing)

OR

The direct tcodes are OLI7BW (orders) and OLI8BW (deliveries).


Now, you do have an option of selection here, i.e. based
on the sales document number and other criteria, but what
I want is to get the entire data, so I will not go with the
selection option.

Give a name for the run; you can give any name you want. Give some
date and time in the future.
Don't click on the execute button; it would run in the foreground if
you did so. What you have to remember is that foreground
execution is a V2 update, i.e. it makes the application idle, so go to
the Program menu and click on "Execute in background".
That is what is done all the time.
You have to give the output device name. What this window means is
that if you wanted to print your sales orders you could, but
we are not printing hard copies of the sales orders; we are instead
transferring them into the setup tables, so what has to be done
is to ask the basis consultant for the output device name and
enter it.
In demo systems it is LP01, but not in real time.
The next pop-up window asks when you want to run it -- the normal
scheduler window, i.e. at some point in time
or immediately.

Now when you choose "immediately" and save, you can see the background
job name specified at the bottom. Double-click on it
and copy the job name.
Go to tcode SM37 (background job log), give the job name and check the
status.

Once this job is done, the process on the R/3 side is over.

STEP 4 : Will be done in BI

Replication of the datasources.

As we did for generic extraction, replicate by clicking on the
application component, i.e. SD. You can go inside,
search for the datasource and then click on "replicate metadata".

Activate the transfer structure.

Just click on change on the datasource and click on activate;
doing this generates the transfer structure on the
source system side.

STEP 5: Create the target

The standard procedure is to copy the BI content cube to a new one.

STEP 6: Create Info package

Right-click and create the infopackage.

In real time we create 2 infopackages: one for delta and one for the
initial update.

First we use the delta update with the selection "initialization
without data transfer".
Why we do this: the process is that data is first posted into the
base tables; from there the statistical setup brings
the data to the setup tables; then the infopackage gets the data
from the setup tables to the PSA, and after its successful
completion a delta queue is generated on the BW side.

If we go with initialization WITH data transfer, transferring the
data from the setup tables to the PSA takes time and the creation
of the delta queue is delayed. Until then the transactions
are locked, affecting the business.

So what is followed is: at the beginning we go with
initialization without data transfer. This brings no
records; it just sets the pointer. It is pretty quick, and after
successful execution we have the delta queue
generated and the locks can be released.

After this we run a full update.

Next, we can also see the delta update option in the
infopackage, so from the next load on we go with the delta
update.

DELTA QUEUE
Before executing the infopackage, if you check RSA7 on the R/3 side,
there is no delta queue yet for the datasource
you are running.

After running, the delta queue is generated.

The delta queue has 2 partitions: one is the delta part and the other
is the delta repetition part.
When delta records are updated to the delta queue, they are kept
in both the delta and the delta repetition partitions.

When we run the infopackage, the data records are moved from the
delta partition into the PSA (they are emptied from
there), while the delta repetition partition still holds the delta
records.

a) delta
b) delta repetition

Assume the delta update failed: now you have the records neither in
the delta partition at the source system nor in BI.
To overcome this issue we have the delta repetition partition,
which keeps the delta records irrespective of the success or failure
of the delta load.

If the first run is successful, then in the second run the new delta
records are added to both the delta partition and the
delta repetition partition.
Ex:
                        Delta repetition     Delta

1st delta update        5 records            5 records
After infopackage run   5 records            0 records
2nd delta update        5 + 10 records       10 records
After infopackage run   10 records           0 records

Above we can see how the delta records are stored.
In the first delta update, both partitions have 5 records; after the
infopackage run, the delta partition has
0 records but the delta repetition partition still has 5 records.

On the second delta update, the delta partition gets the new delta
records, i.e. 10 records, and the delta repetition
partition also gets those 10 records while retaining
the old 5 records of the last delta update.

After the successful loading of the second set of delta records (the
10 records) to the BI side, the 1st delta request/records
in the delta repetition partition get deleted.

This means the delta repetition partition always keeps the last
delta records.

TO SEE HOW THE EXTRACTOR QUEUE GETS THE DATA

1) First check whether there is any pointer or token in the
extractor queue - LBWQ.
Initially there is none; once we modify records at the
base table level, we can see the pointer getting updated
to the extractor queue, i.e. LBWQ. Here you will not find the record;
you just have the pointer with a date and time stamp.

Now the job is to update this to the delta queue so that the delta
queue has the record.
That is done through the V3 job.

Tcode is LBWE --> go to the application, in our case 11 -->
click on job control.
Here you get a pop-up window with 4 things:

Start date
Give the parameters for when you want it: immediate,
hourly, daily etc.
Print parameters
Same as before; LP01 is for the dummy system. You get this
from the basis team.
Schedule job
Click on it to schedule the job.
Job overview
Click on it to see the overview of the job. Here you will
see the job has finished.

That means the delta queue, which was empty earlier, now has 1 entry.
If you check the extractor queue now, it has
no pointers left.
You can look at the data by clicking on the queue name in RSA7 and
clicking on the delta part or the delta repetition part.

Now create a delta infopackage, select the option delta update, and
see the records in the BI system.

For our example: 6 datasources, 6 infopackages, 6 PSAs, 6 DTPs and 1
target.

===========================================================================

DATA SOURCE ENHANCEMENT

Datasource enhancement comes into the picture when we want to add fields
to a ready-made SAP table.

STEP 1:
When we want to add a field to a table, we should first go to the
communication structure.

For the VBAP table the communication structure name is MCVBAP.

Go to SE11 --> give MCVBAP --> and open it.

To add particular fields we append a structure to this
communication structure.

Click on append structure. Now click on create ("append new") in the
window which has opened.

We always start the name with ZA<somename> and give a description.

The fields which you add always start with ZZ. Here we are adding
profit centre, ZZPCTR.

Give the fields, data types and descriptions. Then click on save and
activate.
Once the object has been activated, the append structure is
visible at the bottom of the communication structure which we
used to create the append structure.

The data for the field we added is available in some other
table; we should go and write the code to look it up and
populate it from that table.

STEP 2:
Create a project - the tcode is CMOD.

Give a project name, select the checkbox "enhancement assignment"
and click create.

In the next window give the project description and click on
enhancement assignment.

In the next window enter the standard enhancement RSAP0001 (an
enhancement can be assigned to only one project).
After that, in the next window, you can see the exits.

There is one each for transaction data, master data attributes,
master data texts and master data hierarchies.
The names are like EXIT_SAPLRSAP_001 [transaction data],
EXIT_SAPLRSAP_002, etc.

Inside you will find one include; inside that include you have to
write your source code (ABAP code) to populate the data, for
example as sketched below.
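
A hedged sketch of what that include (ZXRSAU01, the one inside
EXIT_SAPLRSAP_001 for transaction data) could look like for the ZZPCTR
example above. Reading the profit centre from VBAP-PRCTR is an
illustrative assumption; look up the real source table for your field.

* Hedged sketch of include ZXRSAU01 (transaction data exit).
DATA: l_s_itm LIKE mc11va0itm,   " extract structure of 2LIS_11_VAITM
      l_tabix TYPE sy-tabix.

CASE i_datasource.
  WHEN '2LIS_11_VAITM'.
    LOOP AT c_t_data INTO l_s_itm.
      l_tabix = sy-tabix.
      " illustrative lookup: fetch the profit centre for this item
      SELECT SINGLE prctr FROM vbap
        INTO l_s_itm-zzpctr
        WHERE vbeln = l_s_itm-vbeln
          AND posnr = l_s_itm-posnr.
      MODIFY c_t_data FROM l_s_itm INDEX l_tabix.
    ENDLOOP.
ENDCASE.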

===================================================================================

CO-PA EXTRACTION

Unlike the other departments, the finance department has interaction with
all the departments.

Operating concern:
Finance people use this. It is the topmost legal entity where
all the finance data is updated.

Visualize the operating concern as a cube; all the normalized data
comes and sits here.

The operating concern is of 2 types:

A) Cost-based
Product-based industries use cost-based.

B) Accounting-based
Service-based industries use accounting-based.

The operating concern consists of 4 tables. When the finance people
create an operating concern, 4 tables are generated at the database
level:

1) CE1 followed by the operating concern name
2) CE2 followed by the operating concern name
3) CE3 followed by the operating concern name
4) CE4 followed by the operating concern name

The operating concern name is given to us by the finance people.
Generally they take a 4-character operating concern name, usually
based on the client name for which they are implementing.

For dummy systems the operating concern name is IDEA, so CE1IDEA
is the first table name.

We generate the datasource based on our operating concern.

STEP 1: Go to tcode KEB0

A CO-PA datasource name usually follows the pattern
1_CO_PA_<client number>_<operating concern name>,
e.g. 1_CO_PA_800_IDEA.

Then select the radio button "create", give the operating concern,
and also select the radio button for either cost-based or
accounting-based.
Finally click on execute.

Next you get a window asking for the descriptions; if you
know them, give them, or at the tcode entry screen type
=INIT and press enter, and the system automatically takes the
descriptions.

Below, you also find all the fields. Select the fields you want
for the extract structure and then click on the InfoCatalog button.

What happens is that whatever fields are selected there come into
the extract structure.

It is like a normal extract structure: once you click on save, the
datasource is ready and you can start replicating it on the
BW side.

The delta used over here is the generic delta.

Here also, as it is a generic delta and it uses a time stamp, the
data has to be updated to a DSO first, as we will not have images.

If you want to see what time stamp it is maintaining, go to KEB2
and give the datasource name.

===================================================================================

HOW TO CHECK WHETHER A DATASOURCE MAINTAINS IMAGES OR NOT (AFTER, BEFORE
OR OTHER IMAGES)

Use ROOSOURCE.

ROOSOURCE is a table on the R/3 side which holds metadata about your
datasources.

Go to SE11 --> ROOSOURCE, give the datasource name, and check your
delta process.

Now go to a table called RODELTAM. This is the table which holds the
information about delta processes.
Give the delta process here, i.e. AIE according to the example.
Now if you check the entries you will not have a before image or an
after image, which means it is not possible to update the data into a
cube first. It has to be updated into a DSO and then into the cube.
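
The same check can be scripted. A minimal sketch, assuming the commonly
documented columns of ROOSOURCE (OLTPSOURCE, OBJVERS, DELTA); verify the
exact column names in SE11:

* Hedged sketch: read the delta process of a datasource from ROOSOURCE.
DATA l_delta TYPE roosource-delta.

SELECT SINGLE delta FROM roosource
  INTO l_delta
  WHERE oltpsource = 'YDAS_SOH'   " datasource name (example from above)
    AND objvers    = 'A'.         " active version

* Then look up l_delta (e.g. 'AIE') in RODELTAM to see which images the
* delta process delivers; no before/after image -> load to a DSO first.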

===================================================================================

BI CONTENT
