
Your source for the most trusted SAP tutorials, tips, and training content

Welcome to SAP Experts!


When you become a subscriber to SAP Experts' premier online portal, you get first-hand
guidance, step-by-step instruction, best practices, time-saving tips, and case studies
from experienced SAP customers, the world's top consultants, and experts at SAP.
We strive to provide you with the tools and information you need to excel at your
job, and to help your company get the most from its SAP software systems. Our content
developers work in conjunction with leading industry experts and our own in-house
advisors to produce everything you see in SAP Experts.
After you've read through this eBook, you'll have a first-hand perspective on the types of
content, topics, and advisors that are available to you, without limit, as an SAP Experts
yearly subscriber.

Like what you've read?


Join over 125,000 of your peers and competitors who have access to the SAP Experts
library and resources, as well as the newest content uploaded throughout each month.
So, what are you waiting for?
Get started by contacting Mike O'Brien today.
Happy Learning, Fellow Experts!
The SAP Experts Team

Mike O'Brien
Mike.obrien@wispubs.com
+1-781-751-8799

Contents

How to Change Infotype Data Using Standard Classes and Methods
by Rehan Zaidi, Senior SAP Technical Consultant
Learning Objectives:
• The main classes and methods needed for programming infotype updates
• The detailed steps for inserting new records in standard and custom infotypes
• How to search for and delete employee records in an infotype
• How to change certain fields of an existing infotype record for employees

Real-Time Operational Reporting with SAP HANA Live
by Dr. Bjarne Berg, Principal and Tax Data Analytics and Business Intelligence Leader in Tax
Technology Compliance, PricewaterhouseCoopers, LLP
and Brandon Harwood, BI Consultant, ComeritLabs
Learning Objectives:
• Explore SAP HANA Live functionality and how it aids in real-time reporting
• Learn how to build custom SAP HANA views by extending pre-delivered SAP HANA Live views
• Become familiar with the SAP HANA views and their role in reporting

Information Management Options in SAP HANA: Smart Data Quality
by Don Loden, Director - Data & Analytics, Protiviti
Learning Objectives:
• How to construct a real-time enabled data enrichment process in SAP HANA to cleanse and enhance data quality
• Time-saving development best practices to accelerate your SAP HANA flowgraph development

Enhance Your Visualization and Analytical Capabilities for Accounts Receivable Aging Using SAP Lumira
by Anurag Barua, Independent SAP Advisor
Learning Objectives:
• Prepare your source data prior to extracting it to SAP Lumira
• Extract the data
• Do basic and advanced navigation for analysis
• Do advanced visualization and analysis using value-added features such as calculated or measured dimensions

How to Use SAP BPC with SAP Cloud for Analytics
by Paul Davis, Vice President, VantagePoint Business Solutions
and Graylin Johnson, Director of Enterprise Financial Analytics and Enterprise Performance
Management (EPM), Tory Burch
Learning Objectives:
• An overview of SAP's new Cloud for Analytics tool
• Use cases from both technical and functional perspectives on when to implement SAP Cloud for Analytics
• How to set up a connection between SAP Cloud for Analytics and SAP Business Planning and Consolidation (BPC)
• How to set up integration routines, both for ad hoc and recurring extracts and retractions

How to Change Infotype Data Using Standard Classes and Methods
by Rehan Zaidi, Senior SAP Technical Consultant
In addition to reading data from employee infotypes, developers often need to offer the
option of updating data in standard and custom infotypes. A number of techniques exist
for doing so. These include obsolete techniques such as using Batch Data Communication
(BDC) sessions as well as function modules, such as HR_INFOTYPE_OPERATION, that are
not released for customer use. A more elegant and recommended approach is to use the
standard classes provided by SAP to meet such requirements. These classes can easily be
used by SAP ERP HCM developers, so knowledge of them is essential.
I start with a brief introduction of the standard classes and methods that SAP provides for
changing infotype data in custom programs, and then discuss the detailed steps required
to insert records in an infotype. Next, I explain how you can use the standard methods
to delete a single infotype record. Finally, I show how to modify an existing record in an
infotype. Throughout the article, I use infotype 0006 (Addresses) as my example.
In my previous HR Expert article about infotypes, "How to Read Infotype Data Using
Standard Classes and Methods," I covered how to read data and infotype texts for an
employee infotype. In this second installment, I show you how to update data in infotypes.
This includes creating, deleting, and modifying an existing record in an infotype.

An Overview of the Classes and Interfaces for Updating Infotype Data
Before diving into the details of the programming steps related to infotype data access,
let's take a look at the classes and methods used for this purpose. In this section, I discuss
the main classes and methods needed to change data residing in infotypes. These classes
let you read the data from both standard and custom-developed infotypes. Later in this
article, I discuss the coding required to use these classes to update infotype data.

Note!
The primary audience for this article is SAP HR developers and users. I provide coding
examples and the necessary screenprints to illustrate my points. Readers can easily adapt
the coding examples used in this article to suit their requirements.

One of the most important classes used in the writing process is class
CL_HRPA_MASTERDATA_BL. This class provides an important factory method,
GET_INSTANCE, which returns a reference to the class CL_HRPA_MASTERDATA_BL. The
class is based on the IF_HRPA_MASTERDATA_BL interface, which is one of the most
important interfaces used. The necessary methods of this interface and their purposes are
shown in Table 1.

Method name   Purpose

READ          Reads a set of records from an infotype table. For the DELETE and
              MODIFY methods, you first need to fetch the corresponding record.
INSERT        Inserts a record into an infotype.
DELETE        Deletes an infotype record.
MODIFY        Modifies an infotype record.

Table 1: The methods of the IF_HRPA_MASTERDATA_BL interface


In addition, SAP provides two important methods, ENQUEUE_BY_PERNR and
DEQUEUE_BY_PERNR, residing in class CL_HRPA_MASTERDATA_ENQ_DEQ, used for
locking and unlocking employees, respectively. For each operation (e.g., delete, insert,
and modify), it is necessary to enclose the code between the ENQUEUE and DEQUEUE
method calls, as the sketch below illustrates.
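Schematically, every update therefore follows the same bracket. The following minimal
sketch shows the pattern, assuming the message handler object from Figure 3 has already
been created; the hard-coded employee number is purely illustrative, and the individual
operations are covered in the sections that follow:

data: is_ok type xfeld.

cl_hrpa_masterdata_enq_deq=>enqueue_by_pernr(
  exporting
    tclas           = 'A'
    pernr           = '00001006'          "employee to lock (illustrative)
    message_handler = message_handler_obj
  importing
    is_ok           = is_ok ).

"... call INSERT, DELETE, or MODIFY via IF_HRPA_MASTERDATA_BL, then FLUSH ...

cl_hrpa_masterdata_enq_deq=>dequeue_by_pernr(
  exporting
    tclas = 'A'
    pernr = '00001006' ).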

How to Insert New Records in Infotypes

In this section, I show how to use the standard classes and methods to create new records
for an infotype. As an example, I show how to create a record in infotype 0006 (Addresses)
for employee number 1006 for the Permanent Address (subtype 1).

Note!
Any dynamic actions associated with the infotype that is being updated are not executed
automatically by programs that use the steps mentioned in this article. You must program
the dynamic actions to be executed manually. In addition, any customer-specific validation
checks written in a function exit for enhancement PBAS0001 are not called when the
methods mentioned are called.

Step 1. Define Infotype Structures and Assign Data Values


The first step is to define a structure (in this case, WA_P0006) based on the Dictionary
structure P0006. Figure 1 shows the definition of structure WA_P0006.

data wa_p0006 type p0006.

Figure 1: Define the infotype structure


Next, assign appropriate values to the fields of structure WA_P0006 (Figure 2). You must
ensure that all the required fields for the infotype are specified in this block of code. (Make
sure the wa_p0006-infty field is assigned the number of the infotype you are dealing with,
in this case, 0006.)

wa_p0006-pernr = '1006'.
wa_p0006-infty = '0006'.      "important part - do not omit
wa_p0006-subty = '1'.
wa_p0006-endda = '99991231'.
wa_p0006-begda = '20160101'.
wa_p0006-anssa = '1'.
wa_p0006-stras = 'Burj Al Khalifa'.
wa_p0006-ort01 = 'Doha'.
wa_p0006-pstlz = '48'.
wa_p0006-land1 = 'QA'.

Figure 2: Assign values to structure WA_P0006

Step 2. Lock the Employee's Record

The next step is to call the ENQUEUE_BY_PERNR method to lock the record of the
employee whose data is to be created. Before that can occur, you need to define a
message handler object to collect the messages that result from the lock method call.
Define a reference (MESSAGE_HANDLER_OBJ) for the class CL_HRPA_MESSAGE_LIST
and create its object using the CREATE OBJECT statement (Figure 3).
and create its object using the CREATE OBJECT method (Figure 3).

data message_handler_obj type ref to cl_hrpa_message_list.
create object message_handler_obj.

Figure 3: Define and create a message handler


Now you are ready to lock the given employee's record. Call the static method
ENQUEUE_BY_PERNR of the class CL_HRPA_MASTERDATA_ENQ_DEQ. You must specify
the number of the employee to be locked via the PERNR parameter, and also pass the
MESSAGE_HANDLER_OBJ object as a parameter (Figure 4).


data: is_ok type xfeld.

cl_hrpa_masterdata_enq_deq=>enqueue_by_pernr(
  exporting
    tclas           = 'A'
    pernr           = wa_p0006-pernr
    message_handler = message_handler_obj
  importing
    is_ok           = is_ok ).

Figure 4: Lock the employee's record


If the employee's record is successfully locked, the variable IS_OK contains the value X
after the method call; otherwise (for example, if the personnel number is already locked),
it contains a blank space. One way to react to a failed lock is shown in the sketch below.
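A minimal sketch of one possible reaction to a failed lock, reusing the GET_MESSAGE_LIST
method and the HRPAD_MESSAGE_TAB type that appear later in this article:

if is_ok is initial.
  data lt_lock_messages type hrpad_message_tab.
  message_handler_obj->get_message_list(
    importing messages = lt_lock_messages ).
  "evaluate lt_lock_messages, then stop processing or retry later
  return.
endif.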

Step 3. Call the Business Logic

Once the employee's record is successfully locked, you can call the GET_INSTANCE static
method of the CL_HRPA_MASTERDATA_BL class (Figure 5).

data lr_masterdata_bl type ref to if_hrpa_masterdata_bl.

cl_hrpa_masterdata_bl=>get_instance(
  importing masterdata_bl = lr_masterdata_bl ).

Figure 5: Call the business logic method


This method call returns a reference to the class CL_HRPA_MASTERDATA_BL.

Step 4. Call the GET_INFTY_CONTAINER Method

Next, you call the GET_INFTY_CONTAINER method of the IF_HRPA_MASTERDATA_BL
interface using the reference to the given interface, LR_MASTERDATA_BL. You pass the
TCLAS parameter (in this case, 'A' for employee). Most importantly, you need to pass the
parameter PSKEY (which corresponds to the key of the record to be inserted).
The same MESSAGE_HANDLER_OBJ object that was used earlier is now used for any
message generated as a result of the execution of the method. If there are any problems
with the method, the IS_OK parameter contains the value space; otherwise, it contains X.
The code for this is shown in Figure 6.


data lr_container type ref to if_hrpa_infty_container.

lr_masterdata_bl->get_infty_container(
  exporting
    tclas           = 'A'
    pskey           = wa_p0006-pskey
    no_auth_check   = 'X'
    message_handler = message_handler_obj
  importing
    container       = lr_container
    is_ok           = is_ok ).

Figure 6: Call the GET_INFTY_CONTAINER method


LR_CONTAINER is an internal table that acts as
temporary storage. This is where you first store the
data that is later inserted into the infotype table.
Once the data is stored in LR_CONTAINER, the insert
is performed via the INSERT method (details about
this are covered in the next step).

Step 5. Assign Data Values to a Container

Note!
You must ensure that the
provided key is the same one
that was used for retrieving the
infotype container in step 3. In
this case, I filled the fields of the
key as shown in Figure 2. If this is
not done, a short dump occurs.

In this step you specify the complete set of values


for each field of the infotype record that is being
created. The container does not have any method to perform such an operation. Rather,
the required MODIFY_PRIMARY_RECORD method belongs to the interface IF_HRPA_
INFTY_CONTAINER_DATA. Therefore, down-cast the container to one using the IF_HRPA_
INFTY_CONTAINER_DATA interface (Figure 7).

data lr_container_data type ref to if_hrpa_infty_container_data.

if is_ok is not initial.
  "downcast to IF_HRPA_INFTY_CONTAINER_DATA
  lr_container_data ?= lr_container.
  lr_container ?= lr_container_data->modify_primary_record( wa_p0006 ).
endif.

Figure 7: Fill the container with the necessary data


Step 6. Call the INSERT Method

The next step is to insert the data held in the container into the actual infotype table
(in this case, PA0006) using the INSERT method (Figure 8). Any error and warning
messages generated as a result of executing the method are returned via the
MESSAGE_HANDLER_OBJ object. If the INSERT method is executed successfully, the
variable IS_OK contains the value X; otherwise, it contains a space.

Step 7. Commit Changes to the Database

To commit the changes to the database (actually to the infotype table), you need to call
the FLUSH method of the interface using the reference variable LR_MASTERDATA_BL
(Figure 9). Here you pass a space as the value for the NO_COMMIT parameter.

lr_masterdata_bl->insert(
  exporting
    no_auth_check   = space
    message_handler = message_handler_obj
  importing
    is_ok           = is_ok
  changing
    container       = lr_container ).

Figure 8: Call the INSERT method

if is_ok is not initial.
  lr_masterdata_bl->flush(
    exporting
      no_commit = space ).
endif.

Figure 9: Call the FLUSH method

Note!
There are two forms of typecasting. When the static type of the source variable is more
general than the static type of the destination variable, it is known as downcasting. On the
other hand, upcasting occurs when the static type of the source reference variable is more
specific than, or the same as, the static type of the destination variable. The special casting
operator ?= can be used in both cases. For more information about casting, refer to this
SAP Help link: https://help.sap.com/abapdocu_750/en/abapmove_cast.htm.
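Applied to the container references used in this article, both directions look as follows;
this minimal sketch mirrors the assignments in Figure 7:

data lr_cont      type ref to if_hrpa_infty_container.
data lr_cont_data type ref to if_hrpa_infty_container_data.

"downcast: the static type of the source (general container) is more general
lr_cont_data ?= lr_cont.

"upcast: the source is more specific; the ?= operator works here as well
lr_cont ?= lr_cont_data.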


Once the changes are committed to the database, you need to call the static method
DEQUEUE_BY_PERNR of the CL_HRPA_MASTERDATA_ENQ_DEQ class to unlock the
employee. Here you pass the employee number as a parameter (Figure 10).

cl_hrpa_masterdata_enq_deq=>dequeue_by_pernr(
  exporting
    tclas = 'A'
    pernr = wa_p0006-pernr ).

Figure 10: Unlock the given employee's record


The complete code listing for the insertion is shown in Figure 11.

data wa_p0006 type p0006.

"fill the infotype key and data fields as shown in Figure 2
wa_p0006-pernr = '1006'.
wa_p0006-infty = '0006'.
wa_p0006-subty = '1'.
wa_p0006-endda = '99991231'.
wa_p0006-begda = '20160101'.
wa_p0006-anssa = '1'.
wa_p0006-stras = 'Burj Al Khalifa'.
wa_p0006-ort01 = 'Doha'.
wa_p0006-pstlz = '48'.
wa_p0006-land1 = 'QA'.

data message_handler_obj type ref to cl_hrpa_message_list.
create object message_handler_obj.

data: is_ok type xfeld.

cl_hrpa_masterdata_enq_deq=>enqueue_by_pernr(
  exporting
    tclas           = 'A'
    pernr           = wa_p0006-pernr
    message_handler = message_handler_obj
  importing
    is_ok           = is_ok ).

check is_ok eq 'X'.

data lr_masterdata_bl type ref to if_hrpa_masterdata_bl.
cl_hrpa_masterdata_bl=>get_instance(
  importing masterdata_bl = lr_masterdata_bl ).

data lr_container type ref to if_hrpa_infty_container.

lr_masterdata_bl->get_infty_container(
  exporting
    tclas           = 'A'
    pskey           = wa_p0006-pskey
    no_auth_check   = 'X'
    message_handler = message_handler_obj
  importing
    container       = lr_container
    is_ok           = is_ok ).

check is_ok eq 'X'.

data lr_container_data type ref to if_hrpa_infty_container_data.
if is_ok is not initial.
  "downcast to IF_HRPA_INFTY_CONTAINER_DATA
  lr_container_data ?= lr_container.
  lr_container ?= lr_container_data->modify_primary_record( wa_p0006 ).
endif.

lr_masterdata_bl->insert(
  exporting
    no_auth_check   = space
    message_handler = message_handler_obj
  importing
    is_ok           = is_ok
  changing
    container       = lr_container ).

if is_ok is not initial.
  lr_masterdata_bl->flush(
    exporting
      no_commit = space ).
endif.

cl_hrpa_masterdata_enq_deq=>dequeue_by_pernr(
  exporting
    tclas = 'A'
    pernr = wa_p0006-pernr ).

Figure 11: The complete code for insertion


Once the program is executed, a new record is created, as shown in Figure 12.

How to Delete Records from an Infotype

In this section, I show how to delete records from a standard infotype. The benefit of this
method is that all the necessary standard checks are carried out while performing the
delete operation.
In this example, I use the record created previously in infotype 0006, as shown in
Figure 12.


Figure 12: A new record is created

Step 1. Call the GET_INSTANCE Method

First, declare a reference variable pertaining to the interface IF_HRPA_MASTERDATA_BL.
Then call the static method GET_INSTANCE of the class CL_HRPA_MASTERDATA_BL.
The returned reference (containing the instance of the business logic) is stored in the
reference variable A_MASTERDATA_BL. This reference variable is used later to call the
DELETE method for deleting the relevant row from the infotype. The code is shown in
Figure 13.

Note!
For simplicity's sake, I show how to delete a single record. This code can be adapted to
suit your users' requirements. In addition, the steps for locking and unlocking employee
records (described in the previous section) are necessary, but have been omitted from this
section. Refer back to the first section for details about how to do this.

* Step 1
DATA a_masterdata_bl TYPE REF TO if_hrpa_masterdata_bl.
cl_hrpa_masterdata_bl=>get_instance(
  IMPORTING masterdata_bl = a_masterdata_bl ).

Figure 13: Call the GET_INSTANCE method

Step 2. Declare a Message Handler

Next, declare a reference to the class CL_HRPA_MESSAGE_LIST. You also create a
MESSAGE_HANDLER object using the CREATE OBJECT statement (Figure 14).

* Step 2
DATA message_handler TYPE REF TO cl_hrpa_message_list.
CREATE OBJECT message_handler.

Figure 14: The message handler declaration

Step 3. Call the READ Method


Here you declare a CONTAINER_TAB internal table (based on Dictionary type HRPAD_
INFTY_CONTAINER_TAB). Then, call the READ method of the interface IF_HRPA_
MASTERDATA_BL (Figure 15). This method allows you to read the infotype record to be
deleted. The data read is returned in the CONTAINER_TAB internal table that you already
declared. The method has a number of parameters that allow you to specify the criteria for
reading data from the infotype.

* Step 3
DATA container_tab TYPE hrpad_infty_container_tab.

a_masterdata_bl->read(
  EXPORTING
    tclas           = 'A'
    pernr           = '1006'
    infty           = '0006'
    subty           = '1'
    objps           = space
    sprps           = space
    mode            = '4'
    seqnr           = '000'
    begda           = '20160101'
    endda           = '99991231'
    no_auth_check   = space
    message_handler = message_handler
  IMPORTING
    container_tab   = container_tab ).

Figure 15: Call the READ method


As you can see in the code in Figure 15, the infotype 0006 row for employee 1006 has
subtype 1 (Permanent Address), with start and end dates of 01.01.2016 and 31.12.9999,
respectively. The most important parameter is MODE, the value of which (in this case, 4)
allows you to read the single record matching the supplied values.
Any messages generated as a result of the method call are returned via the variable
MESSAGE_HANDLER. After the method is executed, the data read is contained in the
CONTAINER_TAB internal table.

Note!
In the example code, I have supplied a value of 4 for the MODE parameter in the READ
method call. The MODE parameter may be passed a number of values in addition to the
value 4. Check the other permissible values for the parameter via the method signature to
see if they better suit your requirements.

Step 4. Fetch Specific Container Rows for Deletion

For this step, you need to know which specific row within the CONTAINER_TAB internal
table corresponds to the infotype record to be deleted. A field symbol <fs> is defined for
pointing to this line in the internal table CONTAINER_TAB (Figure 16). The READ TABLE
statement is then used to assign its reference to the field symbol <fs>. (For simplicity's
sake, let's assume that the row to be deleted is the first row of the container table; the
sketch after Figure 16 shows one way to locate a specific row instead.)

* Step 4
FIELD-SYMBOLS <fs> LIKE LINE OF container_tab.
READ TABLE container_tab INDEX 1 ASSIGNING <fs>.

Figure 16: Get a single container record
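If the READ call returns more than one record, the first row is not necessarily the one you
want. One way to locate a specific row is to inspect each container's primary record via the
PRIMARY_RECORD_REF method (used later in the update section). A minimal, hedged
sketch; the key values checked here are the ones supplied to READ, so adapt them to your
own selection:

DATA lr_row_data TYPE REF TO if_hrpa_infty_container_data.
DATA lr_rec_ref  TYPE REF TO data.
FIELD-SYMBOLS <rec> TYPE p0006.

LOOP AT container_tab ASSIGNING <fs>.
  lr_row_data ?= <fs>.                       "downcast to the data container
  lr_row_data->primary_record_ref(
    IMPORTING pnnnn_ref = lr_rec_ref ).
  ASSIGN lr_rec_ref->* TO <rec>.             "view the record as a P0006 row
  IF <rec>-subty = '1' AND <rec>-begda = '20160101'.
    EXIT.                                    "<fs> now points to the desired row
  ENDIF.
ENDLOOP.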

Step 5. Call the DELETE Method

If the read is successful, the CHECK statement for SY-SUBRC allows the program to
proceed. Then call the DELETE method of the interface IF_HRPA_MASTERDATA_BL to
delete the relevant infotype row. The row indicated by the field symbol <fs> is supplied for
the formal parameter CONTAINER of the DELETE method (Figure 17).

Step 6. Call the Flush Method to Commit Changes to the Database


Next, check the value of the IS_OK variable. If the DELETE method was successful, the
value of IS_OK becomes X. In this case, the FLUSH method is called to commit the
changes to the database. The NO_COMMIT parameter is supplied with the value SPACE
(Figure 18).


* Step 5
CHECK sy-subrc EQ 0.
DATA is_ok TYPE boole_d.

a_masterdata_bl->delete(
  EXPORTING
    container       = <fs>
    no_auth_check   = space
    message_handler = message_handler
  IMPORTING
    is_ok           = is_ok ).

Figure 17: Call the DELETE method

* Step 6
DATA messages_tab TYPE hrpad_message_tab.

IF is_ok IS INITIAL.
  message_handler->get_message_list(
    IMPORTING
      messages = messages_tab ).
ELSE.
  a_masterdata_bl->flush(
    EXPORTING
      no_commit = space ).
ENDIF.

Figure 18: Call the FLUSH method


If the deletion attempt is unsuccessful, use the GET_MESSAGE_LIST method (of the class
CL_HRPA_MESSAGE_LIST) with the reference variable MESSAGE_HANDLER to read the
messages generated as a result of the unsuccessful attempt. The messages are returned in
the MESSAGES_TAB internal table declared earlier in this step; the sketch below shows
one way to display them.
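A minimal hedged sketch for displaying the returned messages, assuming the rows of
HRPAD_MESSAGE_TAB carry the standard T100 message key fields (MSGTY, MSGID,
MSGNO):

FIELD-SYMBOLS <msg> LIKE LINE OF messages_tab.
LOOP AT messages_tab ASSIGNING <msg>.
  "output the message key; a MESSAGE statement could be used instead
  WRITE: / <msg>-msgty, <msg>-msgid, <msg>-msgno.
ENDLOOP.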
Once the code is executed, it deletes the record for employee 1006, subtype 1 with
BEGDA and ENDDA equal to 01.01.2016 and 31.12.9999, respectively, from the database.

How to Update Infotype Records


In this section, I show how to change records of a standard infotype. As an example, I
change the Street (STRAS field) of the infotype record created in the Insert section. Here
are the steps in detail.


Steps 1-3. Call the GET_INSTANCE Method, Create the Message Handler,
and Call the READ Method
The first three steps are the same as those shown in the previous section of this article.
Declare a reference variable pertaining to the interface IF_HRPA_MASTERDATA_BL,
and call the static method GET_INSTANCE of the class CL_HRPA_MASTERDATA_BL. Then
declare an internal table CONTAINER_TAB (based on the Dictionary type
HRPAD_INFTY_CONTAINER_TAB). You also declare a reference to the class
CL_HRPA_MESSAGE_LIST. Then create an object MESSAGE_HANDLER using the
CREATE OBJECT statement.
Next, you call the READ method of the interface IF_HRPA_MASTERDATA_BL. This
method allows you to read the record that is to be changed. The read data is returned in
the CONTAINER_TAB you declared earlier. The code written so far is shown in Figure 19.

* Step 1
DATA a_masterdata_bl TYPE REF TO if_hrpa_masterdata_bl.
cl_hrpa_masterdata_bl=>get_instance(
  IMPORTING masterdata_bl = a_masterdata_bl ).

* Step 2
DATA message_handler TYPE REF TO cl_hrpa_message_list.
CREATE OBJECT message_handler.
DATA container_tab TYPE hrpad_infty_container_tab.

* Step 3
a_masterdata_bl->read(
  EXPORTING
    tclas           = 'A'
    pernr           = '1006'
    infty           = '0006'
    subty           = '1'
    objps           = space
    sprps           = space
    mode            = '4'
    seqnr           = '000'
    begda           = '20160101'
    endda           = '99991231'
    no_auth_check   = space
    message_handler = message_handler
  IMPORTING
    container_tab   = container_tab ).

Figure 19: The first three steps for the program to change the infotype


As you can see, the infotype 0006 row for employee 1006 has subtype 1, with start
and end dates of 01.01.2016 and 31.12.9999, respectively. Any messages generated as
a result of the method call are returned via the variable MESSAGE_HANDLER. After the
method is executed, the data read is contained in the CONTAINER_TAB internal table.

Step 4. Find Specific Rows to be Changed

Next, you need to know which specific row within the internal table CONTAINER_TAB
corresponds to the infotype record to be changed. A field symbol <fs> is defined for
pointing to this line in CONTAINER_TAB (Figure 20). The READ TABLE statement is then
used to assign its reference to the field symbol <fs>.

* Step 4
FIELD-SYMBOLS <fs> LIKE LINE OF container_tab.
READ TABLE container_tab INDEX 1 ASSIGNING <fs>.

Figure 20: Read the first line of CONTAINER_TAB


If the read is successful, the CHECK statement for SY-SUBRC allows the program to proceed.

Step 5. Get the Existing Infotype Row and Specify the Fields to be Changed

Before calling the MODIFY method, you need two container references. The first reference
must point to the original container (containing the existing infotype record), and the
second must refer to the container of the record representing what the record looks like
after the change has been made.

* Step 5 a)
CHECK sy-subrc EQ 0.
DATA is_ok TYPE boole_d.
DATA lr_container_data TYPE REF TO if_hrpa_infty_container_data.
DATA changed_record_data TYPE REF TO if_hrpa_infty_container.

lr_container_data ?= <fs>.

Figure 21: Get the existing records container


The field symbol <fs> points to the existing container record. Using this, you form the
container for the changed record, the reference of which is stored in the
CHANGED_RECORD_DATA variable. Define two references, LR_CONTAINER_DATA and
CHANGED_RECORD_DATA, to the interfaces IF_HRPA_INFTY_CONTAINER_DATA and
IF_HRPA_INFTY_CONTAINER, respectively.
Assign the field symbol <fs> to the LR_CONTAINER_DATA variable.
To get the contents of the record currently stored in the infotype, call the
PRIMARY_RECORD_REF method using the LR_CONTAINER_DATA variable (Figure 22).
A reference to the retrieved row is stored in WA_P0006.

* Step 5 b)
DATA wa_p0006 TYPE REF TO data.

lr_container_data->primary_record_ref(
  IMPORTING pnnnn_ref = wa_p0006 ).

Figure 22: Get the existing infotype record


Next, change the value of the STRAS field of this structure to Salwa Road using field
symbol <fs1> and the ASSIGN statement.
Then call the MODIFY_PRIMARY_RECORD method for the LR_CONTAINER_DATA
variable and assign the returned reference to CHANGED_RECORD_DATA. Now
the CHANGED_RECORD_DATA reference points to the container corresponding to the
infotype 0006 record for which Salwa Road is stored in the STRAS field (Figure 23).

* Step 5 c)
FIELD-SYMBOLS <fs1> TYPE p0006.

ASSIGN wa_p0006->* TO <fs1>.
<fs1>-stras = 'Salwa Road'.
changed_record_data ?=
  lr_container_data->modify_primary_record( <fs1> ).

Figure 23: Specify the fields to be changed


Finally, the MODIFY method is called (Figure 24). It is passed appropriate values pertaining
to parameters OLD_CONTAINER (current values) and CONTAINER (modified values).

* Step 5 d)
a_masterdata_bl->modify(
  EXPORTING
    old_container   = <fs>
    no_auth_check   = space
    message_handler = message_handler
  IMPORTING
    is_ok           = is_ok
  CHANGING
    container       = changed_record_data ).

Figure 24: Call the MODIFY method


Check the value of the IS_OK variable. If the MODIFY method was successful, the value of
IS_OK becomes X. In this case, the FLUSH method is called to commit the changes to the
database. Then supply the NO_COMMIT parameter with the value SPACE (Figure 25).

* Step 6
DATA messages_tab TYPE hrpad_message_tab.

IF is_ok IS INITIAL.
  message_handler->get_message_list(
    IMPORTING
      messages = messages_tab ).
ELSE.
  a_masterdata_bl->flush(
    EXPORTING
      no_commit = space ).
ENDIF.

Figure 25: Call the FLUSH method


If the attempt is unsuccessful, use the GET_MESSAGE_LIST method (of the class
CL_HRPA_MESSAGE_LIST) with the reference variable MESSAGE_HANDLER to read the
messages generated as a result of the unsuccessful attempt. The messages are returned in
the MESSAGES_TAB internal table declared earlier in this step.


Once the code is executed, it modifies the record for employee 1006, subtype 1, with
BEGDA and ENDDA equal to 01.01.2016 and 31.12.9999, respectively, in the database.
As you can see in Figure 26, the Street and House No field is successfully changed to
Salwa Road.

Figure 26: The updated record

Rehan Zaidi is a consultant for several international SAP clients (both on site and remotely) on a wide
range of SAP technical and functional requirements, and also provides writing and documentation
services for their SAP- and ABAP-related products. He started working with SAP in 1999 and writing
about his experiences in 2001. Rehan has written several articles for both SAP Professional Journal
and HR Expert, and also has a number of popular SAP- and ABAP-related books to his credit. You
may reach Rehan via email at erpdomain@gmail.com.



Real-Time Operational Reporting with SAP HANA Live
by Dr. Bjarne Berg and Brandon Harwood
SAP HANA is a combination of hardware and
software that optimizes database technologies to
exploit the speed of fast in-memory processing and
parallel processing capabilities of multi-core systems.
SAP HANA Live (formerly known as part of the
Composite Analytic Framework, or CAF) leverages
this technology with hundreds of pre-delivered SAP
HANA Live views. These additional views enable
companies to quickly start developing real-time
operational reporting on top of transactional data
from the SAP Business Suite or SAP S/4HANA
transactional systems, without having to extract and
move the operational data to data warehouses.

Note!
Before beginning to use SAP
HANA Live, companies should
familiarize themselves with SAP
HANA studio from a modeler
perspective. They can do this
by sending users to either SAP
Education's three-day HA300
Implementation and Modeling
class or the two-day HA900 SAP
HANA Live class. These classes
are fundamental since SAP HANA
studio is the primary interface
used to administrate, model, and
maintain the SAP HANA or SAP
HANA Live systems.

It is important to understand that reporting in SAP HANA relies on three primary view types: attribute,
analytical, and calculation views. Each view fulfills
a specific function, which we explore in detail. SAP
HANA Live leverages and extends these views into a virtual data model (VDM), which then
can be used in reporting. Currently, SAP provides more than 800 pre-delivered views with
SAP HANA Live that you can install on your Business Suite or SAP S/4HANA system.
SAP HANA Live is targeted at companies that run SAP Business Suite applications on
SAP HANA. The views inside can also be combined with data from non-SAP applications.
To understand SAP HANA and SAP HANA Live views in general, you have to know the
difference between the different view types.

Attribute Views
Attribute views consist of one or more tables and are used to qualify the data in some
way. Attribute views are the basic building blocks in the SAP HANA studio modeler. These
views are reusable and are somewhat comparable to dimensions and master data in SAP
Business Warehouse (BW). Most attribute views are built on master data, but are not
technically restricted to this data.


When you build an attribute view, you probably want it to conform to many types of
transactions. For example, if you build an attribute view of customer data, you want this
view to contain enough meaningful fields so that it can be joined to other data foundations
or transactions such as sales orders, billing, and payments. In other words, the attribute
view can be reused in ways that simplify the development of analytical views.
In general, attribute views typically contain text, but they can be built to include many
different tables. For example, in Figure 1, customer data is being joined to the sales
organization and the country data tables to get a more complete view of customers.

Figure 1: An example of attribute view modeling in SAP HANA studio


You can add tables by selecting them from the navigator pane in SAP HANA studio and
dragging and dropping them into the scenario section in the Data Foundation box. You
can add table joins by clicking and dragging the line options from one table to another in
the Details section.
As you can see in the output section in Figure 1, you can select which fields you want
to include in the attribute view and you can add your own calculated columns. These
columns, in turn, are where calculations are then executed in memory on the database
instead of on the application servers or in queries. This process can lead to dramatic
performance improvements in reports that access these views. In other words, the more
calculations you push to SAP HANA views, the faster an application is able to run.
Finally, you can add filters and new key fields to make the attribute view as useful as
possible. As shown in Figure 1, all the fields that are exposed in the attribute view are
indicated with an orange circle next to the field in the Details pane.


Analytic Views
Analytic views bring transactional data and attribute views together. Typically, this involves
dragging one or more attribute views into the logical join in the Scenario pane and then
adding transactional data to the Data Foundation. After you complete this step, you can
join the attribute, the views, and the data in the Data Foundation by clicking and dragging
the fields you want to join from the various views and tables. Most people quickly find this
very intuitive. For example, in Figure 2, the products and customer attribute views are
being joined with sales orders to create analytical views.

Figure 2: An example of analytical view modeling in SAP HANA studio


Experienced BW developers will find that this is logically very similar to dimensional
modeling in an InfoCube. Once the view is validated and saved, it is instantly available
for reporting to those who have been granted access. The analytical view can then be
consumed by most BusinessObjects tools, as well as other third-party tools.

Calculation Views
The calculation view is the foundation of SAP HANA Live. Actually, SAP HANA Live is
based on VDMs. These models are composed of several reusable calculation views that
can be combined with both attribute and analytic views. These calculation views combine
several analytic views (with many fact tables) into one reportable source. For example, you
can see an illustration of the basic principles of a calculation view in Figure 3.
A fundamental benefit of this arrangement is that calculation views that make up VDMs can
be modified and extended to include custom fields and tables as necessary. For example, if
you have added a new Z-field or table in the SAP Business Suite, it will not be found in the
standard SAP HANA Live calculation views. It has to be added as an extension.


Figure 3: Schematics of a calculation view


To avoid affecting the many sub-components in the VDMs, you have to cover some basics
before beginning any enhancements. For example, when you create a new information view
within SAP HANA studio, there is a Copy From option that you should select (Figure 4).
Using this option enables you to copy a standard pre-delivered SAP HANA Live view and
then extend it.

Figure 4: Add a new calculation view


SAP offers a tool that assists with the view copying and extension process (available since
mid-2014) called the SAP HANA Live Extension Assistant. This tool automates much of the
process, thereby reducing the effort required to add additional fields to your SAP HANA view.
Using this tool enables you to copy and extend reuse and query views within SAP HANA
Live. Right-click the view to copy, and then select the Create New Extension View option


from the context menu. This action displays all of the underlying fields where you can
select those that need to be included in the reuse and query view outputs (Figure 5).

Figure 5: The SAP HANA extensibility assistant tool


Copying the view in this way assures that you are not affecting others using the standard
view and also speeds up development. This way, you only have to focus on the additions
you are making and do not have to know all the underlying complexity that may exist in
the views (some can have eight to 10 nested views and custom joins).
Another feature of SAP HANA Live is that SAP has developed more than 800 views that
are delivered with this tool. These pre-defined views can be copied and customized to
suit almost all company-specific reporting needs. Although most companies find these
views very useful, it is not unusual to still need to build or extend 10 percent to 20 percent
more views to meet the requirements of specific reporting and custom processes (i.e., for
Profitability Analysis [CO-PA]).

Virtual Data Models


VDMs take the foundational views of SAP HANA and categorize them into several view
types, including private views, reuse views, and query views.
Private views are the most basic view type that directly use SAP tables. Reuse views are
those views created by combining private views and other composite tables (like those
illustrated in Figure 3). Finally, the query views are one or more reuse views joined
together to report on various business scenarios. Normally, these query views are directly


used for reporting. As of October 2015, there were 242 query views available for reporting
straight out of the box when installing SAP HANA Live.
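Because query views are ordinary SQL-consumable calculation views, they can also be
read outside the BI tools. As a quick illustration, the following hedged ABAP sketch queries
a query view via ADBC; the package path sap.hba.ecc and the column names are
illustrative assumptions, not the actual delivered names:

TYPES: BEGIN OF ty_row,
         sales_order TYPE string,
         net_amount  TYPE string,
       END OF ty_row.
DATA lt_rows TYPE STANDARD TABLE OF ty_row.
DATA lr_rows TYPE REF TO data.
DATA lo_stmt TYPE REF TO cl_sql_statement.
DATA lo_res  TYPE REF TO cl_sql_result_set.

TRY.
    CREATE OBJECT lo_stmt.
    "query an SAP HANA Live query view directly with SQL (names are illustrative)
    lo_res = lo_stmt->execute_query(
      `SELECT TOP 10 "SalesOrder", "TotalNetAmount" ` &&
      `FROM "_SYS_BIC"."sap.hba.ecc/SalesOrderQuery"` ).
    GET REFERENCE OF lt_rows INTO lr_rows.
    lo_res->set_param_table( lr_rows ).
    lo_res->next_package( ).                 "fetch the rows into lt_rows
    lo_res->close( ).
  CATCH cx_sql_exception.
    "handle SQL errors, e.g., a missing view or missing authorization
ENDTRY.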

SAP HANA Live Browser


To make it easier to locate and understand the views, SAP has created an SAP HANA
Live browser tool (Figure 6). This web page provides information about all the content
available after you have installed SAP HANA Live on your transaction system. To access the
SAP HANA Live browser, navigate to http://<WebServerHost>:80<SAPHANAinstance>/
sap/hba/explorer/index.html in your web browser.

Figure 6: The SAP HANA Live browser


This browser tool enables you to search for available views and see how they are built. For
example, in Figure 6, the focus is on MM - Materials Management (see the left selection
panel). As you can see, this enables you to see all the available views in the different
categories. For reporting, you would be particularly interested in the query views.

Reporting on SAP HANA Live with BusinessObjects


In general, reporting in the SAP BusinessObjects suite is done on top of a universe when
consuming SAP HANA Live data. The universe layer provides a customizable semantic
layer between the SAP HANA calculation views and the end-reporting tool. In order to use
the real-time reporting capabilities of SAP HANA Live, the query views are used straight
on top of the SAP Business Suite without any data being moved to an enterprise data
warehouse (EDW) data mart (as is done in SAP BW).


It is important to note that some of the newer SAP data visualization and reporting tools
(such as SAP Lumira) have native connectivity to SAP HANA and can skip the need to
connect through a BusinessObjects universe, allowing you to connect directly to the views.
This direct connection to SAP HANA query views can allow for less latency in reporting,
which means even faster front-end execution of reports and visualizations. However,
other more traditional BusinessObjects tools normally consume the SAP HANA Live views
through a universe.
As of SAP BusinessObjects Business Intelligence (BI) 4.1, SAP has included a new option in
the Information Design Tool (IDT) to allow you to directly convert an SAP HANA view into
a deployable universe. This option dramatically increases the speed of the deployment of
the views into BI. Most of the hundreds of query views can be exposed in as little as a few
days' work, making the deployment of SAP HANA Live with BusinessObjects tools very
efficient. As we look at the step-by-step process for deploying the views into a universe, it
is important to note that this feature is only available in the 4.1 and higher release of the
IDT.

Creating an SAP HANA Universe


Begin by creating a new project folder in the IDT. Then right-click and select New > SAP
HANA Business Layer from the context-menu options (not shown). This opens the screen
shown in Figure 7.

Figure 7: Add a new business layer in the IDT for SAP HANA Live-based universes

You are prompted to give the business layer a name as well as to define the name of the
data foundation that is being created. We recommend that the suffix BL or DF be added
to the end of the business layer and data foundation names, respectively. These suffixes
help you quickly distinguish these different universe components.


Click the Next button and, in the screen that opens (Figure 8), select the applicable
connection to your SAP HANA Live system. Notice here that this connection is a .cnx type.
CNX connection types are local connections that are not stored in the BusinessObjects
connection repository. This connection type cannot be used in reports that are published
to the BusinessObjects BI 4.1 platform since users cannot be authorized. Therefore, you
need to change the connection type from .cnx to the secure connection type .cns instead.

Figure 8: Add a connection in the IDT


Select the check box next to your desired connection, in this case, DEMO.cnx, and then
click the Next button to complete the connection change process shown in Figure 9.

Figure 9: Add a secure .cns connection in the IDT


You also need to create a secure .cns connection within the BusinessObjects connection
repository (Figure 9) to enable reports to be authored and shared on this universe within
the BusinessObjects 4.1 platform. After you define this connection, you can change your
SAP HANA Live connection to point to the new .cns connection in the repository. Click the
Finish button and the screen in Figure 10 opens.


Figure 10: Connect the business layer in the IDT to an SAP HANA Live query view

After you have completed the connection change, right-click your local project and select
New HANA Business Layer from the context-menu options. Next, with this connection
established, select the SAP HANA Live view on which you would like to create the
universe. In our example, we want to connect to the pre-delivered SAP HANA Live query
view called SalesOrderQuery that we found in the SAP HANA Live browser (Figure 10).
Select the view and click the Finish button to complete the creation of the SAP HANA
business layer for your universe.

Publishing a Universe
The next step is to publish this new universe to the BusinessObjects repository so that the
developers and power users can access it directly. To complete this step, right-click the
new SAP HANA business layer and select Publish > To a Repository. After publishing is
completed, this universe, which is based on the SAP HANA Live query view, is available
for consumption in any of the SAP BusinessObjects reporting tools by power users and
developers (Figure 11).
This method of exposing SAP HANA query views is an efficient way of moving this data to
the reporting tools, but it does not come without some custom configuration. By default,
the universe settings do not enable query stripping and limit the row count that can be
returned by the universe. It is therefore very important that you change these settings in
the query properties section in the IDT (Figure 12). Select the two check boxes as shown
in the Query Options section of Figure 12 and press Enter to save.


Figure 11: Publish a universe based on an SAP HANA Live view to a BusinessObjects repository

Figure 12: Change the query settings in the IDT for SAP HANA Live view-based universes


Alternatives to Using Universes


As an alternative to using BusinessObjects universes, certain SAP reporting tools such as
Lumira, Design Studio, and Analysis for Office can natively consume SAP HANA views (a
better option than universes, in most cases). This means that a universe is not required and
that data can be consumed directly from the underlying SAP HANA system through the
SAP HANA Live views and other views you might have created or extended. The first step
to connect to an SAP HANA view with Lumira is to create a new document and select your
data source type (Figure 13).

Figure 13: Connect to SAP HANA and select a new source system

Once you have selected the SAP HANA option, you need to log in to the system with your
access credentials (user name and password). (These are the credentials you got from your
security team that monitors and administers the SAP HANA system in your organization.)
Then click the Next button, which opens the screen in Figure 14.
Next, select the SAP HANA view you want to use as the basis of your Lumira analysis
(Figure 14). In this case, use the same SalesOrderQuery view that you built the
BusinessObjects universe on previously to illustrate the query views versatility.
After you select the view, Lumira displays the dimensions and measures associated with
that view (Figure 15). By default, all dimensions and measures are selected for use in the
report. However, these dimensions and measures can be deselected, if necessary.
After you have selected the fields to use in your data visualizations in Lumira, click the
Create button to complete the addition of the new SAP HANA Live dataset.

Figure 14: Add a connection in Lumira to the SAP HANA Live views

Figure 15: Select fields from the SAP HANA Live views to use in Lumira

The dataset is now accessible within Lumira, where it can be manipulated as if it were a universe. This
means that all normal functionality, such as custom calculated fields, custom hierarchies,
and other data formatting options, are available.
It is important to note that if a calculation is used by many users and is consistent across
the organization, it is often better to add new calculated fields directly in the SAP HANA
Live view instead of in Lumira. This is because of the much faster speed offered by SAP
HANA relative to application servers.


This method of connecting to the views from SAP HANA Live streams the data and
provides access to it in real time. As a result, data is updated after every refresh directly
from the transactional data in the Business Suite on SAP HANA. Users now have the
benefits of fast performance merged with the simplicity of graphically analyzing data in
Lumira. They can also draw from a vast number of fields now exposed to the front-end
tool (Figure 16).

Figure 16: Access real-time data from SAP HANA Live views in Lumira

Modifying SAP HANA Live Views


It is possible to modify and extend any of the 800 pre-delivered views that come as a part
of SAP HANA Live. The most common example is to extend the views to add custom fields
that may be proprietary to your specific business. The first step to extend an SAP HANA
Live view is to find the base view you want to use as a basis for the extended view. This
information is found in the SAP HANA Live browser (Figure 6). Once you locate the view
on which you want to base your extensions, you are ready to begin the process.
In SAP HANA studio, right-click the package in which you want to store this custom view,
select New, and then select Calculation View from the context-menu options. In the New
Information View window that opens (Figure 17), select the Copy From check box and
click the Browse button to locate and select the view you want to use (Figure 18).


Figure 17: Create a new information view

Figure 18: Navigate to the new view


After you select your view, it is important to take note of the view's Data Category.
The Data Category needs to be set to Cube to ensure the view is visible to the
BusinessObjects reporting tools (Figure 19).

Figure 19: Set the view's Data Category


After opening the view and setting the Data Category, you can add any fields that you
want to the semantic layer. A list of available fields is presented that can be propagated to
the reporting layer. In this example, extend the SalesOrderQuery to include SalesDistrict.
First, select SalesDistrict and right-click to add this field to the output. To add the
SalesDistrict field to the output layers, right-click SalesDistrict again and select the
Propagate to Semantics context-menu option (Figure 20).

Figure 20: Add a field to an existing view


To complete the process of adding the new field, you need to add a new join to the view.
Click the join icon (boxed in red in Figure 21), then drag and drop the new SalesDistrict
field into the join.

Figure 21: Add a join

After you add this join to the view, it needs to be reconnected to the data flow and then
propagated to the semantics layer. The join definition includes the join cardinality as well
as the join type, in this case a left outer join. Since only a single field is being added to the
view, the join consists of only two items, SAPClient and SalesDistrict (Figure 22).
Now that the SalesDistrict join is complete, you need to propagate the new field to the
semantics layer. Right-click the SalesDistrict field in the join and select Propagate to
Semantics from the context-menu options (Figure 23). When prompted, click the OK
button to confirm that the new field has been propagated upward to the aggregation and
semantic layers.
The view has been extended with the new field from the transaction system. It can now be
consumed in the BusinessObjects reporting tool suite after it has been published in the
IDT to a universe or to Lumira via direct SAP HANA connections.


Figure 22: Define the join

Figure 23: Propagate the new field


SAP HANA Live is a new product offering for most organizations using SAP software.
However, the potential use of the content in the views it provides is far reaching. Some
organizations may simply choose to push most of their real-time operational reporting into
this tool, thereby reducing the need for moving all operational data into SAP BW.
It also reduces the need for faster ETL processes to get access to real-time analytics.
The data is not moved at all and stays inside the transaction system. In other words,
the EDW can become what it was intended to be: a platform for planning, budgeting,
forecasting, consolidation, summarized management reports, and what-if analysis. At the
same time, operational reporting goes back to the transaction system where it belongs.
In other words, with SAP HANA Live you can take advantage of the dramatic performance
improvements of the SAP HANA database while simplifying the reporting landscape,
thereby reducing data latency between systems and potentially shrinking the footprint of
many EDWs.
Dr. Bjarne Berg is a Principal and the Tax Data Analytics and Business Intelligence Leader in
Tax Technology Compliance (TTC) at PricewaterhouseCoopers (PwC), LLP. He is responsible for
analytics and go-to-market strategy. Dr. Berg is an internationally recognized expert in BI and a
frequent speaker at major BI and SAP conferences world-wide, with over 20 years of experience
in consulting. He regularly publishes articles in international BI journals and has written five books
on business intelligence, analytics, and SAP HANA. Dr. Berg attended the Norwegian Military
Academy, and served as an officer in the armed forces. He holds a BS in Finance from Appalachian
State University, an MBA in Finance from East Carolina University, a Doctorate in Information
Systems from the University of Sarasota, and a Ph.D. in IT from the University of North Carolina.
Brandon Harwood is a BI Consultant for Comerit, specializing in SAP BusinessObjects design,
development, and implementation, as well as developing and delivering training on several report
development tools on various platforms. Brandon also has extensive experience leveraging SAP BW
on HANA and SAP HANA Live on many client projects.



Information Management Options in SAP HANA: Smart Data Quality
by Don Loden
Data quality is always a challenge for organizations seeking a robust analytics solution. If
the data does not conform to high quality standards, then the analytical capabilities of the
solution are highly diminished.
Data quality is even more important for a real-time analytics solution based on SAP
S/4HANA. Take a recent experience I had with a company with a dashboard that is
powered by an SAP HANA ERP system. This analytics solution allowed the company to
have unprecedented access to real-time operational data. The business was very excited
about the new capabilities and the value that this would bring to the organization.
However, when the solution was demonstrated to the chief financial officer (CFO), all the
excitement around the new capabilities began to stall. The CFO had so much knowledge
of the business and history of the company that he could immediately tell that the
numbers in the dashboard were not possible. All development was halted until the data
quality issues could be remediated.
Problems like this are very real when speaking about the real-time data access that SAP
HANA provides, and they cannot be handled by legacy batch-based tools because the
solution must operate in real time. Fortunately, SAP has a fairly unique solution in SAP
HANA to help with real-time data quality issues: SAP HANA's Smart Data Quality (SDQ) tool.
Smart Data Quality in SAP HANA allows a developer to combine functionality to fully
transform data in ways that would normally be limited to SAP Data Services or other
batch-based extract, transform, and load (ETL) programs. It can perform those
transformations in real time as the records are created in a source system. Developers can
apply data-quality enrichment to person or firm (business) name and address data literally
as the data is being created in the source system. Figure 1 shows a Smart Data Quality
flowgraph that performs operations to accomplish these transformations.
This example shows a source table, Z_USA_CUSTOMERS. This table contains both the
customer name and business names, as well as the associated address of the customer.
This is a typical layout for a variety of systems as well as a good starting structure for a
reporting dimension table. I show how to use the flowgraph that is constructed in Figure
1 to cleanse the customer and customer address information to enable greater reporting
capabilities when this customer data is used in reporting as a dimension.


Figure 1: Example of a Smart Data Quality flowgraph


To accomplish this, create the flowgraph file, which is just another type of file in SAP HANA
development, by navigating to the development perspective in SAP HANA studio and
selecting a package. Right-click that package to produce a pop-up menu. The example
package in Figure 2 is the package titled donloden.

Figure 2: The donloden package


After you right-click the donloden package, click New from the pop-up menu. Then click
Other. A new window appears where you can browse for the type of object you wish to
create. The easiest method is to start typing the word flow. This starts a search in SAP
HANA that produces the selection called Flowgraph Model. This search and selection is
shown in Figure 3.

Figure 3: Create the new flowgraph


Click the Next button to create the basic Flowgraph Model that is the starting point of the
one that is shown in Figure 1. You then step through many of the tools in the tool palette
to create the Smart Data Quality flowgraph to cleanse and enrich the data.
The first step is to select the source of the customer data that is being cleansed in this
example. Select the Data Source node under the General section from the tool palette on
the right side of the screen, as shown in Figure 4.
This first step is important as you now have data to read into the flowgraph for cleansing.
Also, pay special attention to the Realtime Behavior: field, highlighted in Figure 4. If you want reads to occur in real time as data arrives in the source table, select the Realtime check box. Now, as data is created in the source system, it flows
into the flowgraph to be transformed on the way into SAP HANA. In Figure 5 you can see
where to filter the incoming data.
Figure 4: Select your source data table for cleansing

Figure 5: Filter node configuration and the Filter Expression: field location

In this example I do not filter any of the incoming customer records, as I want to show how successful SAP HANA can be with cleansing and enriching records. The Filter node is only present for reference, as it can be very useful to limit your cleansing result set. For instance, if you were only licensed for United States address cleansing, it would make sense to filter on a country field.
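A filter of that kind is entered in the Filter Expression: field as a SQL-style predicate. As a minimal sketch, assuming the source table carries a COUNTRY column (an assumed name, since no such field is shown in the example), the expression might read:

-- Keep only United States records; COUNTRY is an assumed column name.
"COUNTRY" = 'US'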
Now, I examine the heart of the SDQ cleansing flow: the Cleanse node, which is shown in detail in Figure 6.
Notice in the Cleanse node that there are three tabs: Input Fields, Output Fields, and Settings. The Cleanse node is different from many other transformation objects (under the palette on the right side of the screen) in that the developer has access to cleansed data from an SAP system as well as from various postal services around the world. To use this node, you map fields from input tables or source data, and then you select the output fields that you would like to be visible and output to the target table or system. Table 1 describes the three tabs and their functions.

Figure 6: Input Field configuration of the Cleanse node

Tab Name | Tab Description
Input Fields | Fields from the source table or system that can be mapped to various input fields for cleansing. These include address data as well as person and firm (business name) data.
Output Fields | The third-party reference data elements (provided by SAP and various postal organizations) that are returned by the Cleanse node. Note that licensing from third-party postal agencies is required for the postal organization fields functionality to operate.
Settings | Default settings that can be altered to suit many common development tasks.

Table 1: Descriptions of the Cleanse node configuration tab options


To configure the Cleanse node, you map the input fields into the Input Fields tab, as
shown in Figure 6. The fields that I mapped for this sample exercise are listed in Table 2.
Input field type | Cleanse Input field | Mapping/table field
Address | Street Address: Free Form | ADDRESS
Address | Street Address: Free Form 2 | CITY
Address | Street Address: Free Form 3 | POSTALCODE
Firm | Firm | FIRM

Table 2: Input Fields tab mappings for the Cleanse node


The Cleanse Input field is the function of the Cleanse node that you wish to use. Notice that I have used the Free Form option in the Street Address section (Figure 6). This means that any relevant country-specific street address data in any of the mapped fields from the source table is considered for cleansing. This feature allows for cleansing and enriching of data fields even if the contents of the table fields are mismatched to the field descriptions. For example, if there is address line information in the city field, the cleansing operations would still work. Now that the fields are mapped, consider which data elements to output from the Cleanse node. These are shown in the Output Fields tab in Figure 7. You select them by setting Enabled to True. The bottom pane is already open; it expands when you select the Cleanse node, as shown in the figure.

Figure 7: Output field configuration of Cleanse node


As a review, these are the cleansed data elements that I return from the Cleanse node in my SDQ flowgraph: City, Region, Postcode, and Address.

Now that the data is cleansed and enriched, it is time to output the data to an SAP HANA target table. This is performed by using a Data Sink from the General section of the tool palette on the right side of the screen in Figure 8. You drag and drop it from the palette onto the middle white canvas in the same way you use other nodes and tools. The section of the screen at the bottom dynamically changes based on what is selected.

Figure 8: Target template table configuration and schema destination


The target configuration is straightforward. You declare the target or destination schema location in SAP HANA by typing the schema name into the Authoring Schema field. In this case the data lands in the DLODEN schema, since that is what is specified in the Authoring Schema field. You also need to declare a Catalog Object to create in this schema, as the Data Sink creates a table in SAP HANA based on the fields and data types that were used as outputs in the Cleanse node. This is performed by choosing a table name and entering it into the Catalog Object field. For my example, the Catalog Object is called Z_CLN_CUST_ADDR. (You do not need to save, as SAP HANA saves with every field exit.) This creates a new table in the DLODEN schema called Z_CLN_CUST_ADDR.


To view the data in the new table after executing the flowgraph, you merely select the data via SQL as you would from any other table in SAP HANA. Figure 9 shows the table I made in this example.

Figure 9: The cleansed data in the Z_CLN_CUST_ADDR target table
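For example, a quick check of the results could be done with a query like the following; the schema and table names come from the example above, while the STD_ column name is an assumption based on the naming pattern described next:

-- Inspect the cleansed output table created by the Data Sink.
SELECT * FROM "DLODEN"."Z_CLN_CUST_ADDR";

-- Or place original and cleansed values side by side (STD_CITY is an
-- assumed column name following the STD_ naming pattern):
-- SELECT "CITY", "STD_CITY" FROM "DLODEN"."Z_CLN_CUST_ADDR";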


You can see all the new STD_ cleansed fields that are now present in the target table.
All the original source data is left unaltered for this example, and the new cleansed data
elements are side by side in the table with the original field contents. This is helpful for
debugging as well as for use as display items in downstream applications.
This completes the development of the real-time-enabled SAP HANA SDQ flowgraph. As you can see, it is a graphical development environment with many concepts that an ETL developer would find familiar. This is important, as mature SAP HANA companies are focusing increasingly on cleansing and data enrichment. These capabilities extend the development platform to perform real-time transformation tasks that set SAP HANA apart from other data-warehousing tools.
Don Loden is an information management and information governance professional with
experience in multiple verticals. He is an SAP-certified application associate on SAP EIM products.
He has more than 15 years of information technology experience in the following areas: ETL
architecture, development, and tuning; logical and physical data modeling; and mentoring on
data warehouse, data quality, information governance, and ETL concepts. Don speaks globally
and mentors on information management, governance, and quality. He authored the book
SAP Information Steward: Monitoring Data in Real Time and is the co-author of two books:
Implementing SAP HANA, as well as Creating SAP HANA Information Views. Don has also authored
numerous articles for publications such as SAPinsider magazine and Information Management
magazine. You may reach Don via email at don.loden@protiviti.com.


Enhance Your Visualization and Analytical Capabilities for Accounts Receivable Aging Using SAP Lumira
by Anurag Barua, Independent SAP Advisor
Company XYZ, a Fortune 500 global manufacturing company, has long been plagued by poor collection of accounts receivable (A/R). In other words, XYZ regularly has large open A/R balances. This situation is called aging in financial accounting jargon, and over time it leads to an understatement of income.
To analyze the data, my company recommended to XYZ that it take a manageable subset of receivables data from the customer open items table (BSID) of the production instance of the SAP ERP Central Component (ECC) 6.0 system. For the purposes of the prototype, it
made the most sense to download the data dump to Microsoft Excel. Depending on the
selection criteria you have used, you might end up with a large volume of data containing
millions of rows. Because you know your data best and since this is a proof of concept
(POC), you may want to limit it to one or two fiscal years or one or two company codes.
Now that the data is downloaded, I want to make sure that sensitive information is masked
(in this case the customer name). For the actual customer POC, I keep it intact because
analyzing by customer is one of the key criteria.

Data Preparation
Because this is a prototype, it's a good idea to reduce the complexity and noise that is normally generated when you are working with a large volume of data. My experience with such POCs is that development teams often forget the overarching goal of a POC: to prove out the possibilities. In the case of this POC, I decided to limit it to 1,000 records (e.g., 1,000 open [line] items). To make the visualization meaningful, I decided to level the playing field by identifying line items with the same payment term. I settled on N15, which, as the name suggests, means that the net amount is due within 15 days of the baseline date (and there are no discounts for early payments).
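For readers who prefer to pull such a subset with SQL rather than a manual download, a sketch of the kind of extract involved follows. BSID and the listed fields are standard, but the exact field list, the TOP 1000 sampling, and the choice to extract via SQL at all are assumptions about how you run the extract:

-- Sample 1,000 open customer line items that share payment term N15.
SELECT TOP 1000
       BUKRS,   -- company code
       KUNNR,   -- customer number
       BUDAT,   -- posting date
       BLDAT,   -- document date
       DMBTR,   -- amount in local currency
       WAERS,   -- currency key
       ZTERM    -- payment terms key
  FROM BSID
 WHERE ZTERM = 'N15';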

Step 1. Launch SAP Lumira


If you have installed SAP Lumira on your desktop, you can launch it directly. Look for the launch icon and double-click it. After you launch SAP Lumira, you are taken to the home page. Because SAP Lumira is a rapidly evolving product, the home page may look different from the one shown in Figure 1.

Figure 1: The SAP Lumira home page

Step 2. Extract Data from a Spreadsheet into SAP Lumira


Your next activity is to extract data from the source and move it to the target. In this case,
the source is the Excel spreadsheet, and the target is SAP Lumira. To extract the data from
the source, click File in the main menu of Figure 1 and then click New in the list of options
in the sub-menu. You now can select the type of data you want to extract from the screen
shown in Figure 2.

Figure 2: Add a new dataset to SAP Lumira


Because the data resides in a Microsoft Excel spreadsheet, select the first option and click
Next. This action enables you to select a file from your local drive (Figure 3).

Figure 3: Select your dataset


In the File(s) field of Figure 3, enter a descriptive name for the dataset (I blanked out the
file name to hide sensitive information). You see the entire path in this field. Notice that
SAP Lumira pulls in all the metadata as well as the entire dataset. This extract has a lot of
columns (387 to be precise), and you can get this information by clicking the Show record
count link on the right.
After you click this link, you receive a warning message about the large number of
columns. A good practice is to deselect all the fields that are not relevant to the analysis. If
you look at the BSID table in your ERP system, you notice that, other than about 15 fields,
the rest are of little value to you for the type of visual analysis you wish to do. Note also
that at this time I have not shown the records in Figure 3, but only the column names. (The
screen in Figure 3 displays numerous records. Therefore, I used a truncated version of this
screen for Figure 3.) Keep in mind these two points:

- If the first row of your dataset consists of column names, keep the Set first row as column names check box selected. Otherwise, this row is considered part of the data and affects your visualization.

- Give your dataset a meaningful name; you don't need to stick to the default.

After you select your fields (e.g., Customer ID, Amount, City, Local currency), click the
Create button (not shown) at the bottom of the screen in Figure 3. You are now in the
Prepare tab of SAP Lumira. For this prototype, there isn't anything I would recommend you do on this view, so you can click another tab (e.g., Visualize).

Step 3. Customize and Fine-Tune the Data


The data you have imported into SAP Lumira is now available for visualization. Before you do that, there is one small thing to do: your metadata (e.g., the column names) is available to you. You need to do some dragging and dropping of fields (i.e., dimensions and measures) from the left panel to your canvas on the right. Note also that SAP Lumira is smart enough to recognize which of these fields are attributes and which are numeric (key figures). This is shown in Figure 4.

Figure 4: Prepare for visualization


Notice that some of the field names sound technical
and are abbreviated. As a business user, you would
probably prefer to analyze data based on field names
that correspond closely to the way you transact
business. Some cleaning up can be done. Its very
easy in SAP Lumira. Lets take the Crcy (3) field
(Figure 4) as an example. You may want to remove it
completely from your visualization since you already
have a local currency field. Highlight the field and
click the options icon
. From the context menu
that opens, click Remove (Figure 5). If you want to
rename a field rather than delete it, click Rename
and enter the new name.
Now that you are familiar with the context menu, you
can do the other clean-ups. In my case, I renamed
a few fields and deleted some superfluous ones.
This helped me put a better context to the data. For
example, if PostalCode sounds like an odd term to
you, then you can rename it; I like to use Zip Code.
Figure 6 displays the customized and cleaned-up list
of dimensions.

Note!
A question I was asked by a few clients the first time I showed them the screen shown in Figure 5 is, "Why do the field names appear the way they do?" The answer may be obvious to a technical person, but business users may not know that SAP Lumira inherits these standard field names from the original descriptions in the spreadsheet to which the data was downloaded; this data, in turn, came from the standard SAP table BSID. SAP Lumira enables you to customize these names to meet your specific needs.


Figure 5: Remove a superfluous dimension

Figure 6: Dimensions after customizing and clean-up


Step 4. Configure Settings for Basic Visualization


SAP Lumira organizes all the attributes into the Dimensions bucket and puts all the numeric fields into the Measures bucket. I have intentionally used customer IDs instead of customer names in order to mask sensitive data. In the actual POC, I used customer names; that's obviously a much more efficient analytical technique.
You want to drag at least one dimension to the X axis and one measure to the Y axis.
Because the primary purpose of this dashboard is to analyze open A/R balances per
customer, all you need to do is click the + icon and select the respective fields. SAP Lumira
is smart enough to dump the dimensions and the measures in the appropriate axes
based on the graph type selected. Because there are many different graph types that are
available, it is up to you which ones you want to select; simply click whichever type you
want. You can use SAP Lumira for experimenting. Because this analysis is essentially twodimensional, I go with the default (bar chart). Figure 7 displays the visual that is generated.
With one glance you can tell which two of your customers are the biggest laggards, and
you can take immediate action. (When you mouse over a particular bar, the tool-tip text
displays the actual balance.)

Figure 7: Dashboard displaying balances by customer ID


From this stage onward, you can let your creative genie out of the bottle (or make full
use of your drag-and-drop and clicking skills, to be precise). Another query I like to do is
open balances by year and period. It's very simple. Because a pie chart may be a better visualization vehicle for this, I click the pie chart icon (Figure 6) and select the year and
period as Dimensions and the amounts to be the pie sectors. The graphic shown in
Figure 8 is generated.

Figure 8: Pie chart displaying outstanding amounts by year and period


With this kind of visual at your disposal, you can make some immediate conclusions.
Period 10 of 2014 has the largest outstanding balance. The next logical step for you would
be to do some more analysis. In the right corner of the screen in Figure 8, you see a bunch of icons. As you mouse over them, the purpose of each is displayed. In this case, the first two are relevant. You can click the first icon (the up and down arrows) to sort by dimensions and click the second (rank) icon to add or edit a ranking by measure. Both of them are fairly intuitive. The other recognizable icon is on the left and a little lower on the screen: it is the filter icon. The filter icon enables you to filter your results on a dimension of your choice.
Now suppose you want to add the customer dimension to your pie chart to analyze which
of your customers had the highest outstanding balances during a certain period of a
particular year. In this case, all you need to do is to click the + icon under Dimensions in
Figure 8 and select the Customer ID dimension to be added. A new pie chart is generated,
and as you mouse over this chart, you immediately know that customer 1976177 was
the biggest violator in period 10 of 2014 with an outstanding balance of approximately
$166,000. This is shown in Figure 9.
You may now want to do some standard ranking of amounts by your current dimensions
(i.e., year, period, and customer ID). Click the rank icon. You are then presented with
options for ranking. I go with the top five as shown in Figure 10.


Figure 9: Pie chart enhanced with a customer ID dimension

Figure 10: Create a ranked list of top amounts outstanding


Because Amount is the only measure, there's nothing to change in that field, and because, in my example, you are interested in all the dimensions, leave the default (ALL) selected.
After you click the OK button, the pie chart shown in Figure 11 is generated.


Figure 11: Pie chart display of the top five amounts with a combination of dimensions
You can see the legend explaining the top five segments of the pie chart conveniently
placed on the right of the dashboard. When you mouse over your pie chart, you can see
the amounts, or if you do not mind the clutter, you can set the display in the settings to
Show Data Labels.

Step 5. Configure Settings for Advanced Visualization


Now I take my analysis a level deeper. One of the key requirements for this proof of
concept is the ability to visualize the aging aspects (which in non-FI terms equates to how
long an invoice has been unpaid since its due date). XYZ's finance department is interested
in strategic trends in aging rather than a line-item-level analysis. For the latter, there are
voluminous ABAP reports that are generated out of the ECC system (which is the financial
system of record). SAP Lumira helps spot strategic trends or the worst defaulters.
In your extracted data, you have both the posting date and the document date. Therefore,
create a new calculated dimension in SAP Lumira for aging in terms of number of days
elapsed between the current date and the posting date.
Now go back to the Visualize tab in SAP Lumira in Figure 9, place your cursor in the
Posting Date field, and click the options icon. In the context menu, click the Create
Calculated Dimension option as shown in Figure 12.
After you click this option, a screen appears in which you can use a variety of functions and
operators that you then assign to your calculated dimension (Figure 13).


Figure 12: Create a new calculated dimension

Figure 13: Assign a formula to a calculated dimension


For those of you who do analysis using some type of software package, such a screen looks very familiar and self-explanatory. For those of you who are new, here are the steps you need to carry out:
1. In the Dimension Name field, enter a name for your calculated dimension.
2. Scroll up or down the Functions panel to identify the functions you want to use. In my
example, you need the CurrentDate() function. Double-click it and it appears in the
Formula calculation panel.
3. Add the necessary operator using your keyboard. Click the OK button.
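As a sketch, the finished formula is simply the current date minus the posting date. The CurrentDate() function comes from the example above; the exact way the posting date column is referenced varies by dataset and SAP Lumira version, so treat the column reference here as an assumption:

CurrentDate() - {Posting Date}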
Your calculated dimension is now added to the dimensions list. You still need to complete
a couple of steps. Aging is not really a dimension, but a measure. However, you cannot
create a calculated measure directly off dimensions. Therefore, you have to perform a
workaround by first creating aging as a dimension and then making it a measure. Position
your cursor on this new calculated dimension and click the options icon to open the
context menu as shown in Figure 14.

Figure 14: Clone a calculated measure from a calculated dimension


After you click the Create a measure option, a clone of your calculated dimension is
created, but as a numeric entity or a measure (Figure 15).

Figure 15: Assign aggregation levels for a measure

Note!
One major advantage you have with a calculated measure is the ability to select the type
of aggregation. This is often a key component of analysis. Because calculated dimensions
are considered attributes, you cannot do any aggregation. Note also that SAP Lumira does
internal date conversions, so when you created the formula for aging, you really did not have
to worry about converting the posting date and current date to a similar format. In traditional
reporting (e.g., ABAP or SAP BW), a lot of time is expended by developers in converting from
one format to another.

You are now ready to use your new calculated measure for visualization and analysis. Do not delete the original calculated dimension for aging; if you remove it, you also lose the calculated measure for aging.


The flexibility of SAP Lumira allows you to experiment and learn by trial and error. In a
more traditional and rigid application, such experiments cannot be done on the fly and
making changes would be time-consuming. For example, if you want to use aging as a
dimension instead of as a measure, and display elapsed days exactly as they are, select Aging
(Days) from your list of dimensions instead of measures. This time, use a line chart and
select measures (Amount) and dimensions (Aging (Days) and Customer ID) as shown in
Figure 16.

Figure 16: Aging analysis by days and customer

You can see the aging range as well as the various trends. You also see at first glance that there are a few customers that have invoices that have aged for a combined 1,550+ days. You also see one big cumulative amount outstanding ($165,000) for customer 1976177 for a cumulative 265 days.
Anurag Barua is an independent SAP advisor. He has 23 years of experience in conceiving,
designing, managing, and implementing complex software solutions, including nearly 18 years
of experience with SAP applications. He has been associated with several SAP implementations
in various capacities. His core SAP competencies include FI and Controlling (FI/CO), logistics, SAP
Business Warehouse (SAP BW), SAP BusinessObjects, Enterprise Performance Management, SAP
Solution Manager, Governance, Risk, and Compliance (GRC), and project management. He is a
frequent speaker at SAPinsider conferences and contributes to several publications. He holds a BS
in computer science and an MBA in finance. He is a PMI-certified PMP, a Certified Scrum Master
(CSM), and is ITIL V3F certified.


How to Use SAP BPC with SAP Cloud for Analytics
by Paul Davis, Vice President, VantagePoint Business Solutions
Graylin Johnson, Director of Enterprise Financial Analytics and Enterprise Performance
Management (EPM), Tory Burch
SAP's Cloud for Analytics is the company's software-as-a-service (SaaS) solution for the financial planning and analysis space. Built from the ground up to be delivered in the cloud, SAP Cloud for Analytics takes best practices from the SAP Business Planning and Consolidation (BPC) and SAP Business Warehouse Integrated Planning (SAP BW-IP) suites. It builds those capabilities into a tool that is focused primarily on planning.
The SAP Cloud for Analytics interface combines visualizations, dashboards, and event-based process management with embedded collaboration capabilities. With
a modicum of training, planners are able to develop custom reports, dashboards, and
storyboards. As users become more knowledgeable with the tool, they can build their own
custom models, either as private versions or as shared public versions.
For existing BPC customers, this new product release has led to questions on the use
cases for SAP Cloud for Analytics with regard to how it might supplement or integrate with
BPC. SAP Cloud for Analytics provides a solution for both new SAP customers as well as
existing BPC users, delivering analytics and visualizations without sacrificing features of the
legacy BPC solutions.
SAP Cloud for Analytics was designed around three key tenets: future oriented, people
centric, and analytics embedded. To deliver on these tenets, SAP Cloud for Analytics was
designed as a modeling tool so that users can quickly build models based on source data
files, extracts from ERP or business intelligence systems, or integration with BPC. Once
those models are built, the tool comes into play with visualizations (Figure 1).
Through drag-and-drop development tools, users can quickly define reports, charts,
graphs, and dashboards. SAP Cloud for Analytics enhances Enterprise Performance
Management (EPM) to deliver these reports and dashboards in a storyboarding paradigm.
The reports and visualizations are linked for click-by-click analysis, leading into deeper
analysis of results. These visualizations can be personalized based on user preferences
around formatting or perspective (dimensional) contexts that follow from object to object.
A collaboration panel embeds directly within the interface. Collaboration tools allow users
to communicate changes or questions on forecast direction and results to peers in real
time. The visualizations and storyboards can be embedded within the chat window as a
link (Figure 2). This feature means that users no longer need to send large spreadsheet
files or advise someone to go out to find a specific report, open it, navigate to a specific dimensional perspective, analyze the values, and email back commentary. The visualization link within the SAP Cloud for Analytics Collaboration chat window allows users to directly open the report/chart/dashboard to the precise dimensional intersection in question, with associated commentary embedded in the Collaboration pane.

Figure 1: SAP Cloud for Analytics dashboards

Figure 2: SAP Cloud for Analytics Collaboration pane

The third key feature of SAP Cloud for Analytics is the Events interface. SAP Cloud for Analytics Events takes the capabilities of BPC business process flows and converts them into a calendar visual, allowing plans to be set up by processes and also to embed time-based events, due dates, commitments, alerts, and notifications. Reports, dashboards, and visualizations can all be embedded as objects within the calendar.


Because SAP Cloud for Analytics is delivered on the SAP HANA engine, calculation
capabilities allow for quick processing and the potential to manage large data volumes and
complex logic. Initial financial intelligence has been delivered with the application and more
will continue to be added over time. SAP has delivered allocation calculations, including
standard source-target allocations, cost pools, spreading, seasonality, and cell locking.
Finally, SAP Cloud for Analytics has been released with not only a standard web interface
but also a mobile app. Notifications, events, and collaboration features are all covered in
the initial release of the SAP Cloud for Analytics mobile app.
From a technical perspective, the key to understanding SAP Cloud for Analytics is that the
underlying database is SAP HANA. At base, this hosted solution provides companies with
the performance and scalability benefits of SAP HANA without requiring the cost, time,
and effort to acquire an SAP HANA appliance (if one does not exist in your organization).
As with any SaaS solution, the acquisition of data is always a question. Many of the legacy
cloud-based EPM solutions have been challenged with acquiring and formatting data
for their systems. SAP Cloud for Analytics excels in this area, having been delivered with
data connectors for multiple source types: flat file uploads, SAP Business Warehouse (SAP
BW) model extractors, BEx query connections, SAP HANA view connectors, and BPC
connectors. In the case of the BPC and SAP HANA connectors, the integrations can be
established for one-way or bidirectional movement of data. This is the very basis of the
hybrid BPC-SAP Cloud for Analytics solution.

Use Cases for SAP Cloud for Analytics


Following are common examples of users who would benefit from implementing SAP
Cloud for Analytics.
SAP Cloud for Analytics offers these features for functional users:

- Extend participation in plan/forecast development by including line-of-business heads (revenue, margin, or headcount planning), the sales organization (revenue planning), operations/distribution/merchandising (cost planning), location/store (point of sale [POS] integration, payroll, inventory), and more.

- Many existing BPC users are looking to maintain their existing planning models in BPC for corporate planning to take advantage of BPC's mature financial intelligence. However, for detailed planning for revenue, bill-of-material costing, or headcount planning and the like, SAP Cloud for Analytics allows for autonomous yet integrated planning capabilities. These new departments/regions/companies can model forecasts based on their unique details and bring the results back into the corporate planning solution housed in BPC.

- Because of acquisition or expansion, there may be a need to extend planning to a new company or a new geography. SAP Cloud for Analytics allows these new entities to be spun up more quickly with custom modeling capabilities for their unique business needs.


SAP Cloud for Analytics offers these features for technical users:

- Global planning scenarios in which there are environmental issues with standardization of Microsoft Excel versions or challenges with network bandwidth and performance.

- BPC, version for Microsoft users who want to leverage the performance and scalability of SAP HANA can add SAP Cloud for Analytics and set up BPC-Microsoft as a source connection.

- BPC, version for NetWeaver users who also want the performance and scalability of SAP HANA, but have chosen not to make the investment in the SAP HANA infrastructure at present, either due to cost or to concerns about the time and effort to move in-place, mission-critical SAP BW/BI/BPC systems.

- Organizations with limited IT capacity to implement or manage a new planning system.

Setting Up BPC to SAP Cloud for Analytics Integration


Now we want to detail the technical considerations for setting up the integration between
BPC and SAP Cloud for Analytics.

SAP HANA Cloud Connector


First, you need to install the SAP HANA Cloud Connector. This happens to be the more involved portion of the integration process. The SAP HANA Cloud Connector is a download from SAP that must be installed on the on-premise source side of the integration; in this case, your BPC environment. Note that there are unique requirements and steps related to Java tools for BPC, version for Microsoft versus BPC, version for NetWeaver. For more information go to https://help.hana.ondemand.com/help/57ae3d62f63440f7952e57bfcef948d3.html.
SAP HANA Cloud Connector serves as the link between applications on the SAP HANA Cloud Platform (in this instance, SAP Cloud for Analytics) and existing on-premise systems (BPC for our particular use case). Once set up, the SAP HANA Cloud Connector runs as an agent in the on-premise environment, acting as a secured reverse invoke proxy between the SAP HANA Cloud Platform and the on-premise network (Figure 3).
Because SAP built the SAP HANA Cloud Connector with reverse invoke support, there is no need to configure the firewall of the on-premise solution to allow external access from the cloud system. Rather than requiring you to open ports in the firewall and use reverse proxies to establish access to the on-premise systems, the SAP HANA Cloud Connector can connect the on-premise data to SAP HANA databases sitting in the cloud directly, without any requirement to open inbound ports.
As depicted in Figure 3, the SAP HANA Cloud Connector maintains that proxy connection between on-premise data sources (be they SAP BW, SAP HANA, or BPC) and the data store on the SAP HANA Cloud Platform.


Figure 3: The SAP HANA Cloud Connector (SAP Cloud for Planning and your model on the SAP HANA Cloud Platform, accessible from any device, connect through the SAP HANA Cloud Connector to your SAP BPC system in your on-premise environment)

Set Up the Connection

Figure 4: SAP Cloud for Analytics connection creation window

With the SAP HANA Cloud Connector installed on your BPC environment, the second step is to create a connection to that environment on SAP Cloud for Analytics. Here you establish connectivity parameters, server to server. To create the connection in SAP Cloud for Analytics, you need the host name, port, and a valid authentication (a user ID and password with appropriate access rights), as shown in Figure 4. This process establishes the server-to-server connection between the data source environment and SAP Cloud for Analytics. To save the entries, click the Create button.


In the SAP Cloud for Analytics menu (the Cloud for Analytics navigation is driven from an application menu found in the upper left side), select Connection (Figure 5). A list of valid connections is provided. Click the + icon and choose Create Connection from BPC. This action returns you to the interface in Figure 4.
The third step in the process is to define the specific integration. This bidirectional connection manages the movement of data and master data between BPC and SAP Cloud for Analytics. The connection works with SAP BPC 10.x, version for Microsoft and SAP BPC 10.x, version for NetWeaver.
From the SAP Cloud for Analytics menu, go to Modeler and then choose Import Data from BPC. This action takes you to the screen shown in Figure 6. If you are creating a brand-new SAP Cloud for Analytics model from BPC, you instead choose Import Model from BPC. In both cases, the new Connection window opens.
Select the Connection to the BPC server that you just set up. Pick an environment from the list of supported environments on that connected server. Select the model to be extracted. In this case, the model represents the BPC application to be used. You can align standard, embedded, or Microsoft types to the BPC source system.

Figure 5: The SAP Cloud for Analytics menu

Set up mappings of dimensions between the BPC model and the SAP Cloud for Analytics model. By default, SAP Cloud for Analytics creates a mapping based on BPC dimension and SAP Cloud for Analytics perspective types (such as Account, Time, Organization, Version, or Generic). Alternatively, you can manually define the mappings of BPC dimensions to SAP Cloud for Analytics perspectives, even omitting some BPC dimensions from the integration.


Finally, you can apply filters to limit what data comes over from BPC to SAP Cloud for Analytics. For example, you can import a BPC model with just actual data for the EMEA region for the current and prior year by selecting those points in the three associated BPC dimensions (Version, Entity, Time). Click the filter icon next to the specific BPC dimension. A window opens allowing you to navigate through the dimensional hierarchy to select the desired BPC member (Figure 7).

Figure 6: Import data from BPC

Figure 7: Filter the import file to SAP Cloud for Analytics

As a note, in addition to pulling over dimensions, members, and data, BPC business process flows can also be imported and converted to Events in the Cloud for Analytics calendar.
This import can be achieved either as part of the full import of an SAP BPC model, or on
an event-by-event basis. To manually pull over BPC Business Process Flows to Cloud for
Analytics events, click the import event icon (Figure 8) at the top of the Events interface.

Figure 8: Click the import icon


Now fill in the parameters listed in the pop-up window that appears (Figure 9). We
presume that you have previously set up the connection to BPC.

Figure 9: Import events from BPC


Conversely, BPC Script logic, Business Rules, and Member Formulas are not migrated
directly to SAP Cloud for Analytics from BPC. They must be rebuilt in SAP Cloud for
Analytics using the syntax of this new tool.


Using this Connection interface, you can perform the following integration routines:

- Create a model from BPC: A new SAP Cloud for Analytics model is generated from the existing BPC cube, with all appropriate dimensional mappings and filtering applied. This is key to establishing a hybrid BPC-SAP Cloud for Analytics solution. From the Cloud for Analytics menu, select Modeler. Click the Import button and select Import Model from BPC. Define the parameters for the connection and click Create. To define the mapping of BPC dimensions to Cloud for Analytics perspectives, click the Edit Mapping button (Figure 10).

Figure 10: Import Model from BPC


After you click the Edit Mapping button, the system displays a list of import and export
mapping definitions (Figure 11).

Figure 11: Import and export mapping definitions


- Load from BPC: Sets up the movement of BPC data to an existing Cloud for Analytics model. This can be applied either to a model created from BPC or to a manually built model. Select Modeler from the Cloud for Analytics menu (Figure 5). Click the Import button in the upper right of the screen and select Import Data from BPC. Define the parameters for the connection and click Create. To define the mapping of BPC dimensions to Cloud for Analytics perspectives, click the Edit Mapping button. This interface is consistent with the Import Model from BPC interface.

- Load to BPC: Follows the inverse process, moving data from Cloud for Analytics to BPC. As with the load or create process, the user can filter which perspectives and members of data will be moved back to BPC. Again as an example, you can choose to send back to BPC the Scenario MyForecast, mapped to the BPC Category dimension member Working, and just the data for the Organization EMEA (and all its descendants) for the year 2016. Select Modeler from the Cloud for Analytics menu (Figure 5). Click the Export button in the upper right of the screen and select Export Data to BPC. Define the parameters for the connection and click Create. To define the mapping of BPC dimensions to Cloud for Analytics perspectives, click the Edit Mapping button. This interface is also consistent with the Import Model from BPC interface.

- Other integrations: While the hybrid approach to integrating BPC and SAP Cloud for Analytics is the focus of our article, Cloud for Analytics has the functionality to set up additional integration types (depending on the connector type), such as SAP HANA views, SAP BW objects, BEx queries, and flat files.

All integrations can be scheduled to run at regular intervals. If you navigate to Connection > Import Status, you can see where to define scheduling parameters (Figure 12). Import or export of data for SAP Cloud for Analytics can be run on an ad hoc (run now) basis or be scheduled for recurrence. Recurrence is managed through the SAP HANA engine and can be set for Daily, Weekly, or Monthly with full granularity of definition (for example, every Monday at 2:00 a.m. EST, starting 10/1/2015 and ending on 12/31/2016).

Figure 12: Import status


Also of note to people interested in a BPC-SAP Cloud for Analytics hybrid solution are the Cloud for Analytics backup and recovery capabilities. By navigating via the menu to Deployment | Export, users can back up a model to be moved between environments, including between SAP HANA environments. The backup/restore process moves the entire SAP Cloud for Analytics model or models and all associated objects, including Perspectives, Members, Events, Roles, KPI calculations, and Reports.
Paul Davis is a vice president at VantagePoint Business Solutions.
Graylin Johnson is director of enterprise financial analytics and enterprise performance
management (EPM) at Tory Burch. He is an expert in the SAP Business Planning and Consolidation
(SAP BPC) solution and has led several successful implementations for an array of industries. He
holds a BS in finance and has more than five years of experience in financial and business analysis.
Currently, Graylin is focusing on EPM innovation by developing new use cases for SAP BPC as well
as leveraging predictive analytics to increase forecasting accuracy and automation.
