Mike O'Brien
Mike.obrien@wispubs.com
+1-781-751-8799
Contents

Page 5
• The main classes and methods needed for programming infotype updates
• The detailed steps for inserting new records in standard and custom infotypes
• How to change certain fields of an existing infotype record for employees

Page 22: Real-Time Operational Reporting with SAP HANA Live
• Explore SAP HANA Live functionality and how it aids in real-time reporting
• Learn how to build custom SAP HANA views by extending pre-delivered SAP HANA Live views
• Become familiar with the SAP HANA views and their role in reporting

Page 40: Information Management Options in SAP HANA: Smart Data Quality

Page 48

Page 61
• Use cases from both technical and functional perspectives on when to implement SAP Cloud for Analytics
• How to set up a connection between SAP Cloud for Analytics and SAP Business Planning and Consolidation (BPC)
• How to set up integration routines, both for ad hoc and recurring extracts and retractions
Method name / Purpose
READ: Reads a set of records from an infotype table. For the DELETE and MODIFY methods, you first need to fetch the corresponding record.
INSERT: Inserts a new infotype record.
DELETE: Deletes an existing infotype record that was fetched earlier.
MODIFY: Changes the fields of an existing infotype record.
Note!
Any dynamic actions associated with the infotype that is being updated are not executed automatically by programs that use the steps mentioned in this article. You must program the dynamic actions to be executed manually. In addition, any customer-specific validation checks written in a function exit for enhancement PBAS0001 are not called when the methods mentioned are called.
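Taken together, these methods are always used in the same basic sequence. The following condensed sketch of that flow uses only classes and calls that appear in this article; it is an outline, not complete code:

" 1. Get the master-data business logic instance
data a_masterdata_bl type ref to if_hrpa_masterdata_bl.
cl_hrpa_masterdata_bl=>get_instance(
  importing masterdata_bl = a_masterdata_bl ).

" 2. Create a message handler for errors and warnings
data message_handler type ref to cl_hrpa_message_list.
create object message_handler.

" 3. Lock the employee, READ the records to be worked on,
"    call INSERT, DELETE, or MODIFY, and FLUSH to commit.
" 4. Finally, unlock the employee with
"    cl_hrpa_masterdata_enq_deq=>dequeue_by_pernr( ... ).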
data wa_p0006 type p0006.
Next, assign appropriate values to the fields of structure WA_P0006 (Figure 2). You must
ensure that all the required fields for the infotype are specified in this block of code. (Make
sure the wa_p0006-infty field is assigned the number of the infotype you are dealing with,
in this case, 0006.)
wa_p0006-pernr = '1006'.
wa_p0006-infty = '0006'.   " important part - do not omit
wa_p0006-subty = '1'.
wa_p0006-endda = '99991231'.
wa_p0006-begda = '20160101'.
wa_p0006-anssa = '1'.
wa_p0006-stras = 'Burj Al Khalifa'.
wa_p0006-ort01 = 'Doha'.
wa_p0006-pstlz = '48'.
wa_p0006-land1 = 'QA'.
data lr_container type ref to if_hrpa_infty_container.

lr_masterdata_bl->get_infty_container(
  exporting
    tclas           = 'A'
    pskey           = wa_p0006-pskey
    no_auth_check   = 'X'
    message_handler = message_handler_obj
  importing
    container       = lr_container
    is_ok           = is_ok ).
Note!
You must ensure that the provided key is the same one that was used for retrieving the infotype container in step 3. In this case, I filled the fields of the key as shown in Figure 2. If this is not done, a short dump occurs.
lr_masterdata_bl->insert(
  exporting
    no_auth_check   = space
    message_handler = message_handler_obj
  importing
    is_ok           = is_ok
  changing
    container       = lr_container ).
Note!
There are two forms of typecasting. When the static type of the source variable is more general than the static type of the destination variable, it is known as downcasting. On the other hand, upcasting occurs when the static type of the source reference variable is more specific than, or the same as, the static type of the destination variable. The special casting operator ?= can be used in both cases. For more information about casting, refer to this SAP Help link: https://help.sap.com/abapdocu_750/en/abapmove_cast.htm.
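A minimal sketch of both directions, using two hypothetical local classes (not from the article):

class lcl_vehicle definition.
endclass.

class lcl_truck definition inheriting from lcl_vehicle.
endclass.

data lo_vehicle type ref to lcl_vehicle.
data lo_truck   type ref to lcl_truck.
data lo_truck2  type ref to lcl_truck.

create object lo_truck.

" Upcast: the source (lo_truck) is more specific than the destination,
" so a plain assignment is sufficient.
lo_vehicle = lo_truck.

" Downcast: the source (lo_vehicle) is more general than the destination,
" so the casting operator ?= is required.
lo_truck2 ?= lo_vehicle.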
Once the changes are done and committed to the database, you need to call the static method DEQUEUE_BY_PERNR of the CL_HRPA_MASTERDATA_ENQ_DEQ class. Here you pass the employee number as a parameter (Figure 10).
cl_hrpa_masterdata_enq_deq=>dequeue_by_pernr(
  exporting
    tclas = 'A'
    pernr = wa_p0006-pernr ).
Figure 11: The complete code for insertion

lr_masterdata_bl->get_infty_container(
  exporting
    tclas           = 'A'
    pskey           = wa_p0006-pskey
    no_auth_check   = 'X'
    message_handler = message_handler_obj
  importing
    container       = lr_container
    is_ok           = is_ok ).

check is_ok eq 'X'.

data lr_container_data type ref to if_hrpa_infty_container_data.

if is_ok is not initial.
  " downcasting to IF_HRPA_INFTY_CONTAINER_DATA
  lr_container_data ?= lr_container.
  lr_container ?= lr_container_data->modify_primary_record( wa_p0006 ).
endif.

lr_masterdata_bl->insert(
  exporting
    no_auth_check   = space
    message_handler = message_handler_obj
  importing
    is_ok           = is_ok
  changing
    container       = lr_container ).

if is_ok is not initial.
  lr_masterdata_bl->flush(
    exporting
      no_commit = space ).
endif.

cl_hrpa_masterdata_enq_deq=>dequeue_by_pernr(
  exporting
    tclas = 'A'
    pernr = wa_p0006-pernr ).
Note!
For simplicity's sake, I show how to delete a single record. This code can be adapted to suit your users' requirements. In addition, the steps for locking and unlocking employee records (described in the previous section) are necessary, but have been omitted from this section. Refer back to the first section for details about how to do this.
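For reference, a minimal sketch of that locking step, assuming the enqueue call mirrors the DEQUEUE_BY_PERNR call shown in Figure 10 (the method name ENQUEUE_BY_PERNR and its exact signature are an assumption here, not taken from this article):

cl_hrpa_masterdata_enq_deq=>enqueue_by_pernr(
  exporting
    tclas = 'A'       " 'A' = employee data ('B' would be applicant data)
    pernr = '1006' ). " lock this employee before any infotype update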
Step 1
DATA a_masterdata_bl TYPE REF TO if_hrpa_masterdata_bl.
cl_hrpa_masterdata_bl=>get_instance(
IMPORTING masterdata_bl = a_masterdata_bl ).
Step 2
DATA message_handler TYPE REF TO cl_hrpa_message_list.
create object message_handler.
Step 3
DATA container_tab TYPE hrpad_infty_container_tab.
a_masterdata_bl->read(
  exporting
    tclas           = 'A'
    pernr           = '1006'
    infty           = '0006'
    subty           = '1'
    objps           = space
    sprps           = space
    mode            = '4'
    seqnr           = '000'
    begda           = '20160101'
    endda           = '99991231'
    no_auth_check   = space
    message_handler = message_handler
  importing
    container_tab   = container_tab ).
Note!
In the example code, I have supplied a value of 4 for the MODE parameter in the READ method call. The MODE parameter may be passed a number of values in addition to the value 4. Check the other permissible values that correspond to the parameter via the method signature to see if they may better suit your requirements.
For this step, you need to know which specific row within the CONTAINER_TAB internal table corresponds to the infotype record to be deleted. A field symbol <fs> is defined for pointing to this line in the internal table CONTAINER_TAB (Figure 16). The READ TABLE statement is then used to assign its reference to the field symbol <fs>. (For simplicity's sake, let's assume that the row to be deleted is the first row of the container table.)
Step 4
FIELD-SYMBOLS <fs> LIKE LINE OF container_tab.
READ TABLE container_tab INDEX 1 ASSIGNING <fs>.
Step 5

CHECK sy-subrc EQ 0.
DATA is_ok TYPE boole_d.

a_masterdata_bl->delete(
  EXPORTING
    container       = <fs>
    no_auth_check   = space
    message_handler = message_handler
  IMPORTING
    is_ok           = is_ok ).
Step 6

data messages_tab type hrpad_message_tab.

IF is_ok IS INITIAL.
  message_handler->get_message_list(
    importing
      messages = messages_tab ).
ELSE.
  a_masterdata_bl->flush(
    EXPORTING
      no_commit = space ).
ENDIF.
Steps 1-3. Call the GET_INSTANCE Method, Create the Message Handler, and Call the READ Method

The first three steps are the same as in the previous section of this article. Declare a reference variable pertaining to the interface IF_HRPA_MASTERDATA_BL, and call the static method GET_INSTANCE of the class CL_HRPA_MASTERDATA_BL. Then declare an internal table CONTAINER_TAB (based on Dictionary type HRPAD_INFTY_CONTAINER_TAB). You also declare a reference to the class CL_HRPA_MESSAGE_LIST. Then create an object MESSAGE_HANDLER using the CREATE OBJECT statement.

Next, you call the READ method for the interface IF_HRPA_MASTERDATA_BL. This method allows you to read the record that is to be changed. The read data is returned in the CONTAINER_TAB you declared earlier. The code written so far is shown in Figure 19.
Step 1
DATA a_masterdata_bl TYPE REF TO if_hrpa_masterdata_bl.
cl_hrpa_masterdata_bl=>get_instance(
IMPORTING masterdata_bl = a_masterdata_bl ).
Step 2
DATA message_handler TYPE REF TO cl_hrpa_message_list.
create object message_handler.
DATA container_tab TYPE hrpad_infty_container_tab.
Step 3
a_masterdata_bl->read(
  EXPORTING
    tclas           = 'A'
    pernr           = '1006'
    infty           = '0006'
    subty           = '1'
    objps           = space
    sprps           = space
    mode            = '4'
    seqnr           = '000'
    begda           = '20160101'
    endda           = '99991231'
    no_auth_check   = space
    message_handler = message_handler
  IMPORTING
    container_tab   = container_tab ).
Figure 19: The first three steps for the program to change the infotype
As you can see, the row of the infotype 0006 for employee 1006 has subtype 1, with start
and end dates of 01.01.2016 and 31.12.9999, respectively. Any messages generated as
a result of the method call are returned via the variable MESSAGE_HANDLER. After the
method is executed, the data read is contained in the container tab CONTAINER_TAB.
Step 4
FIELD-SYMBOLS <fs> LIKE LINE OF container_tab.
READ TABLE container_tab INDEX 1 ASSIGNING <fs>.
Step 5. Get the Existing Infotype Row and Specify the Fields to be Changed

Before calling the MODIFY method, you need two container references. The first reference must point to the original container (containing the existing infotype record) and the second must refer to the container of the record representing what the record looks like after the change has been made.
Step 5 a)

CHECK sy-subrc EQ 0.
DATA is_ok TYPE boole_d.
data lr_container_data type ref to if_hrpa_infty_container_data.
data changed_record_data type ref to if_hrpa_infty_container.

lr_container_data ?= <fs>.
The field symbol <fs> points to the existing container record. Using this, you form the container for the changed record, the reference of which is stored in the CHANGED_RECORD_DATA variable. Define two references, LR_CONTAINER_DATA and CHANGED_RECORD_DATA, to the interfaces IF_HRPA_INFTY_CONTAINER_DATA and IF_HRPA_INFTY_CONTAINER, respectively. Assign the field symbol <fs> to the LR_CONTAINER_DATA variable.

To get the contents of the currently stored record in the infotype, call the PRIMARY_RECORD_REF method using the LR_CONTAINER_DATA variable (Figure 22). The retrieved row is stored in WA_P0006.
Step 5 b)

Step 5 c)
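A minimal sketch of these two sub-steps, based on the description above (the functional call to PRIMARY_RECORD_REF returning a generic data reference, and the new street value, are assumptions here rather than the article's exact figures):

" Step 5 b): read the currently stored record into WA_P0006 and change the field
data wa_p0006 type p0006.
data lr_record type ref to data.
field-symbols <record> type any.

lr_record = lr_container_data->primary_record_ref( ).
assign lr_record->* to <record>.
wa_p0006 = <record>.
wa_p0006-stras = 'Salwa Road'.   " the new street value (see Figure 26)

" Step 5 c): form the container for the changed record
changed_record_data ?= lr_container_data->modify_primary_record( wa_p0006 ).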
Finally, the MODIFY method is called (Figure 24). It is passed appropriate values pertaining
to parameters OLD_CONTAINER (current values) and CONTAINER (modified values).
Step 5 d)

a_masterdata_bl->modify(
  exporting
    old_container   = <fs>
    no_auth_check   = space
    message_handler = message_handler
  importing
    is_ok           = is_ok
  changing
    container       = changed_record_data ).
Step 6

data messages_tab type hrpad_message_tab.

IF is_ok IS INITIAL.
  message_handler->get_message_list(
    importing
      messages = messages_tab ).
ELSE.
  a_masterdata_bl->flush(
    EXPORTING
      no_commit = space ).
ENDIF.
Once the code is executed, it modifies the record for employee 1006, subtype 1, with BEGDA and ENDDA equal to 01.01.2016 and 31.12.9999, respectively, in the database. As you can see in Figure 26, the Street and House No field is successfully changed to Salwa Road.
Rehan Zaidi is a consultant for several international SAP clients (both on site and remotely) on a wide
range of SAP technical and functional requirements, and also provides writing and documentation
services for their SAP- and ABAP-related products. He started working with SAP in 1999 and writing
about his experiences in 2001. Rehan has written several articles for both SAP Professional Journal
and HR Expert, and also has a number of popular SAP- and ABAP-related books to his credit. You
may reach Rehan via email at erpdomain@gmail.com.
Real-Time Operational
Reporting with SAP
HANA Live
by Dr. Bjarne Berg and Brandon Harwood
SAP HANA is a combination of hardware and
software that optimizes database technologies to
exploit the speed of fast in-memory processing and
parallel processing capabilities of multi-core systems.
SAP HANA Live (formerly known as part of the
Composite Analytic Framework, or CAF) leverages
this technology with hundreds of pre-delivered SAP
HANA Live views. These additional views enable
companies to quickly start developing real-time
operational reporting on top of transactional data
from the SAP Business Suite or SAP S/4HANA
transactional systems, without having to extract and
move the operational data to data warehouses.
Note!
Before beginning to use SAP HANA Live, companies should familiarize themselves with SAP HANA studio from a modeler perspective. They can do this by sending users to either SAP Education's three-day HA300 Implementation and Modeling class or the two-day HA900 SAP HANA Live class. These classes are fundamental since SAP HANA studio is the primary interface used to administrate, model, and maintain the SAP HANA or SAP HANA Live systems.
Attribute Views
Attribute views consist of one or more tables and are used to qualify the data in some
way. Attribute views are the basic building blocks in the SAP HANA studio modeler. These
views are reusable and are somewhat comparable to dimensions and master data in SAP
Business Warehouse (BW). Most attribute views are built on master data, but are not
technically restricted to this data.
When you build an attribute view, you probably want it to conform to many types of
transactions. For example, if you build an attribute view of customer data, you want this
view to contain enough meaningful fields so that it can be joined to other data foundations
or transactions such as sales orders, billing, and payments. In other words, the attribute
view can be reused in ways that simplify the development of analytical views.
Attribute views typically contain text, but they can be built to include many different tables. For example, in Figure 1, customer data is joined to the sales organization and country data tables to get a more complete view of customers.
Analytic Views

Analytic views bring transactional data and attribute views together. Typically, this involves dragging one or more attribute views into the logical join in the Scenario pane and then adding transactional data to the Data Foundation. After you complete this step, you can join the attribute views and the data in the Data Foundation by clicking and dragging the fields you want to join from the various views and tables. Most people quickly find this very intuitive. For example, in Figure 2, the products and customer attribute views are joined with sales orders to create analytical views.
Calculation Views

The calculation view is the foundation of SAP HANA Live. Actually, SAP HANA Live is based on virtual data models (VDMs). These models are composed of several reusable calculation views that can be combined with both attribute and analytic views. These calculation views combine several analytic views (with many fact tables) into one reportable source. For example, you can see an illustration of the basic principles of a calculation view in Figure 3.

A fundamental benefit of this arrangement is that the calculation views that make up VDMs can be modified and extended to include custom fields and tables as necessary. For example, if you have added a new Z-field or table in the SAP Business Suite, it will not be found in the standard SAP HANA Live calculation views. It has to be added as an extension.
from the context menu. This action displays all of the underlying fields where you can
select those that need to be included in the reuse and query view outputs (Figure 5).
As of October 2015, there were 242 query views available for reporting straight out of the box when installing SAP HANA Live.
It is important to note that some of the newer SAP data visualization and reporting tools
(such as SAP Lumira) have native connectivity to SAP HANA and can skip the need to
connect through a BusinessObjects universe, allowing you to connect directly to the views.
This direct connection to SAP HANA query views can allow for less latency in reporting,
which means even faster front-end execution of reports and visualizations. However,
other more traditional BusinessObjects tools normally consume the SAP HANA Live views
through a universe.
As of SAP BusinessObjects Business Intelligence (BI) 4.1, SAP has included a new option in
the Information Design Tool (IDT) to allow you to directly convert an SAP HANA view into
a deployable universe. This option dramatically increases the speed of the deployment of
the views into BI. Most of the hundreds of query views can be exposed in as little as a few
days' work, making the deployment of SAP HANA Live with BusinessObjects tools very
efficient. As we look at the step-by-step process for deploying the views into a universe, it
is important to note that this feature is only available in the 4.1 and higher release of the
IDT.
Figure 7: Add a new business layer in the IDT for SAP HANA Live-based universes
You are prompted to give the business layer a name as well as to define the name of the
data foundation that is being created. We recommend that the suffix BL or DF be added
to the end of the business layer and data foundation names, respectively. These suffixes
help you quickly distinguish these different universe components.
Click the Next button and, in the screen that opens (Figure 8), select the applicable
connection to your SAP HANA Live system. Notice here that this connection is a .cnx type.
CNX connection types are local connections that are not stored in the BusinessObjects
connection repository. This connection type cannot be used in reports that are published
to the BusinessObjects BI 4.1 platform since users cannot be authorized. Therefore, you
need to change the connection type from .cnx to the secure connection type .cns instead.
Figure 10: Connect the business layer in the IDT to an SAP HANA Live query view
After you have completed the connection change, right-click your local project and select
New HANA Business Layer from the context-menu options. Next, with this connection
established, select the SAP HANA Live view on which you would like to create the
universe. In our example, we want to connect to the pre-delivered SAP HANA Live query
view called SalesOrderQuery that we found in the SAP HANA browser (Figure 10). Select
the view and click the Finish button to complete the creation steps for the SAP HANA business layer for your universe.
Publishing a Universe
The next step is to publish this new universe to the BusinessObjects repository so that the
developers and power users can access it directly. To complete this step, right-click the
new SAP HANA business layer and select Publish > To a repository. After publishing is
completed, this universe, which is based on the SAP HANA Live query view, is available
for consumption in any of the SAP BusinessObjects reporting tools by power users and
developers (Figure 11).
This method of exposing SAP HANA query views is an efficient way of moving this data to
the reporting tools, but it does not come without some custom configuration. By default,
the universe settings do not enable query stripping and limit the row count that can be
returned by the universe. It is therefore very important that you change these settings in
the query properties section in the IDT (Figure 12). Select the two check boxes as shown
in the Query Options section of Figure 12 and press Enter to save.
Figure 11: Publish a universe based on an SAP HANA Live view to a BusinessObjects repository
Figure 12: Change the query settings in the IDT for SAP HANA Live view-based universes
Figure 13: Connect to SAP HANA and select a new source system
Once you have selected the SAP HANA option, you need to log in to the system with your
access credentials (user name and password). (These are the credentials you got from your
security team that monitors and administers the SAP HANA system in your organization.)
Then click the Next button, which opens the screen in Figure 14.
Next, select the SAP HANA view you want to use as the basis of your Lumira analysis
(Figure 14). In this case, use the same SalesOrderQuery view that you built the
BusinessObjects universe on previously to illustrate the query view's versatility.
After you select the view, Lumira displays the dimensions and measures associated with
that view (Figure 15). By default, all dimensions and measures are selected for use in the
report. However, these dimensions and measures can be deselected, if necessary.
After you have selected the fields to use in your data visualizations in Lumira, click the Create button to complete the addition of the new SAP HANA Live dataset. The dataset is now accessible within Lumira, where it can be manipulated as if it were a universe. This means that all normal functionality, such as custom calculated fields, custom hierarchies, and other data formatting options, is available.

Figure 14: Add a connection in Lumira to the SAP HANA Live views

Figure 15: Select fields from the SAP HANA Live views to use in Lumira
It is important to note that if a calculation is used by many users and is consistent across
the organization, it is often better to add new calculated fields directly in the SAP HANA
Live view instead of in Lumira. This is because of the much faster speed offered by SAP
HANA relative to application servers.
This method of connecting to the views from SAP HANA Live provides streaming access to the data in real time. As a result, data is updated after every refresh directly from the transactional data in the Business Suite on SAP HANA. Users now have the benefits of fast performance merged with the simplicity of graphically analyzing data in Lumira. They can also draw from a vast number of fields now exposed to the front-end tool (Figure 16).
Figure 16: Access real-time data from SAP HANA Live views in Lumira
After you select your view, it is important to take note of the view's Data Category, which needs to be set as a Cube to ensure the view is visible to the BusinessObjects reporting tools (Figure 19).
To complete the process of adding the new field, you need to add a new join to the view.
Click the join icon (boxed in red in Figure 21), then drag and drop the new SalesDistrict
field into the join.
After you add this join to the view, it needs to be reconnected to the data flow and then propagated to the semantics layer. The join type is also defined in this step. This includes the join cardinality as well as the join type, in this case a left outer join. Since only a single field is being added to the view, the join consists of only two items, SAPClient and SalesDistrict (Figure 22).

Now that the SalesDistrict join is complete, you need to propagate the new field to the semantics layer. Right-click the SalesDistrict field in the join and select Propagate to Semantics from the context-menu options (Figure 23). When prompted, click the OK button to confirm that the new field has been propagated upward to the aggregation and semantic layers.
The view has been extended with the new field from the transaction system. It can now be
consumed in the BusinessObjects reporting tool suite after it has been published in the
IDT to a universe or to Lumira via direct SAP HANA connections.
SAP HANA Live is a new product offering for most organizations using SAP software.
However, the potential use of the content in the views it provides is far reaching. Some
organizations may simply choose to push most of their real-time operational reporting into
this tool, thereby reducing the need for moving all operational data into SAP BW.
It also reduces the need for faster ETL processes to get access to real-time analytics.
The data is not moved at all and stays inside the transaction system. In other words,
the enterprise data warehouse (EDW) can become what it was intended to be: a platform for planning, budgeting,
forecasting, consolidation, summarized management reports, and what-if analysis. At the
same time, operational reporting goes back to the transaction system where it belongs.
In other words, with SAP HANA Live you can take advantage of the dramatic performance
improvements of the SAP HANA database while simplifying the reporting landscape,
thereby reducing data latency between systems and potentially shrinking the footprint of
many EDWs.
Dr. Bjarne Berg is a Principal and the Tax Data Analytics and Business Intelligence Leader in
Tax Technology Compliance (TTC) at PricewaterhouseCoopers (PwC), LLP. He is responsible for
analytics and go-to-market strategy. Dr. Berg is an internationally recognized expert in BI and a
frequent speaker at major BI and SAP conferences world-wide, with over 20 years of experience
in consulting. He regularly publishes articles in international BI journals and has written five books
on business intelligence, analytics, and SAP HANA. Dr. Berg attended the Norwegian Military
Academy, and served as an officer in the armed forces. He holds a BS in Finance from Appalachian
State University, an MBA in Finance from East Carolina University, a Doctorate in Information
Systems from the University of Sarasota, and a Ph.D. in IT from the University of North Carolina.
Brandon Harwood is a BI Consultant for Comerit, specializing in SAP BusinessObjects design,
development, and implementation, as well as developing and delivering training on several report
development tools on various platforms. Brandon also has extensive experience leveraging SAP BW
on HANA and SAP HANA Live on many client projects.
Information Management
Options in SAP HANA: Smart
Data Quality
by Don Loden
Data quality is always a challenge for organizations seeking a robust analytics solution. If
the data does not conform to high quality standards, then the analytical capabilities of the
solution are highly diminished.
Data quality is even more important for a real-time analytics solution based on SAP
S/4HANA. Take a recent experience I had with a company with a dashboard that is
powered by an SAP HANA ERP system. This analytics solution allowed the company to
have unprecedented access to real-time operational data. The business was very excited
about the new capabilities and the value that this would bring to the organization.
However, when the solution was demonstrated to the chief financial officer (CFO), all the
excitement around the new capabilities began to stall. The CFO had so much knowledge
of the business and history of the company that he could immediately tell that the
numbers in the dashboard were not possible. All development was halted until the data
quality issues could be remediated.
Problems like this are very real when speaking about the real-time data access that SAP HANA provides. This is a problem that cannot be handled by legacy batch-based tools, as the solution needs to operate in real time. Fortunately, SAP has a pretty unique solution in SAP HANA to help with real-time data quality issues: SAP HANA's Smart Data Quality (SDQ) tool.
Smart Data Quality in SAP HANA allows a developer to combine functionality to fully transform data in ways that would normally be limited to SAP Data Services or other batch-based extract, transform, and load (ETL) programs. It can perform those transformations in real time as the records are created in a source system. Developers can provide data-quality enrichment to person or firm/business data, and to address data, literally as the data is being created in the source system. Figure 1 shows a Smart Data Quality flowgraph that performs operations to accomplish these transformations.
This example shows a source table, Z_USA_CUSTOMERS. This table contains both the
customer name and business names, as well as the associated address of the customer.
This is a typical layout for a variety of systems as well as a good starting structure for a
reporting dimension table. I show how to use the flowgraph that is constructed in Figure
1 to cleanse the customer and customer address information to enable greater reporting
capabilities when this customer data is used in reporting as a dimension.
After you right-click the donloden package, click New from the pop-up menu. Then click Other. A new window appears where you can browse for the type of object you wish to create. To do this, the easiest method is to start typing the word flow. This starts a search in SAP HANA to produce the selection called Flowgraph Model. This search and selection is shown in Figure 3.
Figure 5: Filter node configuration and the Filter Expression: field location
If you were only licensed for United States address cleansing, it would make sense to filter on a country field. Now I examine the heart of the SDQ cleansing flow: the Cleanse node.
The Cleanse node is shown in detail in Figure 6.
Notice in the Cleanse node that there are three tabs: Input Fields, Output Fields, and Settings. The Cleanse node is different from many other transformation objects (under the palette on the right side of the screen) in that the developer has access to cleansed data from an SAP system as well as various postal services around the world. Fields are mapped from input tables or source data, and then you select the output fields that you would like to be visible and output to the target table or system. Table 1 describes the three tabs and their functions.
Input Fields
Fields from the source table or system that can be mapped to various input fields for cleansing. These include address data as well as person and firm (business name) data.

Output Fields
The cleansed fields that are to be made visible and output to the target table or system.

Settings
Default settings that are found here can be altered to suit many common development tasks.
To configure the Cleanse node, you map the input fields into the Input Fields tab, as shown in Figure 6. The fields that I mapped for this sample exercise are listed in Table 2.

Input field type: Mapping/table field
Address: ADDRESS
Address: CITY
Address: POSTALCODE
Firm: FIRM
As a review, these are the cleansed data elements that I return from the Cleanse node in
my SDQ flowgraph:
City
Region
Postcode
Address
Now that the data is cleansed and enriched, it is time to output the data to an SAP HANA target table. This is performed by using a Data Sink from the General section of the tool Palette on the right side of the screen in Figure 8. You drag and drop it from the right-side Palette onto the middle white canvas to use it in the same way you use other nodes and tools. The section of the screen at the bottom dynamically changes based on what is selected.
To view the data in the new table after executing the flowgraph, you merely select the data via SQL as you would from any other table in SAP HANA. Figure 9 shows the table I made in this example.
Data Preparation

Because this is a prototype, it's a good idea to reduce the complexity and noise that is normally generated when you are working with a large volume of data. My experience with such proofs of concept (POCs) is that development teams often forget the overarching goal for a POC: prove out the possibilities. In the case of this POC, I decided to limit it to 1,000 records (e.g., 1,000 open [line] items). To make the visualization meaningful, I decided to level the playing field by identifying line items with the same payment term. I settled for N15, which, as the name suggests, means that the net due is within 15 days of the baseline date (and there are no discounts for early payments).
page. Because SAP Lumira is a rapidly evolving product, the home page may look different
from the one shown in Figure 1.
Because the data resides in a Microsoft Excel spreadsheet, select the first option and click Next. This action enables you to select a file from your local drive (Figure 3).

If the first row of your dataset consists of column names, keep the Set first row as column names check box selected. Otherwise, this row is considered part of the data and affects your visualization.

Give your dataset a meaningful name; you don't need to stick to the default.

After you select your fields (e.g., Customer ID, Amount, City, Local currency), click the Create button (not shown) at the bottom of the screen in Figure 3. You are now in the Prepare tab of SAP Lumira. For this prototype, there isn't anything I would recommend you do on this view, so you can click another tab (e.g., Visualize).
and measures) from the left panel to your canvas on the right. Note also that SAP Lumira is
smart enough to recognize which of these fields are attributes and which are numeric (key
figures). This is shown in Figure 4.
Note!
A question I was asked by a few clients the first time I showed them the screen shown in Figure 5 is, "Why do the field names appear the way they do?" The answer may be obvious to a technical person, but business users may not know that SAP Lumira inherits these standard field names from the original description from the spreadsheet to which the data was downloaded, and this data, in turn, came from the standard SAP table BSID. SAP Lumira enables you to customize these names to meet your specific needs.
visualization vehicle for this, I click the pie chart icon (Figure 6) and select the year and
period as Dimensions and the amounts to be the pie sectors. The graphic shown in
Figure 8 is generated.
Figure 11: Pie chart display of the top five amounts with a combination of dimensions
You can see the legend explaining the top five segments of the pie chart conveniently
placed on the right of the dashboard. When you mouse over your pie chart, you can see
the amounts, or if you do not mind the clutter, you can set the display in the settings to
Show Data Labels.
For those of you who do analysis using some type of software package, such a screen looks very familiar and self-explanatory. For those of you who are new, here are the steps you need to carry out:
1. In the Dimension Name field, enter a name for your calculated dimension.
2. Scroll up or down the Functions panel to identify the functions you want to use. In my
example, you need the CurrentDate() function. Double-click it and it appears in the
Formula calculation panel.
3. Add the necessary operator using your keyboard. Click the OK button.
Your calculated dimension is now added to the dimensions list. You still need to complete
a couple of steps. Aging is not really a dimension, but a measure. However, you cannot
create a calculated measure directly off dimensions. Therefore, you have to perform a
workaround by first creating aging as a dimension and then making it a measure. Position
your cursor on this new calculated dimension and click the options icon to open the
context menu as shown in Figure 14.
After you click the Create a measure option, a clone of your calculated dimension is
created, but as a numeric entity or a measure (Figure 15).
Note!
One major advantage you have with a calculated measure is the ability to select the type
of aggregation. This is often a key component of analysis. Because calculated dimensions
are considered attributes, you cannot do any aggregation. Note also that SAP Lumira does
internal date conversions, so when you created the formula for aging, you really did not have
to worry about converting the posting date and current date to a similar format. In traditional
reporting (e.g., ABAP or SAP BW), a lot of time is expended by developers in converting from
one format to another.
You are now ready to use your new calculated measure for visualization and analysis. Do not delete the original calculated dimension for aging; if you remove it, you also lose the calculated measure for aging.
The flexibility of SAP Lumira allows you to experiment and learn by trial and error. In a more traditional and rigid application, such experiments cannot be done on the fly and making changes would be time-consuming. For example, if you want to use aging as a dimension instead of as a measure, and display elapsed days exactly as they are, select Aging (Days) from your list of dimensions instead of measures. This time, use a line chart and select measures (Amount) and dimensions (Aging (Days) and Customer ID) as shown in Figure 16.

You can see the aging range as well as the various trends. You also see at first glance that there are a few customers that have invoices that have aged for a combined 1,550+ days. You also see one big cumulative amount outstanding ($165,000) for customer 1976177 for a cumulative 265 days.
Anurag Barua is an independent SAP advisor. He has 23 years of experience in conceiving,
designing, managing, and implementing complex software solutions, including nearly 18 years
of experience with SAP applications. He has been associated with several SAP implementations
in various capacities. His core SAP competencies include FI and Controlling (FI/CO), logistics, SAP Business Warehouse (SAP BW), SAP BusinessObjects, Enterprise Performance Management, SAP
Solution Manager, Governance, Risk, and Compliance (GRC), and project management. He is a
frequent speaker at SAPinsider conferences and contributes to several publications. He holds a BS
in computer science and an MBA in finance. He is a PMI-certified PMP, a Certified Scrum Master
(CSM), and is ITIL V3F certified.
Because SAP Cloud for Analytics is delivered on the SAP HANA engine, calculation
capabilities allow for quick processing and the potential to manage large data volumes and
complex logic. Initial financial intelligence has been delivered with the application and more
will continue to be added over time. SAP has delivered allocation calculations, including
standard source-target allocations, cost pools, spreading, seasonality, and cell locking.
Finally, SAP Cloud for Analytics has been released with not only a standard web interface
but also a mobile app. Notifications, events, and collaboration features are all covered in
the initial release of the SAP Cloud for Analytics mobile app.
From a technical perspective, the key to understanding SAP Cloud for Analytics is that the
underlying database is SAP HANA. At base, this hosted solution provides companies with
the performance and scalability benefits of SAP HANA without requiring the cost, time,
and effort to acquire an SAP HANA appliance (if one does not exist in your organization).
As with any SaaS solution, the acquisition of data is always a question. Many of the legacy
cloud-based EPM solutions have been challenged with acquiring and formatting data
for their systems. SAP Cloud for Analytics excels in this area, having been delivered with
data connectors for multiple source types: flat file uploads, SAP Business Warehouse (SAP
BW) model extractors, BEx query connections, SAP HANA view connectors, and BPC
connectors. In the case of the BPC and SAP HANA connectors, the integrations can be
established for one-way or bidirectional movement of data. This is the very basis of the
hybrid BPC-SAP Cloud for Analytics solution.
Many existing BPC users are looking to maintain their existing planning models in
BPC for corporate planning to take advantage of BPC's mature financial intelligence.
However, for detailed planning for revenue, bill of material costing, or headcount
planning and the like, SAP Cloud for Analytics allows for autonomous yet integrated
planning capabilities. These departments/regions/companies can model forecasts
based on their unique details and bring the results back into the corporate planning
solution housed in BPC.
SAP Cloud for Analytics offers these features for technical users:
Global planning scenarios in which there are environmental issues with standardization
of Microsoft Excel versions or challenges with network bandwidth and performance
BPC, version for Microsoft users who want to leverage the performance and scalability
of SAP HANA can add SAP Cloud for Analytics and set up BPC-Microsoft as a source
connection.
BPC, version for NetWeaver users who also want the performance and scalability
of SAP HANA, but have chosen not to make the investment in the SAP HANA
infrastructure at present either due to cost or to concerns about the time and effort to
move in-place, mission-critical SAP BW/BI/BPC systems.
In the SAP Cloud for Analytics menu (the Cloud for Analytics navigation is driven from an application menu found in the upper left side), select Connection (Figure 5). A list of valid connections is provided. Click the + icon and choose Create Connection from BPC. This action returns you to the interface in Figure 4.
The third step in the process is to define the specific integration. This bidirectional
connection manages the movement of data and master data between BPC and SAP Cloud
for Analytics. The connection works with SAP BPC 10.x, version for Microsoft and SAP BPC
10.x, version for NetWeaver.
From the SAP Cloud for Analytics menu, go to Modeler and then choose Import Data from BPC. This action takes you to the screen shown in Figure 6. If you are creating a brand-new SAP Cloud for Analytics model from BPC, you instead choose Import Model from BPC. In both cases, the new Connection window opens.
Select the Connection to the BPC server that you just set up. Pick an environment from the list of supported environments on that connected server. Select the model to be extracted. In this case, the model represents the BPC application to be used. You can align standard, embedded, or Microsoft types to the BPC source system.
BPC Business Process Flows can also be imported and converted to Events in the Cloud for Analytics calendar. This import can be achieved either as part of the full import of an SAP BPC model, or on an event-by-event basis. To manually pull over BPC Business Process Flows to Cloud for Analytics events, click the import event icon (Figure 8) at the top of the Events interface.
Using this Connection interface, you can perform the following integration routines:
Create a model from BPC: A new SAP Cloud for Analytics model would be generated
from the existing BPC cube, with all appropriate dimensional mappings and filtering
applied. This is key to establishing a hybrid BPC-SAP Cloud for Analytics solution.
Click the Import button and select Import Model from BPC. Define the parameters for
the connection and click Create. To define mapping of BPC dimensions to Cloud for
Analytics perspectives, click the Edit Mapping button (Figure 10).
Load from BPC: Sets up the movement of BPC data to an existing Cloud for Analytics
model. Can be applied to either a model created from BPC, or a manually built model.
Select Modeler from the Cloud for Analytics menu (Figure 5).
Click the Import button in the upper right of the screen and select Import Data from
BPC. Define the parameters for the connection and click Create. To define mapping of
BPC dimensions to Cloud for Analytics perspectives, click the Edit Mapping button.
This interface is consistent with the Import Model from BPC interface.
Load to BPC: Follows an inverse process of moving data from Cloud for Analytics to
BPC. As with the load or create process, the user can filter which perspectives and
members of data will be moved back to BPC. Again as an example, you can choose
to send back to BPC the Scenario MyForecast, mapped to the BPC Category
dimension member Working, and just the data for the Organization EMEA (and all
its descendants), for the year 2016.
Select Modeler from the Cloud for Analytics menu (Figure 5).
Click the Export button in the upper right of the screen and select Export Data
to BPC. Define the parameters for the connection and click Create. To define
mapping of BPC dimensions to Cloud for Analytics perspectives, click the Edit
Mapping button.
This interface is also consistent with the Import Model from BPC interface.
Other integrations: While the hybrid approach to integrating BPC and SAP Cloud
for Analytics is the focus of our article, Cloud for Analytics has the functionality to
set up additional integration types (depending on the Connector type) such as SAP
HANA views, SAP BW objects, BEx queries, and flat files.
All integrations can be scheduled to run at regular intervals. If you navigate to Connection
> Import Status, you can see where to define scheduling parameters (Figure 12). Import
or export of data for SAP Cloud for Analytics can be run on an ad hoc (run now) basis or be
scheduled for recurrence. Recurrence is managed through the SAP HANA engine and can be set for Daily, Weekly, or Monthly with full granularity of definition (for example, every Monday at 2:00 a.m. EST, starting 10/1/2015 and ending on 12/31/2016).
Also of note to people interested in a BPC-SAP Cloud for Analytics hybrid solution are the Cloud for Analytics backup and recovery capabilities. By navigating via the menu to Deployment | Export, users can back up a model to be moved between environments, including between SAP HANA environments. The backup/restore process moves the entire SAP Cloud for Analytics model or models and all associated objects, including Perspectives, Members, Events, Roles, KPI calculations, and Reports.
Paul Davis is a vice president at VantagePoint Business Solutions.
Graylin Johnson is director of enterprise financial analytics and enterprise performance
management (EPM) at Tory Burch. He is an expert in the SAP Business Planning and Consolidation
(SAP BPC) solution and has led several successful implementations for an array of industries. He
holds a BS in finance and has more than five years of experience in financial and business analysis.
Currently, Graylin is focusing on EPM innovation by developing new use cases for SAP BPC as well
as leveraging predictive analytics to increase forecasting accuracy and automation.