
Date Book Unit and Topic Status

1/29/2018 HA300 Unit 8 - Optimization Done


1/30/2018 HA300 Unit 2 - Modelling Functions Done
1/31/2018 HA300 Unit 2 - Modelling Functions Done
2/1/2018 HA100 Unit 4 - Data Provisioning Done
2/1/2018 HA300 Unit 4 - HANA Live Done
2/2/2018 HA100 Unit 2 - HANA Architecture Done
2/3/2018 Practicals and Questions
2/4/2018 Practicals and Questions
2/5/2018 HA300 Unit 3 - SQL and SQLScript Done
2/6/2018 HA300
2/7/2018
2/8/2018 HA300 Unit 5 - Add. Modelling Func Done
2/9/2018 HA300 Unit 6 - Mgt & Admin of Models
2/10/2018 Practicals and Questions
2/11/2018 Practicals and Questions
2/12/2018 HA300 Unit 7 - Security Done
2/13/2018 HA100 Unit 4 - Data Provisioning
Unit 2 - HANA Architecture Done

2/14/2018 HA100 Unit 1 - Intro to HANA


Unit 3 - Modeling & Data Processing in HANA Done
2/15/2018 HA100 Unit 5 - Running Apps on HANA
Unit 6 - Continue your learning journey Done
2/15/2018 HA100 Unit 1 - Information Views Done
2/16/2018
2/17/2018 HA300 All Units
2/18/2018 HA100 All Units & Questions Methodologies
2/19/2018 Togaf certification
2/20/2018
2/21/2018 Questions and Answers
2/22/2018 Revision Solutions
2/23/2018 EXAM
Togaf certification
Unit 1 - Information Views

Lesson 1 - Introducing Information Views


Key vocabulary of modeling
* Measure
* Attribute
* Dimension
* Star Schema
* Hierarchy
* Semantics

Attributes are used to filter or aggregate measures

                 Measure                    Attribute
Definition       A numeric value            Used to describe a measure
Examples         No. of products sold       Product ID
                 Unit Price                 Product Name
                 Total Price                Customer ID
                                            Customer Name
                                            Currency
                                            Sales Org/Sorg Rgn/Sorg Ctry

Notes:
Attributes are used to filter or aggregate measures

Key objective of modeling is to create a relevant association between attributes
and measures

Dimension
To analyse measures by grouping attributes together by dimension

e.g

Dimension 1 Dimension 2
Product Sales Org.
Product Key Sales Org Key
Product Name Sales Org Name
Product Category Country
Supplier Region

Star Schema
Consists of one fact table that references one or more dimension tables
Fact table contains facts or measures as well as keys used to relate to the dimension tables

Hierarchy
Is a structured representation
Used to navigate the entire set of members with more ease
Info Views in HANA
* Used to create a Virtual Data Model

Benefits of Info Views


* All calculations are performed on the fly within the dbase engines
* Reusability - One info view can be referenced by another view
* Flexibility - Can use filters, parameters, hierarchies, etc
* Adaptability - View can adapt its behaviour to the list of columns that are selected for o/p
* Easy to transport

Design time IV                               Run time IV
* Create or modify an info. view             * Activation of a design time object creates
* Is located in HANA package content           run time objects
* Auth to access them can be defined         * Is located in the dedicated schema, _SYS_BIC
  at package level                           * Activation of one design time obj can create
                                               several run time objects

Information View Types

Graphical Calculation Views


Dimension Calculation Views
* Does not handle measures
* Any column is always considered as an attribute
* Aggregation is interpreted as a select distinct

Cube Calculation Views


* Includes measures
* Enables multidimensional reporting

Supported data source types in graphical calc. views


1. Row and column tables from the same dbase
2. Row and column tables from a diff dbase
3. Core Data Service (CDS) entities
4. SQL Views
5. SQL view from another dbase of the same HANA system
6. Graphical calc. views (only dimension calc. view)
7. Calc. view from another dbase of the same HANA system
8. Table functions
9. Decision Tables
10. Virtual tables
11. Analytical view (deprecated)
12. Script based calc. view (deprecated)

Scripted Calculation Views, and Graphical Calculation Views embedding an
SQLScript Table Function, are not supported

How to identify if a table is row or column oriented?


a) From the systems view catalog
> Check the table icon
> Open the table definition

b) Within a node in an info view that consumes the table


> Check the table icon
> Check the Properties view

c) From the SQL console

> Query the monitoring view M_TABLES
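For example, a minimal check from the SQL console (schema and table names below are placeholders):

-- IS_COLUMN_TABLE returns TRUE for column-store tables
SELECT TABLE_NAME, IS_COLUMN_TABLE
FROM M_TABLES
WHERE SCHEMA_NAME = 'MYSCHEMA'
  AND TABLE_NAME  = 'MYTABLE';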

Script Based Calc Views


* Useful to apply complex business logic
* Is now replaced by table functions
Calc. Views and HANA engines
* Index server provides several engines to process queries against calc. views

HANA Engines to execute calc. views


* Join engine
* OLAP engine
* Calculation Engine

Overview of calc. view optimization


2 step optimization
1. Single SQL stmt is generated across stacked model and this stmt is passed to SQL optimizer
2. SQL Optimizer adds additional optimizations and delegates the ops to the best
dbase execution operator

Querying Calc. Views in HANA Studio


1. Standard Data Preview
> Raw Data - All data
> Distinct Values - All attributes
> Analysis - Selected attributes and measures in tables or graphs

2. Custom SQL Query
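A custom SQL query typically addresses the generated column view in _SYS_BIC; a sketch with an invented package/view name:

SELECT "PRODUCT_NAME", SUM("AMOUNT") AS "AMOUNT"
FROM "_SYS_BIC"."mypackage/CV_SALES"    -- hypothetical calc. view
GROUP BY "PRODUCT_NAME";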

Lesson 2 - Connecting Tables

Classical Joins                HANA-Specific Joins
* Inner Join                   * Referential Join
* Left Outer Join              * Temporal Join
* Right Outer Join             * Text Join
* Full Outer Join              * Spatial Join
                               * Star Join
Inner Join
* Basic join type
* Returns rows when there is at least one match in both sides of the join

Left Outer Join


* Returns all rows from the left table, even if there are no matches in the right table

Right Outer Join


* Returns all rows from the right table, even if there are no matches in the left table

Full Outer Join


* Combines both left and right outer joins
Result set is composed of:
> Rows from both tables that are matching on joined columns
> Result from left table with no match in the right table
> Result from right table with no match in the left table

Note:
* Full outer join can be used only in calc view in standard join and star join nodes
* In star join node, full outer join can be defined only on one dimension
calc. view and this view must appear last in the star join node
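In plain SQL the four classical joins differ only in the join keyword; a sketch with invented tables:

SELECT o."ORDER_ID", c."CUSTOMER_NAME"
FROM "ORDERS" AS o
INNER JOIN "CUSTOMERS" AS c
  ON o."CUSTOMER_ID" = c."CUSTOMER_ID";
-- swap INNER for LEFT OUTER, RIGHT OUTER or FULL OUTER; the ON clause stays the same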

Referential Join
* Semantically an inner join
* It assumes that the left table always has a matching entry in right table
* It is an optimized or faster inner join
* It does not check the right table if no fields are requested from the right table
Note:
Since SPS11, the referential join is supported in all join node types of calc. views;
earlier it could be used only in the Star Join node

Text Join
* Used to join a text table to a master data table
* Acts as a left outer join and can be used with SAP tables where the language column (SPRAS) is present
* Enables HANA to handle the translation of attribute labels
* Depends on the session language of the user

Temporal Join
* Used to find matching records from two tables, based on a date
* Useful to manage time dependent attributes
* Temporal conditions can be added to the following join types and view types:

Info View           Node                         Join Type
Calculation View    Join Node / Star Join Node   Inner Join
Analytical View     Star Join Node               Inner Join / Referential Join

* Temporal conditions can be defined on columns of the following data types:


> Timestamp
> Date
> Integer

Star Join
* Is a node type rather than a join type
* Used to structure data in star schema
* Fact table/data source can be any type of input node
* Only dim calc. views are allowed as input nodes for dimensions
* Type of join between fact and dimension tables within a star join node
can be defined in the star join node
* The available joins are as follows;
> Referential join
> Inner Join
> Left Outer Join
> Right Outer Join
> Full Outer Join
> Text Join

Spatial Join
* Enables modelers to use the native spatial capabilities of HANA in the graphical modeling tool

Join Cardinality
* Defines how two tables joined together are related, in terms of matching rows

Union Vs Join
* Avoid joins as they are costly from a computing standpoint
* Union node is preferred and provides much better performance

Lesson 3 - Create Dimension Calculation Views

Dimension Calc. Views


* Used to give context
* Used to select a subset of master data rows and columns
* Are reusable objects

Shared Dimension Calc. Views


* One dimension calc. view can be used in one or more calc. views of type cube
Supported types of nodes
* Supports all types of nodes except star join node
* Mostly used nodes are join and projection

Calculated Attributes
* Calculations can be an arithmetic or just a character string manipulation
* Calculated columns support non-measure attributes as part of the calculation
* Nested calculated column is possible (one calc. column is based on another calc column)
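A nested calculated column sketch (column names invented; in SQL expression syntax, || concatenates strings):

FULL_NAME = "FIRST_NAME" || ' ' || "LAST_NAME"
GREETING  = 'Dear ' || "FULL_NAME"    -- nested: built on the first calc. column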

Time-Based Dimension Calculation Views


* Exclusively to time data

Gregorian                                  Fiscal
* Made up of years, months, days           * Organized into fiscal years and periods
* Adjust granularity level to              * Can define several fiscal variants
  hours, mins and secs                     * M_FISCAL_CALENDAR table
* M_TIME_DIMENSION_* tables                * Located in _SYS_BI schema
* Located in _SYS_BI schema

Base Table Alias


* When adding multiple instances of the same table to a view, SAP HANA studio automatically proposes an alias
* It is possible to modify the alias name in the table properties

Hidden Columns
* Hide columns in Semantics node
* Hidden columns will not be exposed to client tools
* Only a column that is added to the output of the top node of a
calculation view can be defined as a Label Column

Lesson 4 - Using Measures in Calc. Views

Measures in Calculation Views


2 graphical modelings that handle measures
* Calc view of type cube
* Calc view of type cube with star join

Possible Nodes Types


Projection Node
* To filter measures based on attribute values
* To extract some columns from any data source
* Calculation before aggregation is not offered in a calc view
(To check: how to apply a filter on a projection node?)

Join Node (start from page 81)


* A join connects only two data sources
* Use stacked join nodes to join more than two data sources
* When you use join, you have to mention the join type

Spatial Join Type


* Supported in Join node
* Used in graphical calc. views
* Scripted based calc. views and table functions are avoided in
some graphical spatial modelling

Dynamic Join
* For more complex scenarios where you want to present two diff measures side-by-side
* A Dynamic Join can be defined only with multi-column joins

Benefits of a dynamic join

* Only the join columns requested in the query are brought into context
and play a part in the join execution.

Note: With a dynamic join, if none of the joined columns are requested by the client
query, you get a query run-time error.

Star Join Node


* Enables joining dimensions with the fact table
* Inputs allowed to a star join node: any lower node, and CVs of type dimension
* Logically creates a star schema
* Supports referential join type

Union Node
* If you want to combine multiple result sets with identical structures into one
result set, you can use a union node
* Mapping of source to target is required and will allow you to adapt structural differences
* A Union node is used to combine two or more data sets to a common set of columns.

Standard Union                              Union with Constant Values
* Present measures in a single column       * Diff measures in diff columns
  and diff them using attributes            * Use "Manage Mappings" to set a constant value

Union Node Pruning
* Optimize the execution of a union node by specifying conditions in a dedicated
table which is called the Pruning Configuration Table

Aggregation Node
* Gives further control on aggregations in graphical calc. views
* Calculated columns can be assigned to aggregation nodes
* These calculations happen after aggregation
* Used to apply aggregate functions on measures, based on one or more attributes
* Is similar to a GROUP BY clause
* Supports MIN, MAX, SUM, COUNT, Average, Variance and Std Deviation
* Default aggregation is SUM

Attribute Vs. Aggregated Column


* If a column is added with Add to output, no aggregation is performed
* If it is added as an Aggregated column, then an aggregation function must
be specified; by default the function is SUM

Since SPS12, we can convert an output column into an aggregated column
directly in the output pane

Controlling the Behavior of the Aggregation Node


To build more flexible models, use
* Keep Flag
* Transparent Filter

Keep Flag:
* Setting the Keep Flag property to true for an attribute forces the calculation
to be triggered at the relevant level of granularity

Transparent Filter
* Setting the column's Transparent Filter property to True will eliminate the column from the GROUP BY clause

This flag is required in the following situations:


• When using stacked views where the lower views have distinct count measures.
• When queries executed on the upper Calculation View contain filters on columns that are
not projected.
Rank Node
* To return the Top or bottom n values

Main settings of a rank node are as follows;


Sort Direction
Threshold
Order By
Partition
Dynamic Partition Elements
Generate Rank Column

Rank Column in rank node

Attribute                              Measure
* If removed, just provides the sum    * Offers more flexibility

Enhance Flexibility in CV
* Switching node types without losing the link to the data source/lower and upper nodes
* Replacing a Data Source without losing the output columns, calc. columns
* Insert an additional calculation node (or remove an existing one) without losing
the link between columns end-to-end.
* Extract Semantics -
* Propagate Semantics
* Preview O/P of an intermediate node
* Map I/P parameters b/w nodes

Caching View Results


* Cache the results of a calc. view
* Cache views at dbase level and at individual view level

General Settings of Cache


* To be turned on by Admin
* Key options; min duration of a cache
* Possible to define a size limit of a cache

Specific Cache Settings at the View Level


Hourly, Daily and Transactional

Data Lineage
* Enables you to track the origin of a column in a calculation scenario
* Identify the column that has been renamed

Lesson 5 - Understanding Deprecated Graphical Views

Use of attribute and analytical views is no longer recommended


* Use cases that are available in deprecated views but not in calc. views

Derived Attribute Views


* Derive an attribute view so as to use it more than once in the
star join of a single Analytic View
* Derived view acts as a ref to the base attribute view

Creating Derived attribute views


* They open in read only mode
* The only field that can be modified is its description
* Derived AT_View is identical to its base AT_View

Scenario in AT View and AN View


* AT and AN views support only simple calculation scenarios

Attribute Views
* Data foundation in AT_view can contain one or several tables
* Joins between these tables are defined within the data foundation
* Data foundation can join more than 2 tables if need be
* Can define calc. attribute columns in data foundation

Analytic Views
* Data foundation can have one or several tables but measures can originate
from only one table

Functional Limitations in Data Foundation of Analytic Views


1. Limited types of Data Source
* Main source type is column tables
2. Facts originate from one table

Calculated Columns in Analytic View


* Calculated columns in AN_View are defined in Star Join Node
* Calculated columns are by default calculated after aggregation
* Use the option "Enable Calculation before Aggregation" to calculate before aggregation

Join Behaviour
(Notes: Fact table; Data Preview preference screen)

Left Table - Fact Table: Sales Order        Right Table - Dim Table: Customer

S.No  ORDER_ID  AMOUNT                      S.No  C_ID  STATE  AGE
1     1         100                         1     1     MI     10
2     2         100                         2     2     MI     11
3     3         100                         3     3     TX     12
4     4         100                         4     4     TX     13
5     8         100

State
S.No STATE SNAME
1 MI MICHIGAN
2 AL ALABAMA
Sales Order Value
1 1000
2 200
3 300
4 109
5 18
6 10
7 12

Table 2
Region  Product  Value         Aggregated o/p: Region  Total Sales
TN      HT1000   1000                          TN      3000
TN      HT2000   2000                          AP      5000
AP      HT1000   2000
AP      HT2000   3000

Tab 1                  Tab 2                  o/p

No  name  age          No  name  sex
1   aa    12           1   aa    m
2   bb    10           2   bb    f
3   cc    13           3   dd    m
Star Join is always deployed with an aggregation node on top of it.
The Star Join is deployed first with a series of joins, and then the aggregation logic is
deployed. This allows the view to aggregate the measures dynamically, based on the attribute
columns you select to include in the result set.
Practical - Union node in HANA Studio?

How to differentiate the union as standard union or union with constant?

Scenario needs to be checked in system


Modeling Functions

Hierarchies
Level Hierarchies: plain hierarchy; heterogeneous fields with diff data types
* A level hierarchy requires each node in a separate column
* Is rigid in nature, and the root and child nodes can only be accessed in a defined order

Parent-Child Hierarchies: distinct fields, primary key


* A parent child hierarchy requires separate columns for parents and children
* Parent-child hierarchy columns usually contain IDs or key fields instead of plain text
* Distinct fields define the parent-child relation
* Parent and child fields usually have the same data type
* Recursive data structure defines the hierarchy

Time Based Hierarchy

Year
Quarter
Month
Week
Days

Hierarchies are created in Semantic Node


Hierarchies are not visible in Studio as they are in client tools that support MDX

Node Styles - used to define the output format of a node id


Level Type - Specifies the semantics for the level attributes
Regular level type indicates the level does not require any special formatting
Hierarchy Member Order is used to sort members

Parent-Child Hierarchy
Step 1 - Define the nodes making up the hierarchy

Child Column - Child attribute
Parent Column - Parent attribute

Multiple parents and compound parent-child definitions are not supported by MDX

Aggregate All Nodes
* Is interpreted only by the HANA MDX engine
* Defines if intermediate nodes should be added to the root node
* If there is no requirement for aggregation, set it to FALSE so the execution is faster

Default Member
* Identifies the default member of the hierarchy
* If not mentioned, all are default members
Orphan Nodes
* Nodes without a parent
* Handling options:
> Orphan nodes are defined as hierarchy root nodes
> On encountering an orphan node, the view throws an error
> Orphan nodes are ignored
> Orphan nodes fall under a stepparent node ID as defined on the node tab

Root Node Visibility


Use this option to define an additional root node

Cycles
Not preferred

Time dependent hierarchies

* For more complex hierarchy data
* Info is reliant on time
* Applicable only to calc views and for parent-child hierarchies

* The "Enable Time Dependency" check box supports Valid From and Valid To
* Source data should contain valid-from and valid-to columns

Drill Down Enablement


* By default MDX shows only key fields
* If we set the "Drill Down with flat hierarchy (MDX)" option, then even a non-key field
can be used for reporting using MDX

Calc View of type Dimension: Hierarchy
Calc View of type Cube with Star Join: Hierarchy is available as a Shared Hierarchy

Use SQL to query this shared hierarchy column

Enable for SQL Access in Calc View of type cube with star join:
Option 1 - Enable for all shared hierarchies
Option 2 - Enable for the specified shared hierarchies

BI Client Tools that support Hierarchies

Lesson 2 - Creating Restricted and Calculated Columns

Restricted Columns
* Restrict to a subset of a column
* The restricted column is restricted based on one or more attributes
* The restriction cannot be based on a column defined as a Measure in the semantics
* A restricted column can be created in an aggregation node of a calculation view.

Setting the Restriction


* Restrictions for a restricted column can be applied to multiple columns
* Restrictions on the same column are combined using OR
* Restrictions on different columns are combined using AND
(Need to check Restricted Column in system)

The SQL language can be used in the following expressions:


* Restricted Column
* Filters
* Calculated Columns
* Variables & Input Parameters

Calculated Columns
* You can define a calculated column as measure or attribute
* Ensure that you create the calculated column in a node prior to where aggregation
is performed.
* Minimize calculations before aggregation

Storing Value of a Calculated Column


* Create an ad-hoc SQL artifact that updates and inserts the calc. col into a table
> Use a write enable procedure or a table function
* Use ETL tool (SLT or DS) --> pre calculate columns during data provisioning phase

Calculated Columns in Analytic View


* In analytic View, calculation is performed in Star Join node not in Data Foundation
* Has an additional option to "Enable Calculation before aggregation"
Explicit Language Def in Calc Column;
* SQL - Uses SQL
* Column Engine - Uses SQLScript

Plain SQL is better optimized than SQLScript

Lesson 3 - Filtering Data

Using Filter Options


* Reduce data transfers b/w engines by
> Creating Filters
> Querying using a WHERE clause
> Creating client dependent views
> Restricting data when modeling using domain fix values

Filter                                     WHERE Clause
* Defined at design time on a table        * Defined at runtime in the SQL query
* Applies to data before the query         * Is applied on the result set of a query
  starts to execute                        * Is slower than filters
* Is faster than WHERE clause

Filters
* If the client id is not set in the user profile, then in a calc view, no data is retrieved
* If the client id is not set in the user profile, then in an analytic view, data from
all clients is retrieved (cross client)
* The $$CLIENT$$ system variable picks up the client value stored in the user's profile

Session Dependent Functions


* Domain Fix Values - Considered for ECC
* Domain fix value tables are located in the same schema as the replicated tables.

SESSION_CONTEXT('CLIENT') - Returns client value based on current user profile


SESSION_CONTEXT('LOCALE') - Returns Session's language in POSIX format
SESSION_CONTEXT('LOCALE_SAP') - Returns Session's language following SAP internal format
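For example (MANDT is the usual SAP client column; the filter expression is a sketch):

SELECT SESSION_CONTEXT('CLIENT') FROM DUMMY;     -- inspect the session client
-- view filter expression sketch: "MANDT" = SESSION_CONTEXT('CLIENT')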

Domain Fix Values


* Data concepts in ECC

Modelling using domain fix values


* DFV can be used for several fields
* Tables DD07L, DD07T
* Domain fix value tables are located in the same schema as the replicated tables

Lesson 4 - Using Variables and Input Parameters


Variable - WHERE Clause
Input Parameter - PLACEHOLDER

Variables and Input Parameters


* To create more dynamic info. models
* Variable - Bound to attributes and are used for filtering.
* Input Parameters - Used in formulas

Variable
* When using variable, a WHERE clause is added to the query
* Used to filter data at runtime
* Input either manually or from drop-down list
Single
Interval
Range

Input parameters
* Input parameters are compulsory for evaluation of an info model
* Reserve word PLACEHOLDER is used in SQL
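A sketch of the PLACEHOLDER syntax when querying a calc view (package, view and parameter names invented):

SELECT "REGION", SUM("SALES") AS "SALES"
FROM "_SYS_BIC"."mypackage/CV_SALES"
     ('PLACEHOLDER' = ('$$IP_TARGET_CURRENCY$$', 'EUR'))
GROUP BY "REGION";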

Creating Variable

1. Identifying the attribute in View/Table


2. Mention the attribute Selection Type (Interval, Range, single Value)
3. Multiple entries
4. Default value can be mentioned and/or entry at runtime

Variables
* Attribute selection is important for performance
* Variables can be used in the expressions of calculated columns, but only as a single
value; multi-value variables cannot be used in the expression
* Variables are created in Semantic nodes
* One variable can be assigned to multiple attributes

Input Parameters
* Input parameters to define internal parameterization of the view
* Direct Currency/UoM, reference tables are required
* Hierarchies can be passed as input variable

Note:
* Expression of calculated columns and expression of filters in projections
requires multi-value input
* Use the "Input Enabled" option when passing input parameters of type
scalar function/procedure, so that the user can modify the input

* Variables and input parameters support multiple default values since SPS10
* Input parameters can be called in calculated columns

Lesson 5 - Implementing Currency Conversion

Currency Conversion
* Possible both in graphical and Script
* The function for currency conversion in SQLScript is CONVERT_CURRENCY,
which is based on column engine plan operators (CE functions)

* Currency conversion is associated with TIME


* Currency tables of ERP should be available in HANA to enable currency conversion

* Since SPS12, currency conversion can be applied in any aggregation node of a calc. view
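A hedged sketch of the SQLScript function mentioned above (parameter values, schema and client are assumptions; verify the exact signature on your release):

SELECT CONVERT_CURRENCY(
         AMOUNT         => "AMOUNT",
         SOURCE_UNIT    => "CURRENCY",
         TARGET_UNIT    => 'EUR',
         REFERENCE_DATE => CURRENT_DATE,
         SCHEMA         => 'ERP_SCHEMA',   -- schema holding the TCUR* tables
         CLIENT         => '100'
       ) AS "AMOUNT_EUR"
FROM "SALES";                              -- hypothetical fact table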

TCUR Schema

Steps for currency conversion


1. Assign the semantic type amount with currency to measure
2. Enable for conversion
3. Select Target currency
4. Define source currency
5. Select conversion date and exchange rate type

Ways to assign Semantic Type to a measure


1. In the output pane of an aggregation node
2. Can be done in the star join node of a calc view of type cube with star join
3. If the conversion is set at the uppermost node of a calc. view then the semantics
can be assigned in the semantic node

The Result currency column is never exposed to client tools. It is only available to
other SAP HANA views, where it can be used in additional calculations.

Decimal Shift and Rounding


Decimal shift back is necessary if the results of the calculation views are interpreted in ABAP.

Lesson 6 - Creating Decision Tables


Decision Tables
1. Drives business agility
2. Offers transparency to biz users

Steps to create & consume Decision Tables


1. Create a new decision table
2. Edit the data foundation
3. Edit business rules (Actions & Conditions)
4. Consume the decision table

Represent decision table graphically in HANA Studio

Nodes in a Decision Table

Decision Table

1. Conditions
2. Actions

Data Foundation

1. Tables
2. Info Models
3. Table Types

Vocabulary contains the building blocks of a decision table

Parameters
Attributes
Calc. Attributes

Purpose of a decision table


To update a table
Simulation/Analytic purpose

Run time object of a Decision table is in _SYS_BIC (Schema) --> Procedures (Folder)
Conditions in the decision table are passed as input parameters to the procedure call

Create a decision table


1. Create a DT in a package
2. Define vocabulary in DF node
3. Define biz rule in decision table node
4. Consume the DT

From SPS10 onwards, a Decision Table can be consumed in a Calculation View by referring to
the design-time object.

* In the data foundation of decision table, we can add the following data sources
> Dbase tables
> Analytic Views
> Calc. Views
> Attribute Views
> Table Types
Need to check Restricted Column in system
Introducing SAP HANA SQL

Structured Query Language (SQL)


* Communication language for relational dbases
* Used to store, retrieve, manipulate info in a dbase

Definition and Terminology

* SQL is a descriptive or declarative language


* SQLScript is an extension of std SQL
* SQLScript can be used in 5 diff HANA objects. They are
> Stored Procedures
> Database Triggers
> Table Functions
> Dynamic Analytic Privileges
> Functions

SQL Console
* Used to type SQL statements and execute them
* Using the SQL console is not recommended because:
> The statements are not saved anywhere, so we have to type statements in each system
> Requires auth. to execute stmts in Production or Test systems (security issues)
* The recommendation is to use CDS
* CDS - No rework, as code can be ported from one system to another, and no security issues
* SQL Console can be opened from system view in HANA Studio
* Has auto-completion feature (Ctrl+Space)
Notes:
-- is for commenting a single line
/* */ is to comment a group of lines
SQL Language Elements

Identifiers

Data Types:
* Used to specify the characteristics of data stored in the dbase.
Note: Null value indicates the absence of a value

Predicates
* Usually used in WHERE clause of a select statement
* Is specified by combining one or more expressions or logical operators and returns
one of the following logic or truth values;
> True
> False
> Unknown

LIKE Predicate
* Used for string comparisons
* Wildcard characters % or _ may be used in the comparison
* % matches zero or more chars, _ matches exactly one char
* A literal % or _ in a LIKE predicate must use an escape character
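Example (table and columns invented); | is declared as the escape character so the pattern matches a literal percent sign:

SELECT * FROM "PRODUCTS" WHERE "NAME" LIKE 'A%';                  -- starts with A
SELECT * FROM "PRODUCTS" WHERE "LABEL" LIKE '%50|%%' ESCAPE '|';  -- contains "50%"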

Operators
* Use operators to perform operations in an expression
* Used for calculations, value comparision or to assign values

Functions
* Provides a reusable method to evaluate expressions and return info from dbase
Data type conversion functions
* Used to convert arguments from one data type to another
Number Functions
* Takes numeric values or strings with numeric chars as inputs and returns numeric values
* When strings with numeric chars are given as inputs, implicit conversion from
string to number is performed automatically before computing result value
Expressions
* Is a clause that can be evaluated to return values
Expression Type              Description
Case Expressions             IF ... THEN ... ELSE logic without using procedures in SQL stmts
Function Expressions         SQL built-in functions can be used as an expression
Aggregate Expressions        Uses an aggregate function to calculate a single value from the
                             values of multiple rows in a column (COUNT, MIN/MAX)
Subqueries in expressions    SELECT statement enclosed in parentheses
Creating Tables

History Column Table:


* Allows HANA to store all previous values of a record
* Allows time travel queries

Global Temporary                             Local Temporary
* Visible in Catalog                         * Not visible in catalog
* Table definition is visible globally       * Table definition and data are visible only
* Data is visible only to current session      to the current session
* Table is dropped @ session end             * Name begins with # symbol
                                             * Table is dropped @ session end
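Creation sketch (column definitions are placeholders):

CREATE GLOBAL TEMPORARY TABLE "GT_ORDERS" ("ID" INTEGER, "AMOUNT" DECIMAL(15,2));
CREATE LOCAL TEMPORARY TABLE "#LT_ORDERS" ("ID" INTEGER, "AMOUNT" DECIMAL(15,2));
-- the # prefix marks the local temporary table; both are dropped at session end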

Syntax Elements
1. Table Type
2. Column Constraint
3. Table Elements
4. Column Definition
5. Data Type

Note: FOR UPDATE keyword locks the selected rows until end of transaction

Lesson 2 - Working with SQL Script

Introducing SQL Script


* Exposes in-memory features in HANA to developers
* Use variables to break a large complex SQL statement into smaller, simpler statements.
* Table variable prepended with colon (:) is used as an input variable
* All output variable have just name (no colon)
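A minimal illustration inside a procedure body (table/columns invented):

lt_big_orders = SELECT "ORDER_ID", "AMOUNT" FROM "ORDERS" WHERE "AMOUNT" > 1000;
-- reading a table variable uses the colon; the assignment target does not
et_result = SELECT COUNT(*) AS "CNT" FROM :lt_big_orders;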

Note:
* Union returns distinct values
* Union All returns duplicate values as well

Data Types

Imperative Logic
* Performance is not the same as with declarative logic

EXEC Statement
* To create a dynamic SQL statement
* Creates the SQL stmt at runtime in a procedure
* Dynamic SQL stmts:
> Opportunities for optimizations are limited
> Stmt is recompiled every time it is executed
> Cannot use SQLScript variables in the SQL stmt
> Not possible to bind the result of a SQL stmt to a SQLScript variable
> Injection bugs can harm the security or integrity of the dbase
> Recommended to avoid dynamic SQL
> Data source is changed at runtime
Securing Dynamic SQL
* Uses 3 built-in procedures;
> Single Escape Quotes
> Double Escape Quotes
> IS SQL Injection Safe
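A sketch combining EXEC with one of these helpers (the input parameter is an assumption):

-- inside a SQLScript procedure, :iv_table being an NVARCHAR input parameter
EXEC 'SELECT COUNT(*) FROM "' || ESCAPE_DOUBLE_QUOTES(:iv_table) || '"';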
User Defined Functions (UDF)
2 ways to create
1. Create stmt in SQL Editor - pure run time object, no transport, cannot be used in graphical calc views
2. Create design time object in a package - Recommended; CDS doc with extension .hdbfunction
(scalar) and .hdbtablefunction (table)

Table Scalar
.hdbtablefunction .hdbfunction

Table Functions
Table Functions can be created only in Developer perspective
Table User Defined Functions allow an improved movement of filters from client tools down to
the source data.
* Design-time table functions are created in the HANA Developer perspective

User - _SYS_REPO - owns the run time objects


Stored in the Schema - _SYS_BIC
Scripted calc view migrates to table function
Read-only functions
Any DML/DCL stmts in a table function will prevent the table function from being activated
Are used as data sources to Calc Views
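A design-time table function sketch (.hdbtablefunction; schema, package and names invented):

FUNCTION "MYSCHEMA"."mypackage::TF_SALES" ( iv_year INTEGER )
RETURNS TABLE ( "REGION" NVARCHAR(10), "SALES" DECIMAL(15,2) )
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER AS
BEGIN
  RETURN SELECT "REGION", SUM("AMOUNT") AS "SALES"
         FROM "SALES"                 -- hypothetical source table
         WHERE "YEAR" = :iv_year
         GROUP BY "REGION";
END;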

Read Hierarchies via SQL


* Shared hierarchy can be enabled for SQL access only in graphical calc. view of cube
with star join
* This can be enabled in the properties of Semantic node

Aggregation Values in a hierarchy via SQL


* The node column can be used in the GROUP BY clause
* The result shows the aggregated amount for each node of the hierarchy

Lesson 3 - Procedures
* Reusable processing block - Procedure

Views - Single result set

Scalar UDFs - Multiple single values, but only scalar data types; they cannot return a result set
Table Functions - Single result set

Procedures - Multiple inputs and multiple outputs; can be of both scalar and table type
Procedure can call other procedure
Read only procedures can call read only procedures

Advantages of Procedures written in SQLScript


1. Calc. and transformations described in a procedure can be parameterized and
reused inside other procedures
2. Use and express knowledge about relationship in data
3. Related computations can share common sub expressions
4. Related results can be returned using multiple output parameters
5. Easy to define common sub expressions
6. Improves the readability of the code

* Procedure can be written using SQLScript or R or L language


* Statements in a procedure are separated using a semicolon(;)
* Language L cannot be used by customers; only SAP uses it, to develop AFLs
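A read-only SQLScript procedure sketch (all names invented):

CREATE PROCEDURE "GET_CUSTOMER_TOTALS" (
  IN  iv_year   INTEGER,
  OUT et_result TABLE ("CUSTOMER_ID" NVARCHAR(10), "TOTAL" DECIMAL(15,2))
)
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER READS SQL DATA AS
BEGIN
  et_result = SELECT "CUSTOMER_ID", SUM("AMOUNT") AS "TOTAL"
              FROM "SALES"            -- hypothetical fact table
              WHERE "YEAR" = :iv_year
              GROUP BY "CUSTOMER_ID";
END;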

Analytic Privileges - Data auth access for info views


Procedure is used to define dynamic analytic privileges
Default language to create a procedure is SQLScript

Security Mode
Definer                               Invoker
Privileges of _SYS_REPO user          Privileges of the caller of the function

Note:
Analytic privileges are checked regardless of security mode

Reads SQL Data


* Defines a procedure as read only
* Neither DDL nor DML stmts are allowed
* These procedures can call only other read-only procedures

With Result View


* Can be specified for read only procedures
* Used to specify a view that is used as the output of the procedure

Procedures
* Each output variable of the procedure is represented as a result set.

Calling a Procedure:
* Procedure or table function can be called by a client on the outer most level, using any supported client interfaces

Call ... With Overview

* Returns one result set that holds the info of which table contains the result of a particular output variable
* Used to populate an existing table by passing it as a parameter
* If ? is passed as an output parameter, a temp table holding the result set is generated

Procedure can be called by a calc view


Procedures are created in the modeler perspective
These procedures are located in the _SYS_BIC schema, under Procedures.
Procedures created with a CDS document, with extension .hdbprocedure

Specifying Table Types for IN and OUT parameters


* When a procedure uses a tabular input or output, the corresponding table types must be
created so as to activate the procedure
* 2 options to define table types
Externally a global table type Table Type Inline
* Table type is activated separately * Table type is activated along with
from the procedure the procedure
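External table type sketch (names invented):

CREATE TYPE "TT_SALES" AS TABLE ("REGION" NVARCHAR(10), "AMOUNT" DECIMAL(15,2));
-- the activated type can then be used for tabular IN/OUT parameters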

(From HANA Tutorial Blog)
... artifact, or we create a reusable artifact containing this piece of code and call it wherever
needed.

INVOKER/DEFINER in procedures?
There are 2 security modes:
INVOKER: the function executes with the privileges of the invoker of the procedure.
DEFINER: the function executes with the privileges of the definer of the procedure.
It's always best to leave it as INVOKER, so that whenever this function is called,
it will run based on the invoking user's privileges.

Main features of debug tools in HANA Studio


* Debug both catalog and repository procedures
* Set breakpoints
* Display value of scalar and table variables at any breakpoint
* Debug sub-procedures called by the main one on-the-fly
* Store several debug config for a same procedure & apply them during a new debug session

Debug Perspective
* Includes a number of views to facilitate debugging
> Script Editor - set breakpoints, monitor status after execution. Edit and save the
script of your procedure
> Debug View - Lists the procedures opened in debug session and their status
Note: You can suspend, resume, terminate and relaunch the debug thread
> Variables View - Display values of scalar and table variable types at each breakpoint
> Breakpoints View - Lists all breakpoints created. Allows to select/deselect
them before execution

Notes:
* Set breakpoints in SQL Script Editor View
* To add (or remove) a breakpoint, double-click the light-blue vertical bar on
the left of the SQLScript editor.

Required Auth. To Debug a Procedure


• Object Privilege EXECUTE on the DEBUG (SYS) procedure.
• Object Privilege DEBUG on the procedure, or on the schema containing it.

Debugging for Another User’s Session

* The debugger needs to be granted the ATTACH DEBUGGER privilege by the owner.
The external debugger will see the same data as owner without being granted additional
system or analytic privileges
Lesson 5 - Deprecated SQL Modeling Objects

* Scripted calc. views were used for the following purpose


> An alternative to Graphical Calc. views
> Execution of heavy data computations in calc. engine

HANA - Older Version                 HANA - Latest Version

Create run time objects              Create design time objects
Create Procedure                     Create a .hdbprocedure
Use SQLScript CE functions           Use SQL syntax
Scripted Calc View                   Table functions
Create Table Type                    Create Type
Create a procedure file              Create a .hdbprocedure file

* Create the procedures in the HANA development perspective
* Create design time objects like CDS views, .hdbprocedure and .hdbtablefunction files
* CE functions are no longer used
* Create minimal SQL or SQLScript and bring it back to the graphical environment
* Migrate old scripted calc. views using the migration wizard
* Use the "Enable SQL" checkbox in the semantic node


Use the syntax CREATE TYPE to create a table type before activating a procedure
that uses a tabular input or output
Virtual Data Models in SAP HANA

HANA Live
* Is for business suite
* Predefined data models based on SAP business suite tables
* Is of VDM
* Is not an RDS
* Is not an accelerator
* Has its own release cycle
* Focus on live operational reporting on business suite data
* HANA LIVE - Models
* RDS for HANA LIVE - BI reporting
* Fiori business applications based on the supplied VDM

* SAP BW is the recommended solution for Scalable Enterprise Data Warehouse (EDW)
Two Deployment options
1. Side by side scenario (side car) 2. Integrated Scenario (Suite on HANA)
* Requires data loading tool
* SLT is recommended

* Whatever is the scenario, Schema Mapping is required


* If schema mapping is not done, HANA Live View cannot be activated

Tool used to explore HANA live is HANA Live Browser


Benefits
* Combining of OLTP and OLAP
* Improved analytics based on real time
* Speed to deployment

Lesson 2 - Understanding VDM

VDM (Virtual Data Model)
* Layered approach
* Extensible by customer
Query Views: Designed for direct consumption,
Always on top in the hierarchy, Few variables for
user filter, customer to create their own queries,
cannot be reused in other views

Reuse Views: Heart of VDM, structured and comprehensive; can be reused by other views
but cannot be consumed directly by tools

Private Views: Built on other private views or on dbase tables; should not be consumed
directly. Maintained by SAP

Value Help Views: provides list of possible values,


cannot be consumed by VDM models but can be
consumed by applications directly

Layering: QV --> RV --> PV --> Physical Tables

SAP HANA Live


* Only graphical calc views
* Uses only SQL engine
* Text joins are used to support multi language descriptions
* If an input parameter is used, it has a default value always supplied
* Star joins are avoided so as to avoid the services of OLAP engine
* Default values are supplied incase of variables or input parameters

Lesson 3 - Discovering and Consuming HANA Live Views


HANA Live browser
* SAP UI5 based app
* Browse, search and tag HANA Live Models
* Identify broken models
* Two modes: User and developer
* View SAP developed models and customer created models
* SAP HANA Live views are always dependent on underlying views and tables being available
* If underlying tables or views are not available then HANA live model cannot be activated
* If a view is not active/broken, it appears under Invalid View tab
* Custom views appear under Undefined node in the application component hierarchy

How to identify if Live Browser is installed or not?


* Package- SAP --> HBA --> Explorer
* Schema - SAP_EXPLR

Architecture
* HANA Live Browser is an XS application
* Pre-delivered metadata layer to run the Browser
* Pre-delivered tables to store personalization data

Versions of Live Browser

Developer Version            Business User Version
Full feature version         Limited feature version
For the Developer            For the Business User

* In order to access the Live Browser, one or both of the versions
should be assigned to the user's role

Features of Live Browser:

Definition                      Content
Metadata                        Actual live data thru the model
Preview - open definition       Preview - open content

Accessing a view package requires an analytic privilege


* To display all views and tables in a model, use cross reference tab

Consuming HANA Live Views

Reporting                 Dashboards & Apps           Discovery & Analysis
Distribute info           Build engaging analytics    Discover, predict, cover
* Webi                    * Design Studio             * Lumira
* Crystal Reports         * Dashboards                * Explorer
* Live Office                                         * Analysis Office
                                                      * Analysis OLAP
                                                      * Predictive Analysis

Rapid Deployment Solutions for SAP HANA Live


* Prebuilt reporting content
* Covers many SAP BI reporting tools
* Extensible by customers
* Making changes to Private, Re-Use and Query Views is not recommended

Lesson 4 - Modifying HANA LIVE Models

Guidelines

Hiding Attributes
Note: set the flag, Execute in SQL engine for best performance

How to check if SAP HANA LIVE Extension Assistant is installed or not?


Extend View menu option appears

Extending a query view exposes all columns of the underlying reuse views


You can only add columns as attributes but not as measures

You must be assigned the role sap.hba.tools.extn.roles::ExtensibilityDeveloper
to extend any view using the Extension Assistant.

Assign permissions to the developer

REPO.READ
REPO.EditNativeObjects
REPO.ActivateNativeObjects

* A view can be extended multiple times

* Extension Assistant only works on std. SAP HANA Live views, not customer created views

Extension Assistant Restrictions


* Cannot extend query that contains union node
* Cannot extend if intermediate aggregation nodes are present

Lesson 5 - Building VDM with CDS Views


Best practice would be to define global CDS views and then reuse them in different projects

CDS Views HANA LIVE


* Code * Graphical
* Data access rules for auth * Analytical privileges
* ABAP Workbench in Eclipse Editor * HANA Studio or Workbench

Terms prefixed with the @ symbol are known as annotations and are used to define semantics

HANA CDS View                               ABAP CDS View
* Data security by Analytic Privilege       * Built & stored in the ABAP dictionary
* Native to HANA and can be used only       * Data security by ABAP authorization
  in HANA apps                                mechanisms
* HANA Studio or Workbench                  * ABAP Workbench in Eclipse Editor

Note: S/4HANA uses ABAP CDS views

CDS View Creation

DDL, QL, DCL
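A minimal HANA CDS (.hdbdd) sketch under invented names; the @Schema annotation and the entity/view keywords illustrate the pattern, not a definitive implementation:

namespace mypackage;
@Schema: 'MYSCHEMA'
context sales {
  entity Product {
    key id : Integer;
    name   : String(80);
  };
  define view ProductView as select from Product { id, name };
};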
Additional Modelling Possibilities
Lesson 1 - Implementing Full Text Search and Text Analysis

Full Text Search


* Extract structured info from unstructured data
* Rank search results
* Full Text and fuzzy search studio modeller and SQL script enhancement
* Python based Text Analysis script sets
* HANA Information Access toolkit for HTML5

Benefits of Full Text Search


* Is built-in HANA
* Less data duplication and movement
* Easy-to-use modelling tools
* Build search apps

Text Processing

Search: string matching & full text search
Analysis: foundation for search and mining
Text Mining: key term identification and document categorization

Fuzzy Search or Fault-Tolerant search


* Gives the most likely approximate matches to a given string
* Is fast and fault-tolerant search
* Is applicable only to attribute view
* Fault-tolerant search in text columns, structured dbase content and for duplicate records

Note:
Fuzzy search queries are handled by search engine
Linguistic processing is handled by Text Processor

UI Toolkit
* Provides UI building blocks
* Used to develop search apps
* Toolkit is based on HTM5 and JavaScript

* In case of full-text search, CDS docs contain syntax to create the following;
> Full-Text Indexes
> Search models via ODATA
Define Full-Text index in CDS --> Define search view in CDS --> Full text searching in integration
(ODATA, API)
The SInA API (Simple Info Access) is a client-side JavaScript API for developing browser-based search user interfaces
HANA File Loader
* Set of HTTP services to develop your own apps to search in file contents
* File loader is located within XS engine
* Supports the loading of file contents for search
* Used to load binary files into HANA

Overview of Text Analysis


* Text analysis is Python based scripts
* For sentiment analysis, we require FULL TEXT Index on a column table
* Individual text analysis options are grouped into text analysis config - stored in repo in xml format

Grammatical Role Analysis (GRA)


* Is a feature of text analysis
* Available since SPS11
* Flag Subject, Verb and Object (known as triple)
* Used to apply deeper linguistic analysis
* Results of a text analysis are stored in a table

Text Analysis Language Support


* Text analysis is at word level
* Result of a text analysis is stored in table
* Tokenization
* Stemming
* Parts-of-speech tagging

Introduction to Text Mining


* $TM_MATRIX is the system table for document matrix
* This table can be queried thru SQL
* Text mining works at the document level
* Text mining can be enabled column by column

Functions based on Vector Space Model


* Identify a similar doc.
* Identify key terms of a doc.
* Identify related terms
* Categorize new docs based on a training corpus

Executing Query
* HANA provides server-side JavaScript interface for text mining engine
* Data should be in column table
* Enable full-text index on column table
* Turn on Text mining
Note: getrelatedterms, getrelateddocuments
SQL Extensions for Text Mining
* HANA provides SQL functions to retrieve text mining statistics & enable data categorization

Lesson 2 - Defining Full Text Indexes


* Full text indexes support search on columns

Full text indexes are automatically created during table creation for TEXT,
BINTEXT and SHORTTEXT data types; for the rest, we have to create them manually

Full-Text Index
* Search includes: normalization, tokenization, word-stemming, part-of-speech tagging

Note:
* Fuzzy search works out-of-the-box on the following column store data types;
> Text
> Short Text
> Varchar
> Nvarchar
> Date

Creating a Full-Text Index


* Full-text index is an additional data structure that is created to enable fast text searches

To check parameters for an existing full text index, you can query the monitoring view sys.fulltext_indexes

Note:
In-order to enable fuzzy text index, full-text index should be created first
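Creation sketch (table/column invented):

CREATE FULLTEXT INDEX "IDX_PRODUCT_DESC" ON "PRODUCTS"("DESCRIPTION")
  FUZZY SEARCH INDEX ON;   -- full-text index first, fuzzy search index on top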

* To use HANA studio for Search modeling, enable Search Options in Preferences
* Search attributes are not supported in graphical calc. views

Free style Search: search a term in multiple columns
Weights for Ranking: between 0.0 and 1.0
Fuzziness Threshold: between 0.0 and 1.0; default value is 0.8
Fuzzy search requires approx. 10% of the memory size of the column

Lesson 3 - Using Full Text Search


Full text search - CONTAINS function in the WHERE clause

Text Type fuzzy search                             String Type fuzzy search
* Texts are tokenized                              * String comparison
* Search comparison is carried out term by term    * Supports basic fuzzy string search
* Fault-tolerant search against user input
Note: Score and Threshold are different between the two types
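A CONTAINS query sketch with a fuzziness threshold (names invented):

SELECT SCORE() AS "SCORE", "PRODUCT_NAME"
FROM "PRODUCTS"
WHERE CONTAINS("PRODUCT_NAME", 'labtop', FUZZY(0.8))  -- tolerates the misspelling
ORDER BY "SCORE" DESC;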

Date Type Searches

Note: Use Freestyle search to perform search for a string across mulitple columns

Lesson-4 Geospatial Data


* Spatial data is represented as 2D or 3D geometries

Spatial features supported are:

* 2D and 3D vector types
* 2D & 3D are represented in the form of point, line string, polygon and compound polygon
* SRID (Spatial Ref. ID)
* App dev using the XS engine with geo-content and mapping services

* Geo queries are handled by the spatial engine, which is a part of the calc. engine
* The calc. engine is embedded in the Index Server

* Spatial data is stored using ST_GEOMETRY and ST_POINT data types


Geo Data Types
* Point
* Line
* Polygon

Note:- Spatial column can be added to an existing table

ST_POINT Data Type


* Geospatial data is entered using the Well Known Text (WKT) format.
* Other ways of entering and retrieving geographical data are as follows:
> Well known binary
> GeoJSON
> ESRI Shapefiles
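A WKT round-trip sketch (coordinates arbitrary):

SELECT NEW ST_POINT('POINT (8.68 50.11)').ST_AsGeoJSON() FROM DUMMY;
SELECT NEW ST_POINT('POINT (0 0)').ST_Distance(NEW ST_POINT('POINT (3 4)'))
FROM DUMMY;   -- returns 5 (Euclidean distance)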

* Spatial Clustering groups a set of points, that meets a certain criteria into clusters

Spatial clustering algorithms


Grid
* Good for a first impression
* Easy to use
* Extremely fast grid clustering

DBSCAN
* Best for non-spherical clusters
* Density based
* Higher complexity, better insights

K-Means
* Best for spherical clusters
* Centroid based
* Higher complexity, better insights

Geocoding Features
* Geocoding will be provided by an external provider and Outbound connectivity is provided by XS

Spatial joins in Calc. Views


* CDS supports spatial data types and functions

SPS12 - Calc. columns on spatial data types (ST_Geometry and ST_POINT)


SPS12 - Use spatial expressions in Filters
Lesson 5 - Predictive Models

Application Function Library


2 sub-libraries

Business Function Library (BFL)            Predictive Analysis Library (PAL)
* Can be called from procedures            * Called from procedures
* Push down to HANA layer                  * 9 data mining categories
* Developed in C++ language
* Reduced coding and better performance

PAL Data Sources:
* Tables
* Views
* Output from other procedures

PAL Pre-requisites:
* Requires AFL
* Script server to be started
* Role afl_sys_afl_aflpal_execute

Cross-check PAL installation: 3 public views
* sys.afl_areas
* sys.afl_packages
* sys.afl_functions
Application Function Modeler (AFM)
* A .hdbflowgraph file identifies a predictive model
* Used to develop diff models, including ETL flows, data streaming flows, and flows that call
business functions and functions from PAL
* We use the flowgraph file type to develop flows

Decision Tree
Is one type of predictive model
* A decision tree produces an outcome based on input variables.
* Need to be trained first before deployment

Lesson 6 - HANA Graph


New feature of SPS12
Key objects of Graph
> Vertices - Nodes
> Edges - Relationship between nodes

Vertices are stored in column tables and they represent the nodes in a graph
* In each vertex we can store multiple attributes
* Edges are stored in column tables and describes the line between members
* Vertices identifies the nodes
* Edges identifies the relationships between the nodes
* Possible to have multiple edges between two vertices
* Edges can be bi-directional and is between the vertices
* Tables of vertices and edges does not need to reside on the same schema
* Graph workspace contains the metadata
* Activated workspace resides in the system table SYS.GRAPH_WORKSPACES

* HANA Graph Viewer tool is used to create graphs


* Graph viewer is HANA's native app and is built with UI5 and utilizes HANA XS
* Graph viewer uses the WIPE language (Weakly-structured Information Processing & Exploration)
* Uses a dedicated graph engine
Management and Administration of Models

Lesson 1 - Processing and Analyzing Information Models

Validating and Activating Information Models


Rules on validation of models:
Warning                                            Error
* Does not prevent activation of the info. model   * Most critical
* Helps to apply best practices                    * Model cannot be activated

1. Setting preferences for validation rules


Path to set preferences;
Windows --> Preferences --> SAP HANA --> Modeler --> Validation Rules

Note:
When you execute the validation of information models with one or several validation rules
disabled, you never get the job status Completed Successfully.
If the validation actually succeeded, the job status is Completed with Warning, to
remind you that one or several rules were disabled during validation

Analyzing validation log


* Job Log View --> Current Tab --> Double click job row
* Job Log View --> History Tab --> Logs appear for a period

Note:
Validation Log provides all the rules that generated error or warning.
Validation Log provides the rules that were not checked based on the settings

Activating info models


* When you create an info. Model, you work on the inactive version of the model
* Activation of a model, creates the column view of the model in schema _SYS_BIC
* Column views are the runtime info of the views

When modifying an active version, there are 2 versions of model available;


Inactive Version                     Active Version
* The one you are modifying          * Column views in _SYS_BIC schema
* Can save and activate it           * Can be used for reporting

Note: You can switch between active and inactive versions

Object Versions - Comparing versions


* Active and inactive versions are available in different screens.
* If active and inactive is present, then active view is in read only mode
* If need be, you can revert to active version by discarding all changes
* HANA dev. Perspective allows you to revert to any local version of the info model
Further details in HANA Models
* Possible to display the history of an info model
* History - entry for each activated version, name of user who activated the model and
its activation date

Checking model references


* Possible to check where an info object is used in modeler
* Useful function to check the impact of changes on data model
* Path - Right click object --> Where Used
* Where Used list displays the type, name & location (package) of each selected info model
* Also see summary, the total no. of objects in list

Auto Documentation
* Generate doc. Automatically on an info. Model
* Doc. Provides a list of all objects contained in a package & detailed design of each model
* Each info model gets its own Auto Doc. File (pdf)
* To generate - right click on an info model or directly the button
Note:
Only the info models selected from the package are included in the auto doc, not all contents of a package
Output consists of one or several pdf files
Included                   Excluded
* Attribute View           * Procedures
* Analytic View            * Decision Tables
* Calculation View         * Analytic Privileges

Auto Documentation - Select Document Types


* Two doc. types are available:
Model Details                            Model List
* Displays each particularity of         * Displays a list of each component of
  an info object                           the package

Auto Doc.- Adding Objects to Target List


* All info models in the content are available
* Select one or several objects using Add button
* Use remove button for deselection
* Add objects from diff packages in the same auto doc job

Auto Doc. - Select Export Type and Save to Location


* Only pdf is supported, cannot change the export file type
* Choose target location
* A dedicated folder is created in the Target Location for each System ID
Performance Analysis Mode
* Helps developers to analyze models from performance standpoint
* Can be switched on for a specific model
* Provides;
> Join and Cardinality validation
> Table Partitioning
> Partition Type
> Number of Rows
* Can be switched on manually or automatically whenever you open a model

Performance-Related Validation Rules

Lesson 2 - Managing Modelling Content

Schema Mapping
* A schema is used in SAP HANA to group database objects, such as Tables, Views, or
Procedures.
* Define schema mapping before transporting models from one system to another

Map source schema to target schema


Authoring Schema Physical Schema
* Schema name in the source system * The schema name in the target system

Creating a Schema
* Use Create Schema SQL statement in HANA Studio
* Schema created and named for each user when the user id is first created

Schema Mapping - Defining Package-Specific Authoring Schema


* Possible to define a package-specific authoring schema (check in system)
* Useful when 2 diff authoring schema are mapped to the same physical schema
* The schema property of a catalog object shows the physical schema and the mapping
to the authoring schema (if any). This can be modified (check in system)
* When several authoring schema is mapped to one physical schema,
we can define the default authoring schema at package level
Exporting and Importing Info. Models
* Info models stored in packages
* Export and import packages using Delivery Units

HANA Content Transport Capabilities


CTS - Change and Transport System
HPA - HANA-Powered Applications

In the context of Data Marts:


* 2 step integration into CTS+ ("loose coupling")
> Manual preparation (server-side)
> Automated transport and deployment in target system via CTS+
* Potential for ABAP based new apps (HPAs)
> TLOGO based transport
> Encapsulating SAP HANA content in ABAP objects allows transporting SAP HANA content
with application code through CTS mechanisms

In the context of SAP NW 7.3 BW, powered by HANA


> Leverage existing transport functionality

Creating a Delivery Unit


* Use delivery unit to;
> Create a group of transportable objects for content delivery
> Export info. models from a source system to a target server
* Path - Quick Pane --> Delivery Units

Delivery Unit Properties


* Name
* Responsible User
* Delivery Unit Version
* Support Package version of DU
* Patch version of DU
* Optional PPMS ID
PPMS - The Product and Production Management System is the central knowledge
repository for info. regarding SAP software products and their software components

Exporting Delivery Units


* Export all the packages and the relevant objects in the packages to the client or
HANA server file system (check in system)
* Exporting objects using developer mode
> Function to export individual objects to a directory in your client computer
> Use exceptionally as it does not cover all aspects of an object

Options: Export to Server or Export to Client


Filter by Time: Use a From date that corresponds to the last date of full export

Importing using Delivery Unit


* Import objects from a client location or server in the form of a .TGZ file
* Import objects using developer mode
> Import objects from a client location to HANA modelling environment

Note:
* You must grant SELECT privileges to user _SYS_REPO in order to activate imported objects
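
A minimal sketch, assuming the imported views read from a hypothetical schema SALES_DATA:

  -- _SYS_REPO activates views on behalf of all users, so the grant must be grantable
  GRANT SELECT ON SCHEMA "SALES_DATA" TO _SYS_REPO WITH GRANT OPTION;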

Copying info. Models


Simple Copy Mass Copy
Ctrl + C & Ctrl + V * Copy several info models @ once
* To copy content delivered by SAP
Procedure
1. Quick Pane --> Mass Copy
2. Maintain Package Mappings
3. Choose objects that are to be copied
4. Check "Copy as calc. view" (optional)
5. Finish
Note:
During Mass Copy, if an object already exists in the target package,
it is by default excluded from the copy

Privileges required to copy content


* REPO.READ on the source package
* REPO.MAINTAIN_NATIVE_PACKAGES on the Root Package
* REPO.EDIT_NATIVE_OBJECTS on the Root Package
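
As a sketch, these could be granted as follows (user and package names are assumptions;
.REPO_PACKAGE_ROOT is commonly used to address the root package - check in system):

  GRANT REPO.READ ON "sales.models" TO MODELER_USER;                            -- source package
  GRANT REPO.MAINTAIN_NATIVE_PACKAGES ON ".REPO_PACKAGE_ROOT" TO MODELER_USER;  -- root package
  GRANT REPO.EDIT_NATIVE_OBJECTS ON ".REPO_PACKAGE_ROOT" TO MODELER_USER;       -- root package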
Renaming an Info. Model
* Possible to rename
* Only one at a time
* After renaming, activate the new object and activate the deletion of previous objects

Renaming columns of an info. Model


* In the semantics node, use the feature "Rename and Adjust Reference" -->
applicable only to graphical models (it's an icon in the Semantics node)

Moving Info. Models


* Use the Refactoring functionality to move content objects from one package to another
* Move a single object from one package to another, or move one package into another
* If a package is moved, the selected package becomes a child package of
the target package
* While moving a package, if one object prevents the move due to an issue then
the entire package cannot be moved
* Auth: Read on old package, create and activate on the new package
* Refactoring is run as a job
* References are maintained for dependencies
* Cannot re-factor an inactive info. Model

Limitation to Mass Copy and Re-Factor Functionality


* The SQL syntax of procedures and script-based calc. views is never adjusted
* If the syntax directly references a column view of the _SYS_BIC schema,
it is not adjusted automatically
* If a graphical info. view directly references a column view of the _SYS_BIC schema,
the references are not adjusted automatically

Switch Ownership
* An inactive object of another user can be taken over by you using the "Takeover" feature
* Takeover enables you to become the owner of the inactive object
* Active version of the info. Model is owned by the last user who has activated it
* Use "Switch Ownership" command to modify the ownership of several objects
owned by the same user on one single operation

Notes:
* Switch Ownership allows you to takeover an inactive object that was never activated
* An info. View that was never active is visible only to its owner, so another user
cannot open it from system view

Translating Metadata Texts for views


* Maintain a number of view elements in several languages
Translatable View Labels:
* View Description
* View Nodes
* Elements (Column labels)
* Hierarchies
* Variables and Input Parameters

Note:
* The following should be active in order to translate the metadata of a view;
> The Translate property of the view in the semantics node
> The information view itself

Lesson 3 - Converting Deprecated Information Views

Migration Utility for Attribute and Analytic Views


* Migration utility for the following deprecated models;
> Attribute and Analytic View
> Scripted Calculation View
> Classical Analytic Privileges

Migrate Attribute and Analytic views to Calculation Views


1. Start Migration utility: Quick View --> Migrate
2. Choose Object Type
3. Choose the view to be migrated (one object or entire package)
4. Choose Migrate and Replace (default behaviour) or simulate (Copy & Migrate checkbox)
5. Review the list of impacted objects
6. Run migration job and check log

Note:
* After migration, the workspace contains 2 inactive versions of the object
1. The former attribute or analytic view
2. The new calculation view

Migration Log
* Records the main changes
* Info provided by migration log
> Total no. of objects selected for migration
> Total no. of objects successfully converted
> List of objects that require user action before activation
> List of impacted objects

General Considerations before Migration


Object Dependencies and Package Structure:
* All attribute views referenced by an analytic view must be migrated before
the analytic view can be migrated
* A migrated attribute view can be activated only when all the analytic views referencing
it have been migrated
* The references to migrated objects in calculation views are automatically adjusted during
activation, even if the package containing the calc view is not selected for migration

Migration Process Recommendation


* Several migration steps can be executed to reach a consistent state
* Then all migrated object can be activated at once
* This allows you to cancel migration and revert to previous status if need be

Features with diff. behaviour in calc. views


* Hidden columns can be made visible during migration (Option in migration utility)
* The internal row.count column that existed in analytic view is made visible in the
resulting calc. view, as the count star column in its properties
* A derived attribute view is converted into standalone Dimension Calc. View, with
no dependency to the main view
* Column filters in Attribute and Analytic Views are converted to Filter Expressions
in the new Calculation View (check in system)
* Input Parameter based filters in the data foundation in attribute and analytic views
are converted into additional Projection Nodes with Filter Expressions.
* Relational optimization setting in analytic view is dropped during migration

Secure Migration
Key recommendations for migration process
* Take a backup of all objects before executing the migration
* Start migration in a clean workspace with all views activated
* Simulate before migration
* Migrate info. Views before analytic privileges
* Undoing is only possible before activating the objects

Migrating Scripted Calculation Views


* Scripted Calc. Views can be
> migrated to Table Functions
> wrapped into Graphical Calc. Views
(check in system)

Note:
Takeover and switch ownership - check in system
Unit 7 - Security in SAP HANA Modeling

Lesson 1 - Understanding Roles and Privileges

Overview
Create Users
Assign Security - Control access to objects, Row-level security, restrict allowed actions
Manage Users

Users, Roles and Privileges


* A user owns objects in the dbase
* A role is a collection of privileges and can be granted either to a user or to another role
* A privilege enables a specific operation on one or several objects
* Privileges can be assigned directly to a user or to a role
* Roles - to structure access control and model reusable biz roles

* Recommended - assign privileges to roles, not directly to users

Defining Roles

Catalog Roles Repository Roles


* Run time objects * Design-time objects
* Follows classic SQL principles * In the repo of HANA dbase

Note:
Recommended - Repository roles
Repository Roles are created as .hdbrole files within the SAP HANA Developer
perspective. They cannot be created from the Modeler perspective.

Assign Privileges

HANA Studio SQL Console


* Dedicated Security folder * Execute SQL statements
Managing Users and Roles
Steps
1. Define and create roles
2. Assign privileges to roles
3. Create users
4. Grant roles to users
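
A minimal sketch of the four steps in SQL (a catalog role is shown for brevity, although
repository roles are recommended; all names are assumptions):

  CREATE ROLE ANALYTICAL_USER_ROLE;                      -- 1. define and create the role
  GRANT SELECT ON "_SYS_BIC"."sales.models/CV_SALES"
    TO ANALYTICAL_USER_ROLE;                             -- 2. assign privileges to the role
  CREATE USER REPORT_USER PASSWORD "Initial123";         -- 3. create the user
  GRANT ANALYTICAL_USER_ROLE TO REPORT_USER;             -- 4. grant the role to the user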

Define Privileges

Types of privileges
Object privilege - on database objects
Analytical privilege - row level auth
Package privilege - restrict access to and the use of packages (important for modelers)
System privileges - admin tasks
Application privileges - for HANA native apps

System Privilege
* System based activities
* From the modeler's point of view - for maintaining, importing & exporting DUs, and creating/deleting schemas

Object Privilege
* Allow access to and modification of dbase objects
* Each object privilege is related to one object
* SQL statement types differ based on the object
* A user must hold the object privilege (grantable) in order to grant access to another user
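
For example (schema, table and user names are assumptions):

  GRANT SELECT ON "SALES_DATA"."ORDERS" TO REPORT_USER;                   -- access only
  GRANT SELECT ON "SALES_DATA"."ORDERS" TO POWER_USER WITH GRANT OPTION;  -- can grant further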

Package Privilege
* Allow access to and modification of info models in packages
* Actions on package: Read, Edit, Activate and Maintain
* Need package privilege to work on a package
* If you grant privileges to a user for a package, the user is automatically
also authorized for all corresponding subpackages

Analytic Privilege
* Is based on attribute values, for all info views
* Must be defined and activated before being assigned to roles/users
* Can be assigned to info views
* Implemented for row-level security
* Restrict access to a data container based on attribute values
* Restriction supports single value, range, and IN list
* Can be assigned to one or several attributes
* To be applied only on attributes, not on measures
* Multiple APs on a single column are combined with AND
* Multiple APs on multiple columns are combined with OR
* All filters except IS NULL and IS NOT NULL accept empty strings
* Only run time objects can be assigned to users/roles
* _SYS_BI_CP_ALL is the built-in AP of HANA
* Detailed info on AP errors can be found in the trace file of the index server
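
As an illustration, a minimal sketch of a static SQL analytic privilege (view, privilege and
user names are assumptions; the view's Apply Privileges property must be set to SQL Analytic
Privileges):

  -- Restrict rows of an activated calc. view to one region
  CREATE STRUCTURED PRIVILEGE "AP_EMEA_ONLY"
    FOR SELECT ON "_SYS_BIC"."sales.models/CV_SALES"
    WHERE "REGION" = 'EMEA';
  GRANT STRUCTURED PRIVILEGE "AP_EMEA_ONLY" TO REPORT_USER;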

SQL AP
* More flexibility where filter conditions are complex
* Define attribute restrictions based on hierarchies
* Secure CDS views using SQL AP

Classical AP
* Also called XML-based AP
* Historical data access restriction type
* Is deprecated

Notes: Classical AP can be migrated to SQL AP using migration tool

* Using SQL AP, access to a hierarchy can be provided


* Providing access to a node in a hierarchy provides access to
all descendant nodes in the hierarchy

Privilege on users
* Used for debugging purposes
* Grant "Attach Debugger" to another user to allow them to debug an external session

Dynamic Analytic Privileges


* Determine filtering conditions using a catalog/repository procedure
* This kind of analytical privilege is called a dynamic AP
* Determined at run time
* Scalable approach
* The same AP can be granted to several users with diff requirements

Classical Dynamic AP vs. SQL Dynamic AP


* Both can be created as dynamic APs
* Both use procedures

Using Procedures in dynamic analytical privileges


Classical Analytical Privilege
* Each attribute can have a dynamic definition based on procedure
* Combine fixed and procedure based filtering conditions on diff attributes

Procedure Parameters of Classical Analytical Privilege


* No input parameters
* 1 output parameter of type Table with one single column - IN operator
* 1 output parameter of type scalar - for unary operators, e.g. EQ
* 2 output parameters of type scalar - for binary operators, e.g. BETWEEN

SQL Analytical Privileges


* Only one procedure is used
* Procedure must return a character string that is an SQL WHERE clause

Procedure properties of SQL AP


* Procedure must be read only
* Security mode must be definer
* No input parameters
* Only one scalar output parameter of type Varchar or Nvarchar
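
A sketch of such a procedure, assuming a hypothetical mapping table SECURITY.USER_REGIONS
with columns USER_NAME and REGION; it satisfies the four properties above:

  CREATE PROCEDURE "SECURITY"."GET_REGION_FILTER" (OUT OUT_FILTER NVARCHAR(500))
  LANGUAGE SQLSCRIPT SQL SECURITY DEFINER READS SQL DATA AS
  BEGIN
    -- Returns an SQL WHERE clause that restricts the session user to their region
    SELECT '"REGION" = ''' || "REGION" || ''''
      INTO OUT_FILTER
    FROM "SECURITY"."USER_REGIONS"
    WHERE "USER_NAME" = SESSION_USER;
  END;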

Notes:
* HANA change recording is part of HANA Lifecycle management
* Roles that are specific to HANA Change Recording are as follows
• REPO.CONFIGURE
• REPO.MODIFY_CHANGE
• REPO.MODIFY_OWN_CONTRIBUTION
• REPO.MODIFY_FOREIGN_CONTRIBUTION

Native Package Imported Package


* To be edited in the current system * Should only be edited by importing updates

Object Privilege

There is no need to grant select privileges on the underlying views or tables. The
end user only needs to be granted select privileges on the top view of the view
hierarchy.

Data Access Security for HANA LIVE


Object Privilege
* Control access to SQL query view using Select Object privilege
* Never grant access to non-query SQL Views to biz users
* Never grant access to tables directly to biz users

Analytic Privilege
* Row level filtering
* All underlying non-query views are explicitly excluded from analytic privilege
* Generate analytic privilege for a query view - always only one model
* Analytic privilege using Authorization Assistant tool based on PFCG auth

Query Views
* Exposed to end users

Non-Query Views
* Users should not have access to these views (reuse/private views)

Managing Security on Activated Content


* All activated objects are owned by the tech user _SYS_REPO
* There are procedures to grant or revoke privileges related to activated content
* These procedures can be called by SQLScript in SQL Editor
* These procedures are owned by _SYS_REPO

* In order to call these procedures you must be granted EXECUTE privilege on them
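
For example (object and user names are assumptions), a sketch of the grant procedures:

  -- Grant SELECT on an activated view to an end user
  CALL "_SYS_REPO"."GRANT_PRIVILEGE_ON_ACTIVATED_CONTENT"
    ('SELECT', '"sales.models/CV_SALES"', 'REPORT_USER');
  -- Grant an activated analytic privilege
  CALL "_SYS_REPO"."GRANT_ACTIVATED_ANALYTICAL_PRIVILEGE"
    ('"sales.models/AP_EMEA_ONLY"', 'REPORT_USER');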

User access to preview or query views in Studio or reporting tools;


* Select privilege on the info. view or its containing schema
* Granted an analytic privilege that is applicable to the view

Lesson 3 - Generating HANA Live Privileges


* SAP HANA Live Authorization Assistant is used to provide auth to users
* It considers the ABAP based auth provided to users in business suite
* This auth assistant can also generate auth for any HANA view
Note:
An easy way to check if the tool is already installed is to find out if the SAP HANA Studio
menu option Analytics Authorisation is present.

Tools supplied with HANA Live Analysis Authorization Assistant (AAA):
* Generate Analytic Privileges
* Update Analytic Privileges
* Maintain analytics metadata

Migrating Analytic Privileges


Use cases for manual steps when migrating classical AP to SQL AP
* When the source Analytic Privilege is designed to apply to all info. models
* When the source AP is dynamic (catalog/repository)
* When one source AP is converted to multiple SQL AP

Key Recommendations for the Migration Process


* Take a backup before migration
* Start in a clean workspace with all info views activated
* Info views should be migrated before migrating APs
* Undoing migration changes is only possible before activating the objects
Unit 8 - Optimization of Models

Lesson 1 - Applying Good Modeling Approaches

Diff types of engines for data processing


Join Engine:
* Process joins between column tables (not row tables)
* Takes care of other basic functions such as filtering, aggregation and sorting

OLAP Engine:
* Used for calc and dimensional aggregations based on a star schema
* Is called from analytic views and calc. views that use a star join

Calculation Engine:
* Is used on top of OLAP engine and join engine
* For complex calculations

Spatial and Graph queries are processed by their own specific engines

SAP HANA Engine Overview II

Note:

Use "Explain Plan Tool" to identify which enigne is/are used


Right click on view and sel
Explain Plan

Best Practices for data modelling


In Client Tool
* Avoid transferring large result sets between the SAP HANA DB and the client application
* Do calculation after aggregation (see the sketch after this list)
* Avoid complex calc. expressions in the client; push them down into the model instead

In Views/Procedures/Functions
* Aggregate data records (e.g. Group By, reducing columns)
* Join on key columns/indexed columns
* Avoid calc. before aggregation on line item level
* Replace sequential cursor looping with set processing

In Column Store
* Filter the data amount as early as possible in the lower layers
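
To illustrate "do calculation after aggregation" from the list above (table and column names
are assumptions), both statements return the same result, but the second evaluates the
expression once per group instead of once per row:

  -- Calculation before aggregation: expression evaluated on every line item
  SELECT "PRODUCT", SUM("AMOUNT" * 1.19) AS "GROSS"
  FROM "SALES_DATA"."ORDERS" GROUP BY "PRODUCT";

  -- Calculation after aggregation: expression evaluated once per group (preferred)
  SELECT "PRODUCT", SUM("AMOUNT") * 1.19 AS "GROSS"
  FROM "SALES_DATA"."ORDERS" GROUP BY "PRODUCT";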

Filter and Aggregation Push Down


* Ensure that filters are pushed down
* Ensure that Aggregate and Group By are pushed down
* Recommended to perform filtering and aggregation ASAP in the calculation scenario

Dedicated List of Values Views/Tables for Variables and Input parameters


* Recommended to use dedicated views/tables to generate lists of values for
variables and input parameters of the Column type
* Best practice to ensure that the list of values is consistent from one view to another

SQL Execution of Calc. Views

* When the "Execute In" property is switched to SQL Engine in a graphical


calculation model, the Calculation Engine tries to generate SQL from each node
during runtime

Rules for optimal SQL gen


For optimal SQL generation, follow these rules;
* Do not include any source other than tables or Graphical Calc. Views
* Explicitly do not add any Analytic View, Attribute View or Scripted Calc. View,
as this might affect optimization
* For operations, only use projection, union, join, or aggregation
* Do not use special column store functions in filter or calc. attribute expressions,
such as the functions date() and time()
* If a currency/unit conversion or counter measure is to be defined then
it has to be defined in the output node of the main scenario
* If a currency conversion or counter measure is modeled into an included scenario,
then optimization will be broken

Best Practices when writing SQL


* Reduce complexity of SQL statements by using variables.
* Apply multi-level aggregation by using grouping sets (scenario to be checked; see the sketch after this list)
* Reduce dependencies by avoiding imperative statements
* Stick to declarative statements
* Avoid mixing CE plan operators with standard SQL
* Keep relational and CE plan operators apart
* SAP recommends to use SQL rather than calc. engine operators with SQLScript
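
As referenced in the list above, a sketch of multi-level aggregation with grouping sets
(names are assumptions) - one statement returns totals per region and per region/product
instead of two separate GROUP BY queries:

  SELECT "REGION", "PRODUCT", SUM("AMOUNT") AS "TOTAL"
  FROM "SALES_DATA"."ORDERS"
  GROUP BY GROUPING SETS ( ("REGION"), ("REGION", "PRODUCT") );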

Lesson 2 - Understanding the cost of SQL statements

* Analyzing query plan allows you to identify long running steps.


* Studio provides 2 tools for query plan analysis
> Plan Explanation
> Plan Visualization

Explain Plan
* Generate Plan Explanation for any SQL stmt in SQL Console of HANA Studio
* Use to evaluate execution plan
* Helps to identify the engines used in the execution
* Avoid calling multiple engines
* Provides essential visibility of the execution engines, the no. of times they were called,
and the order in which they were called
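
A minimal sketch of generating and reading a plan explanation in the SQL Console (statement
name and query are assumptions; column availability may vary by HANA revision):

  EXPLAIN PLAN SET STATEMENT_NAME = 'SALES_BY_REGION' FOR
    SELECT "REGION", SUM("AMOUNT") FROM "SALES_DATA"."ORDERS" GROUP BY "REGION";

  -- The result is written to the EXPLAIN_PLAN_TABLE view
  SELECT OPERATOR_NAME, OPERATOR_DETAILS, EXECUTION_ENGINE
  FROM EXPLAIN_PLAN_TABLE
  WHERE STATEMENT_NAME = 'SALES_BY_REGION';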

Plan Visualize
* Helps to understand and analyze the details of an execution plan of an SQL stmt.
* Two modes of the tool:
1. Prepare - prepares a data flow with an estimated run time for all steps
2. Execute - executes the SQL and provides the actual run time values;
results are presented across 2 tabs: Overview and Executed Plan

Plan Visualize Execute — Overview tab


* Infos useful to Modelers in Overview Tab
> Total execution time
> Top 3 longest running steps
> How many tables are accessed
> How many records were read from each table
> How many records are produced in the result

Plan Visualize Execute — Executed Plan tab


* Infos useful to Modelers in Executed Plan Tab
> The sequence of steps in the flow
> Time taken on each processing step
> Cumulative time taken to reach each step
> How many rows are passed between steps
> Which operator is called at each step

* You can drill down, zoom-in and zoom-out


* Visualize Plan tool is accessed from SQL Console
* Execution times are always shown as inclusive and exclusive

Timeline View
* Understand where most of the time is spent in the execution of your SQL
* Provides a complete overview of the execution plan based on the visualization
of sequential time-stamps
* Following infos are available
> X-axis - time elapsed since query execution
> Y axis - list of operators
> Duration of each operator is represented as a rectangular box
> Nature of each time stamp

Other Views for Analysis of SQL runtime


Operator List:
* Lists the details of all operators within a current plan. This is also a great way to identify
long running steps within a flow
Network View:
Visualizes sequential n/w data transfers between diff servers
Tables Used:
* Cross-checks your business understanding about which tables are needed to
fulfill a given SQL statement
Performance Trace:
* Displays the list of operators that have performance trace data.
Data can be viewed by selecting any of the operators shown in the list.
SAP HANA-Based Accelerators

* Types of applications or scenarios that extend the capabilities of business processes in SAP
Business Suite systems by leveraging the performance & scalability advantages of SAP HANA.
* The typical approach for accelerators involves replicating the data needed for
data-intensive operations into an SAP HANA table.
* A type of “switch” is then set in the SAP Business Suite application to indicate that whenever
these specified tables are read, the read operation will take place in SAP HANA using a
"secondary database connection".

SAP HANA as Data Mart


* Is an industry term for a repository of data gathered from operational data originating
in transactional systems and/or other sources
* With SAP HANA, operational data marts offer real-time analytics and reporting on data
replicated from a transactional system’s database.
* The raw tables themselves are copied from the transactional system’s Dbase into HANA
