
HANA, the hot cake of the market.

I have been hearing about HANA since the beginning of this decade or even earlier. Initially I thought it was just a new database, so why the fuss? My crooked mind used to say: maybe SAP does not want to share the market revenue with any other database provider (competitor), therefore they came up with their own database. Pat SAP on the back for smart business acumen.

Later I had a notion that HANA is only for BI/BW folks, so being an ABAPer why should I care? Everyone used to talk about analysis and modelling. So I used to think, let the BI/BW modelers worry about HANA.
Then the rumour started in the market: ABAP and ABAPers are going to be extinct in the near future. I used to wonder, if ABAPers are going to die, then who in this whole universe would support those tons and tons of ABAP code written over the history of SAP implementations? What would happen to all the time, effort and money spent on those large- and small-scale SAP implementations? What a waste of a rumour!!
I have spent more time researching what HANA is than actually learning it. The internet is full of information regarding HANA, but finding the right answers to your curiosity or doubts is an uphill task.

I had some silly questions about HANA but felt a little embarrassed to ask the experts. I spent (and wasted) a lot of time trying to figure out what HANA is, who needs it and why.

Some of the questions which I had, and which I am sure every novice in HANA would have as well, are below:
Q. Is SQL a pre-requisite to learn HANA?
Q. Without SAP BI/BW/BO knowledge, can I learn HANA?
Q. Is SAP ABAP skill required to learn HANA?
Q. Is HANA for functional folks, technical folks or modelers?

Please find answers to these SAP HANA doubts, from one beginner in HANA to another. They might not be very technical or in-depth, but they should definitely be enough for a beginner, and I am sure newcomers will appreciate this selective information.

Q. Is SQL a pre-requisite to learn HANA? (Being an ABAPer, this was one of the most feared questions for me)
Ans: No.
SAP HANA is like any other relational database. Having Database Concepts and basic
knowledge of SQL before starting SAP HANA is an advantage, but it is not a pre-requisite. You
can always catch up with these concepts while learning SAP HANA.
Q. Without SAP BI/BW/BO knowledge, can I learn HANA? (I am sure all ABAPers have this
question)
Ans: Yes.
BI is the data warehousing implementation package from SAP. Data warehousing concepts in SAP BI will help you understand the implementation aspects from a BW on HANA perspective. But unless you plan to be a BW on HANA consultant, you do not necessarily have to learn BI.
Similarly, BW and BO are Business Warehouse and Business Objects respectively. If you have prior BW experience, understanding the modeling concepts and transferring data from the SAP Business Suite system to HANA would be child's play for you. But we can easily learn the HANA modeling concepts even without current exposure to BW. It would, however, be a must for those consultants who are eyeing the role of a BW on HANA expert.
By now, I have understood that BO is a front-end reporting tool. Prior knowledge of reporting tools would be an advantage, but we can always learn BO concepts while learning HANA.

But if you already have BI/BW/BO knowledge, then BW on HANA is the role you would be targeting (if you are planning to shift to HANA).

Q. Is SAP ABAP skill required to learn HANA?


Ans: No.
Whatever we said above for BI/BW/BO applies to ABAP as well.
If you are an SAP ABAP programmer, then implementing the business logic and models would be fun for you. You must have already heard about SAP ABAP on HANA. Let's put a full stop to the rumour that ABAPers are vanishing. With HANA, ABAPers will be smarter and more in demand. Only ABAP on HANA consultants need ABAP knowledge as a pre-requisite.
Q. Is HANA for functional folks, technical folks or modelers?
Ans: All.
Like any other technology, HANA also has segregation of duties, therefore the answer to this question is ALL. Some of the HANA job roles are below:
i) HANA Admin and Security
Our current SAP Basis/Security/GRC guys would be the nearest cousins of HANA Admin and
Security folks.
ii) HANA Data Replicator
Just as a normal SAP implementation project has a Conversion and Interface team and experts, the HANA Data Replication role is similar to that. SAP BI/BO folks are the closest. They will use jargon like SLT, BODS, DXC etc.
SLT = SAP Landscape Transformation
BODS = Business Objects Data Services
DXC = Direct eXtractor Connection
iii) HANA Modeler
SAP BW gurus are already doing modeling; SAP HANA Modelers will do the same.
iv) HANA Application Developer
HANA XS or ABAP on HANA Developers.
Q. HANA means in-memory. In-memory means RAM. We all know RAM is volatile, temporary memory. Does it mean all data would be lost when the power goes down or there is a reboot, i.e. if there is a hard or soft failure?
Ans: No. SAP must have thought of this even before they started the development. (I cannot be smarter than SAP.)


Data is stored in RAM, that is right. But on power failure for any reason, data is not lost. Here comes the concept of Persistent Storage.
Transaction data is written to the Log Volume on every commit. Data is saved to the Data Volume every 300 seconds or as configured. These create savepoints.

In case of a reboot or power start-up, the system can be taken back to the last consistent savepoint, and the committed data from the Log Volume is then replayed.
Q. SAP HANA claims to be so fast. Which programming language is it written in?
Ans: World famous C++.

Q. What is the Operating System of SAP HANA?


Ans: SUSE Linux Enterprise Server (SP x) and Red Hat Enterprise Linux 6.5.
Q. Another question which I always had was: if HANA is about RAM, can we increase the memory size of a traditional database and get performance similar to HANA?
Ans: No.
We would definitely get better performance if we increased the memory size of a traditional database, but it would not be comparable to what we get in HANA. But why?
Because HANA is not just about the database. It is a hybrid in-memory database which is a combination of niche hardware and software innovations, as stated below:

In-Memory storage (RAM): Processing data from RAM itself is a million times faster than accessing data from a hard disk. In practical scenarios, it is around 10x to 3,600x faster. Also, in today's world RAM is cheap and affordable.
Trivia: Read time in RAM: 2 MB/ms/core (2 megabytes per millisecond per core).
So scanning 1 GB of data would take approximately 0.5 s/core. For 100 GB it would take 50 s/core. If you have 50 cores in the hardware, scanning 100 GB of data would take just 1 second.
Huh!! Quantitative numbers always clarify better than paragraphs of sentences. Isn't it?
Multi-core Architecture, Partitioning & Massive Parallel Processing: Servers are available with up to 64 cores per node (and even more). So partitioning the data footprint across different nodes and running queries in parallel is the innovation which HANA uses so effectively. This is a perfect example of both hardware and software innovation.
Columnar Storage: Contiguous memory allocation.
Faster reading with sequential memory access. Remember, the column store does not only make reading faster; HANA has built the column store in such a way that it is efficient for both READ and WRITE.
Quick aggregation (normally aggregations are expensive), which also supports parallel processing.
Searching in a column store is much faster than in row storage (provided you are selecting only some of the columns, not all).
Data Compression: Minimize the data footprint through compression, i.e. less data movement means faster performance.
The idea is to remove repetitive data: build a dictionary (vector) of the distinct values and point to each value with an integer (comparing an integer is less expensive than comparing a string). For example, a column with the values Pipe, Pipe, Motor can be stored as the dictionary [Motor, Pipe] plus the integer vector [1, 1, 0].
Q. Ok heard enough of Column Store in HANA. But, how does Column Storage actually
make it faster?
Ans: The column store is divided into three parts: i) Main ii) L2 Delta iii) L1 Delta/cache.
Persisted data is saved in Main, all buffered and transactional changes are kept in L2 Delta, and high-frequency inserts/deletes/updates etc. land in L1 Delta.
L1-delta
accepts all incoming data requests
stores records in row format (write-optimized)
fast insert and delete
fast field update
fast record projection
no data compression
holds 10,000 to 100,000 rows per single-node

L2-delta
the second stage of the record life cycle
stores records in column format
dictionary encoding for better memory usage
unsorted dictionary
requires secondary index structures to optimally support point query access patterns
well suited to store up to 10 million rows

Main
final data format
stores records in column format
highest compression rate
sorted dictionary
positions in dictionary stored in a bit-packed manner
the dictionary is also compressed

So the smart innovation of L1 Delta, L2 Delta and Main, and the combination of all three, makes data reads and writes really fast and effective.

These are some of the obvious questions which almost all beginners in SAP HANA have. I had to dig through different sources to collect and understand these concepts. Hopefully, having all this information in one place will help you understand it better.
A couple of points to add: HANA is, to keep it simple, a Sybase RDBMS re-tooled with an in-memory columnar architecture, plus added memory blades and storage. HANA only runs on SAP-certified hardware, which is referred to as an appliance. This makes for a powerful self-contained data server.
Second, to build on the super-fast data access architecture, hardware suppliers such as IBM
and HP (and others) are boosting their appliances with the newest technology that will
further speed up data processes, such as Haswell processors, memory build-ups, etc.

All this together makes for a revolutionary impact on the speed of doing business, and companies have come to learn that business process re-engineering is a natural follow-up after deploying HANA. That's why the world is excited about this!

Finally, you are somewhat clear as to what you want to do in HANA. Now a greater hurdle: neither your employer organization nor your current project client has a HANA database. So how would you explore the tremendous power and innovation of HANA? Is it the end of the road? Does your acquaintance with HANA end here?

Do not worry, there is always a way around. You just need to have the zeal to learn and find
out the alternatives.

When a person really desires something, the whole universe conspires to help that person
to realize his dream.

Paulo Coelho, The Alchemist

The easiest option (and a better one, if you can afford it) is to enroll in an authorized SAP classroom/online HANA training session. Consider it an investment in your future.

But if you do not want to shell out some $$$ right now, or you want to get some bare-minimum knowledge of HANA first and take proper formal training later, you have another free alternative.

Let us create an SAP HANA Cloud Platform Account.


Step 1. Go to the HANATrial web page.
https://account.hanatrial.ondemand.com/
If you have an SAP S-User ID (the same ID which you use on the SAP Service Marketplace to search SAP Notes) or an SCN ID, hit Log On. If you do not have these IDs, hit Register. Fill in the short form and you will get a link in your email to activate the account.

Assuming you now have your user ID and password, click on the Log On button.

Here you are at the HANA Cloud Platform Cockpit. Ready to fly guys??

Hold on!!

Check your User name and Account name. Note it down. You will need it later to access the
platform from the studio.

Step 2. Create a trial SAP HANA Instance.


You can create just one. Consider an SAP HANA instance as a database schema with the database property HANA XS. Click on New Trial Instance, give it a name you like and hit Save.
Update on 26th September 2016

HCP has changed over time. New users will not see the Dashboard and HANA Instances menu on the left side of the panel; it will look something like below. Just hit Databases & Schemas and create a new DB/Schema instance. I have used the HANA MDC (<trial>) Multitenant Database Containers DB system.

Make a note of the password you enter.

Please note: when you add the Cloud System, the database user ID will be SYSTEM and the password is the one which you just set above.
Pre-requisite to play in the SAP HANA Cloud Platform: you need HANA Studio and Client, or you should have Eclipse. In this post, you will see how to download and use Eclipse. HANA Studio/Client!! You need to wait for some other time, buddy..

And the pre-requisite for Eclipse is, you need to have the updated Java Runtime Environment
(JRE). If you are not sure whether you have an updated JRE or not, just download a new one
and install it.

Step 3. Download the Java Runtime Environment (JRE).


Our google drive for JRE. Click here to download JRE.
You can also go to the official site to get the JRE
http://www.oracle.com/technetwork/java/javase/downloads/index.html
Step 4. Download Eclipse.
Our Google Drive for Eclipse Luna. Click here to download Eclipse.
You can also go to the official Eclipse site and download it.
http://www.eclipse.org/downloads/packages/release/Luna/SR2
If you plan to go via the official site, the following screens will assist you.
Whether you download from our drive or from the official site, make sure you extract the JRE and install Java first. After Java is installed, extract the Eclipse zip folder.

Click on eclipse.exe. It will ask for a default workspace; hit OK.
When Eclipse opens, it will take you to the Welcome page.

Step 5. Add HANA Tool and HANA Cloud Platform Tool.


In Eclipse, you need to add new software to access the HANA Cloud Platform. Go to the Help menu and Install New Software as shown below.
Put in the URL https://tools.hana.ondemand.com/luna/ and hit Add/Enter to get the tools.
Select HANA Tools and Cloud Platform Tools. Hit Next, accept the terms and conditions and Finish. Your Eclipse will restart.

Get ready for Fun now.


Go to Window menu -> Open Perspective -> Other -> SAP HANA Development.
Step 6. Link Studio to SAP HANA Cloud Platform Instance
You have a fresh studio. Link the studio to the SAP HANA Cloud Platform instance which you created earlier. Hit Add Cloud System as shown below.
Provide the cloud account name (with the suffix trial). The system will prompt you for the hanatrial landscape host (hanatrial.ondemand.com); select it. Give your user name and password.
Caution: I misspelled trial as trail and got the below error.

Message: The information about SAP HANA schemas and databases cannot be fetched from
SAP HANA Cloud Platform. Check the error log for more details.

I wasted a day figuring out what went wrong. If you get the above pop-up error message, you know you need to correct your user name and account name.

Updated 26th Sept 2016: Multitenant Database Containers (MDC)


Hit Next. Select from the drop-down the schema which you created in the free cloud account and hit Finish.

OLAAA!! Your HANA genie is ready for your service. Make her work for you.. Have fun!!
PS: Please work in the schema starting with NEO_. SAP HANA on the Cloud Platform has some limitations which can be reduced by working in the NEO_ schema. In the next post, I will show the limitation and how we can get past it. So for now, work only in the NEO_ schema and do not create anything in the DEV_ schema.
NEO_ = Yes Yes. Play in it.
DEV_ = No No. Do not create anything in the DEV_ schema.

Lets Practice HANA

I was tempted to end this post here as it is already late in the night.
But since you stood by patiently, installing each and every component, software and tool, how can we close before writing your first statement in HANA? Let us create our first custom table in HANA, populate some data and view it. Buckle your seat belt!!

What are we going to practice now?


1. Create a custom table in HANA using SQL (in the next post, I will show you how to create a custom table without SQL code, just like SE11)
2. View the table definition
3. Add data to the custom table using SQL
4. Display the data entries saved in HANA

Since we are going to write some SQL statements, right-click on your schema and select Open SQL Console. Trust me, you do not need prior SQL knowledge, just a couple of keywords which I will provide.

Practice 1: Create custom table.


SQL Code: Self explanatory

CREATE COLUMN TABLE ZM_MARA (
  MATERIAL INTEGER,
  CREATED_ON DATE,
  CREATED_BY VARCHAR(12),
  MATERIAL_DESC VARCHAR(40),
  PRIMARY KEY (MATERIAL)
);
Put the above code in the SQL console, change the field names and types if you want your own, and hit the Execute arrow icon. It will create your first custom HANA table.
Please note: your table name need not start with Z. Since I am an ABAPer, old habits die hard.

Practice 2: View table definition


Right-click on the table name and hit Open Definition. If you do not see your custom table below Tables, right-click and hit Refresh.
Check that the table is created with the Column Store table type. We can change this type; we will discuss that in subsequent posts.

Practice 3: Add data to the custom table using SQL.


SQL Code: Self explanatory

INSERT INTO ZM_MARA VALUES (900, '20150917', 'SAPYard', 'RING SEAL TEFLON');
INSERT INTO ZM_MARA VALUES (901, '20150917', 'SAPYard', 'Turbine Rotor');
INSERT INTO ZM_MARA VALUES (902, '20150917', 'SAPYard', 'Gas Pipe');
INSERT INTO ZM_MARA VALUES (903, '20150917', 'SAPYard', 'Motor');
Put the above code in the SQL editor and hit the Execute button. Check that the log below says success.

Practice 4: View your entries


Right-click on the table name and hit Open Content. Check that the table has all the entries you added above using the SQL INSERT statements.
A brownie for those who are a step ahead in the practice session.

CALL "HCP"."HCP_GRANT_SELECT_ON_ACTIVATED_OBJECTS";

If you get an error while calling the above statement, it means you have created some objects in the DEV_ schema. Delete everything (tables/views etc.) from the DEV_ schema and do the same exercise in the NEO_ schema.

Views are an integral part of data modeling in HANA. But what does a View mean to a non-technical person?
In the language of relational databases, a View is a virtual table, i.e. a table which does not actually store any data physically, but shows you data derived from one or more other tables.
Views allow logical cross-sections of data to be generated in response to the needs of specific applications, so that the required data can be accessed directly and efficiently.
What are the different types of Views in SAP HANA?
1. Attribute View
2. Analytic View
3. Calculation View
Before we go into the nitty-gritty of these types of Views, let me throw out a simple question: why do we need these three types of views? Let your brain do some churning..
Have you heard the argument about which side (USA or Canada) the Niagara Falls look better from? Oops, what relation does Niagara have with HANA Views..

Why do some folks visit Niagara from the US side while some take the trouble to go to Canada, and why do some visit it from both sides?
The answer is: to View. Yes, view it from different angles. View it as per one's choice/ecstasy and perform certain activities along with the view.
If you want the gorgeous panorama of the American Falls along with the mighty Horseshoe Falls, then the Canadian side should be your choice.
But if you want a cheap parking area and the opportunity to get up close and personal with the waterfalls (including the American and Bridal Veil Falls and the Cave of the Winds), then the American side is your bet.
If you want to spend a few more bucks, then you might want to be a little more adventurous, take the Maid of the Mist boat and get a closer, more detailed view of the Falls, which you would not get by just standing near the edge.
The data is the same, i.e. Niagara Falls. But you view and appreciate it according to your need, accessibility, choice and preference.
Similarly, we use the Attribute, Analytic and Calculation Views, according to our need,
requirement and what we want to do with the data and what we want to see, show and
report.
Attribute View:
If you want to see only text and numeric characters (not quantities/amounts), then Attribute is your view. Usually in SAP, the material number, customer id or vendor id is present in one master table and the corresponding texts like name, address and contact information are in a different table. For example, the material id is in MARA, while the material description is in the MAKT table. When you want to join these two master tables to display the dimensions (text attributes/characteristics) of the material number, Attribute Views are created.
Please do not confuse numeric characters with numbers like quantity, amount or currency value. A Material/Vendor/Customer number may be 1000101, but it is still character/text (NUMC). So you can use them in an Attribute View. An Attribute View can also be made on transactional data, but it does not make sense to have an Attribute View of non-quantifiable transactional fields.
Numbers are not for Attribute Views.

Analytic View:
If you want to play with numbers, quantities and currencies, then Analytic View should be
your choice. G/L amount 1,000,101.00$ is for Analytic View. PO quantity 10.00 EA of value
4500.00 $ is also for Analytic View.
According to SAP, Analytic Views are multidimensional views that analyze values from a single fact table (like sales, deliveries, accounting etc.) which contains transactional data. In simple words, an Analytic View is typically used for analyzing numerical data and figures.
But numbers alone do not make sense. If the retail shop says $143.00 worth of goods was sold, it does not give any relevant info. But if they tell you that $143.00 worth of material id PV10001, material name "PVC Pipes 1 in", was sold to customer id 900499, customer name "SAPYard Groups", it does.
In the above hypothetical example, $143.00 is the measure, fact or transactional data, while material id PV10001, material name "PVC Pipes 1 in", customer id 900499 and customer name "SAPYard Groups" are dimensions, characteristics or attribute data.
Fact Table + Attribute Views and/or Tables = Analytic View

Again, Attribute View might contain material id and name. Similarly, vendor id and name can
be another Attribute View.

So, we can safely say, Analytic view can be derived from a fact table along with single table
or joined tables and attribute views.

Analytic views are highly optimized for aggregating mass data.


Calculation View:
If you are ready to go the extra mile and do some complicated and/or simple calculations/mathematics (usually custom), then Calculation View should be your area of play. Say you want to calculate the net value of the PO ($4,500.00) after giving a 10.00 percent discount; then you need to do some mathematics (4500 - (10/100 * 4500)) to get a figure of $4,050.00.
Calculation Views can be referred to as a combination of tables, Attribute Views, Analytic Views and even other Calculation Views to deliver a complex business requirement. A Calculation View can logically link two or more Analytic Views.
Usually, when the modeling requirement cannot be met using an Attribute View and an Analytic View, or when we need to use two or more Analytic Views and derive a resultant set, the Calculation View comes into the picture.
Just like the Attribute View has the limitation of using only non-numeric data, one Analytic View cannot consume another Analytic View. When we have a complex need to use multiple Analytic Views, the Calculation View is the only respite.

In simple terms, a Calculation View is a view with SQLScript inside (with the calculation logic). It has both a graphical and a script-based editor.

SAP coined the Views very smartly using their literal meanings.
Attribute = Characteristic/Dimension/Trait => Master Data (Material/Customer/Vendor etc.; does not change very often)
Analytic = Fact/Measure/Quantity/Numbers => Transaction Data (Sales Orders/Purchase Orders/Delivery Quantities/Accounting Documents etc.; gets created and changed every day, and the data set grows at a faster rate)
Calculation = Mathematics/Derived Numbers => Custom Calculated Data (find the net price after deducting the tax)

With the above fundamental, you might have some quality doubts and questions like below.

Q. Why should we link an Attribute View to the transactional (fact) table to create an Analytic View? What is the advantage? Why not just join the tables directly to the transactional table?
A. Yes, technically we can join tables directly to fact tables. But creating Attribute Views has more advantages, and HANA advocates modeling objects like Attribute Views instead of tables.
Reasons:
i) Reusability: Attribute Views are reusable building blocks and will be useful in future developments.
ii) Maintenance: Any change in the dimension or characteristic of a field in an Attribute View flows downstream to all developed objects and models. We do not need to change/update each and every development.
iii) Coherence: If we always use Attribute Views instead of adding the base tables, we can ensure that all our developments are coherent.
iv) Analysis: HANA does not have a where-used search for tables, but we can do a where-used search on HANA modeling objects. So it is easier to find the views, and then figure out the tables, when we want to do some analysis or investigation.
Q. Can there be a calculated field/column in an Analytic View or Attribute View?
A. Yes, there can. But any Attribute/Analytic View containing a calculated attribute would automatically become a Calculation View.
Q. Attribute Views do not store data, then how does it display the output?
A. When Attribute Views are called for output, the Join Engine takes care of processing the
data and providing the output.
Q. Which engine is responsible for Analytic View?
A. OLAP (Online Analytical Processing) engine processes the Analytical View.
Q. How does Calculation View work?
A. Once the Calculation view is successfully generated, a column view is generated in
_SYS_BIC Schema. This column view is available to HANA reporting tools.
Q. Is Calculation View directly available for reporting?
A. No. But, Calculation view can be made available for reporting, by enabling
MultiDimensional Reporting under the Semantics section. Once it is enabled, the execution
occurs using CE (Calculation Engine) functions in the Index Server at the database level.
Q. Between Calculation View and Analytic View, which view has better performance?
A. Analytic View. Calculation View is executed in CE (Calculation Engine) while Analytic View
in OLAP. Calculation View is not as fast as an Analytical View.
Please note:
Analytic views with calculated attributes and Calculation views both run in Calculation
Engine.
Analytic Views (without derived columns or calculated columns) use the OLAP Engine
Q. What is the analogy of these Views with SAP BW terminology?
A. An Attribute View is like a BW Dimension which can be reused throughout the system and not just in one model.
InfoCubes or InfoSets in SAP BW are the closest cousins of the Analytic View.
Q. An Analytic View can contain Attribute Views. So will the Join Engine of the attribute views be triggered, or the OLAP engine of the Analytic View?
A. During activation of the Analytic View, the joins in the Attribute Views get flattened and included in the Analytic View run-time object. Only the OLAP engine is used thereafter.
The famous Engine diagram to conclude this post.

SAP HANA, short for High-Performance Analytic Appliance, is an in-memory, column-oriented, relational database management system developed and marketed by SAP SE. HANA's architecture is designed to handle both high transaction rates and complex query processing on the same platform.
Multi-core CPU architecture and the 64-bit address space are the hardware innovations that enabled SAP to design SAP HANA. Similarly, columnar data storage, improved data compression algorithms and the insert-only approach are the software innovations that have contributed to the SAP HANA evolution. SAP HANA supports the combination of Online Transactional Processing (OLTP) and Online Analytical Processing (OLAP) on the same database instance.

Enhancements in SAP NetWeaver 7.4:


Release 7.4 offers state-of-the-art support for application development optimized for SAP HANA, and is therefore the favored release for SAP HANA. Open SQL is a DB abstraction layer that defines common semantics for all SAP-supported databases. Recent Open SQL enhancements in SAP NetWeaver Application Server ABAP 7.4 include the usage of CDS views in the FROM clause of query statements and character-like literals in the SELECT list of query statements. CASE expressions and COALESCE functions in the SELECT list of query statements are new kinds of conditional expressions featured by the recent Open SQL enhancements.

The good thing is that all databases certified by SAP support the recent Open SQL enhancements (e.g. CDS). Only if we really want to, or have to, consume native HANA artifacts or features that are not accessible with Open SQL or with CDS do we have to dig deeper and consume them natively. The other good thing is that if you need to change an existing Open SQL statement to the new Open SQL syntax and there are other Open SQL statements in your object, you need not change them all; they are still valid.

SAP NetWeaver 7.4, generally available since May 2013, is the version that is currently fully optimized for SAP HANA. It also facilitates the development of modern browser-based and mobile applications because of its integrated UI development toolkit for HTML5 (SAP's HTML5-based toolkit, known as SAPUI5) and SAP NetWeaver Gateway capabilities.

At the core of SAP HANA is the high-performance, in-memory SAP HANA database. It can manage structured and unstructured data, and supports both transactional and analytical use cases. As a traditional relational database, the SAP HANA database can function both as the data provider for classic transactional applications (OLTP) and as a data source for analytical requests (OLAP).

SAP HANA provides standard database interfaces such as JDBC and ODBC and supports
standard SQL with SAP HANA-specific extensions. In short, SAP HANA is an RDBMS offering
SQL interface and transactional isolation.

The latest release of SAP NetWeaver is optimized for SAP HANA and adds new capabilities to
the developers workbench for cloud, mobile and social networking.

Columnar Data Storage:


A database table is conceptually a two-dimensional data structure organized in rows and
columns. Computer memory, in contrast, is organized as a linear structure. A table can be
represented in row-order or column-order. A row-oriented organization stores a table as a
sequence of records. Conversely, in column storage the entries of a column are stored in
contiguous memory locations. Only the relevant column is fetched for the query, thus
reducing the amount of data to be processed.
With columnar data, operations on single columns, such as searching or aggregations, can be
implemented as loops over an array stored in contiguous memory locations. Such an operation
has high spatial locality and can efficiently be executed in the CPU cache. With row-oriented
storage, the same operation would be much slower because data of the same column is
distributed across memory and the CPU is slowed down by cache misses.
SAP HANA supports both, but is particularly optimized for column-order storage.

Columnar data storage allows highly efficient compression. If a column is sorted, often there
are repeated adjacent values. SAP HANA employs highly efficient compression methods, such
as run-length encoding, cluster coding and dictionary coding. With dictionary encoding,
columns are stored as sequences of bit-coded integers. That means that a check for equality
can be executed on the integers; for example, during scans or join operations. This is much
faster than comparing, for example, string values.

Columnar storage, in many cases, eliminates the need for additional index structures. Storing data in columns is functionally similar to having a built-in index for each column. The column scanning speed of the in-memory column store and the compression mechanisms, especially dictionary compression, allow read operations with very high performance. In many cases, it is not required to have additional indexes. Eliminating additional indexes reduces complexity and eliminates the effort of defining and maintaining metadata.

SAP HANA realizes dictionary compression for column store tables by keeping a sorted dictionary of the distinct values of each column; the column itself then uses an array of integer values that represent the positions of the actual values in the dictionary. Column store is recommended for a table when it contains a huge amount of data that is frequently searched or aggregated, and when the table contains many columns but typical queries access only a few of them.

SAP HANA Transport Container (HTC):


With SAP NetWeaver 7.4, applications containing ABAP and HANA development entities can now be easily developed, updated, corrected and enhanced. As is usually done for reasons of quality assurance, the different ABAP and HANA development entities have to be transported through the system landscape, typically from the development system to the testing/quality system and then to the production system. Here the SAP HANA Transport Container (HTC) comes into the picture.
HTC is an ABAP development object which is required to integrate HANA repository content into the standard Change and Transport System (CTS). As of AS ABAP 7.4, HTC is seamlessly integrated into the Transport Organizer of AS ABAP, thereby integrating the HANA repository content into CTS. It ensures an efficient delivery process for applications built out of ABAP (say a method) and HANA content (say an AMDP), or simply ABAP for SAP HANA applications, between SAP systems by means of the proven ABAP transport mechanism.
ABAP in Eclipse:
The ABAP Development Tools for SAP NetWeaver tie in perfectly with SAP HANA Studio and SAP's in-memory technology by allowing highly productive application development on top of SAP HANA. The ABAP Development Tools significantly increase ABAP developer productivity through the rich Eclipse user experience and flexibility, new capabilities for sophisticated source code implementation, and task-oriented and test-driven development. The ABAP Development Tools enable cross-platform application development by integrating ABAP and non-ABAP development tools into one Eclipse-based IDE. Built-in extensibility of the IDE through the established Eclipse plug-in technology enables you to benefit from the huge Eclipse ecosystem, develop on an open platform and integrate new custom ABAP and non-ABAP tools. An ABAP Project in the ABAP perspective of ADT for SAP NetWeaver serves as the central interface for communication between the Eclipse-based development environment and the ABAP back-end system.
Open and Native SQL:
Open SQL allows us to access database tables declared in the ABAP Dictionary regardless of
the database platform that the R/3 System is using. Native SQL allows us to use database-
specific SQL statements in an ABAP program. This means that we can use database tables
that are not administered by the ABAP Dictionary, and therefore integrate data that is not
part of the R/3 System.

As a rule, an ABAP program containing database-specific SQL statements will not run under
different database systems. If your program will be used on more than one database
platform, only use Open SQL statements.

Please note, not all ABAP custom code will show drastic performance improvements automatically. In order to take maximum advantage of SAP HANA, our custom code should comply with the enhanced SQL performance guidelines.
ABAP Test Cockpit (ATC):
The ATC can be used to check the ABAP code for potential functional regressions/issues and correct them (if necessary) before migrating to SAP HANA. The ADBC (ABAP Database Connectivity) interface check in the Code Inspector should be done before migrating to SAP HANA to avoid functional gaps. The Runtime Check Monitor (transaction SRTCM) can be used to get additional runtime information about potential functional regressions. The PERFORMANCE_DB checks can be used to check the performance optimization potential before migrating to HANA.
The SQL Monitor (SQLM) can be used to capture the SQL profile of the ABAP system. SQL Monitor data can be exchanged between two systems by creating a snapshot of the SQL Monitor data, exporting it to the file system, and then importing it into the target system. The SQL profile of the business processes in the production system can be captured because the SQL Monitor introduces only minimal performance overhead.
The SQL Performance Tuning Worklist (SWLT) allows correlating the results of an ABAP source code analysis with SQL runtime data. The report gives a list of potential issues; we should start investigating the ones at the top. We need to set a goal for how much we want to optimize, work on that set, and check whether the corrections/optimizations meet our self-defined or business-defined requirements.
The SAP List Viewer with Integrated Data Access (ALV with IDA) is based on the principle of selecting only the data to be displayed from the database and performing calculations, aggregations and grouping on the database layer.
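A minimal sketch of ALV with IDA, assuming the flight demo table SFLIGHT exists in the system, could look like this; only the rows and columns that are actually displayed are read from the database, and sorting and aggregation are delegated to the database layer.

" display SFLIGHT with ALV IDA; paging, sorting and aggregation happen in the database
cl_salv_gui_table_ida=>create( iv_table_name = 'SFLIGHT'
  )->fullscreen( )->display( ).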
Code to Database Paradigm Shift:
The Code-to-Data paradigm helps to improve the performance of data-intensive ABAP coding because the in-memory capabilities of SAP HANA allow calculations to be performed on the database layer, which helps to avoid unnecessary movement of data. We have to consider the fact that SAP HANA and AS ABAP use different type systems when we follow the Code-to-Data paradigm using Native SQL. Performance can also be increased by using the new features in Open SQL, using AMDP and using the view entities provided by the advanced view definition capabilities.
Core Data Services (CDS) views and AMDP (ABAP Managed Database Procedures) ABAP artifacts can be created and maintained via the ABAP Development Tools for SAP NetWeaver.
Core Data Services (CDS):
CDS is a mechanism to push logic down to the database. We use this effective innovation to put the code, and execute the logic, in the database itself. So in simple words, CDS helps us run our logic in the database.

Now let us look at the formal definition and explanations. Core Data Services are a higher-order SQL that relieves application developers from low-level SQL coding, for example by generating the required code for referential navigation automatically, and they also form the basis for unified data models in the SAP HANA context. The intention is for SAP HANA to be able to consume various data sources on the same semantic level, regardless of whether they are delivered by an ABAP program or an SAP BusinessObjects model. Core Data Services are included as part of SAP HANA extended application services, an application server that is shipped with SAP NetWeaver 7.4 SPS 05 and SAP HANA as of SPS 06.

CDS is a collection of domain-specific languages and services for the definition and consumption of semantically rich data models. A Data Definition Language, Data Control Language and Data Manipulation Language are included in CDS. A Core Data Services (CDS) view can be consumed by using the Data Preview in the ABAP Development Tools for SAP NetWeaver and by using it as a data source in the FROM clause of an Open SQL query. Conditional expressions like COALESCE functions and CASE statements can be used in the projection list of CDS views. The projection list of a CDS view can include a field from the projection list of another CDS view used in the FROM clause, string constants and literal values, and aggregation functions over fields of ABAP Dictionary tables used in the FROM clause. We can use the static method use_features of class cl_abap_dbfeatures to check whether CDS views with scalar input parameters can be used in Open SQL queries in the system. CDS now supports LEFT OUTER JOIN and RIGHT OUTER JOIN. The target entity is mandatory in the definition of an association in a CDS view.
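For the feature check mentioned above, a minimal sketch (assuming release 7.40 SP08 or higher; the surrounding logic is illustrative only) could look like this:

" check at runtime whether the current database supports CDS views with input parameters
IF cl_abap_dbfeatures=>use_features(
     requested_features = VALUE #( ( cl_abap_dbfeatures=>views_with_parameters ) )
   ) = abap_true.
  " safe to SELECT from a CDS view with parameters here
ELSE.
  " fall back to a database-independent implementation
ENDIF.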

The Map to Data Source option in the SAP NetWeaver Gateway Service Builder (transaction SEGW) can be used to implement the consumption of a Core Data Services view. The DDL source (DDLS) in which the CDS view is defined is included in a transport request when we transport a CDS view. CDS views can also be extended programmatically using the EXTEND VIEW statement.

The enhancements included in CDS are:


Associations on a conceptual level, replacing joins with simple path expressions in queries
Annotations to enrich the data models with additional (domain specific) metadata
Expressions used for calculations and queries in the data model

In layman's words, annotations and associations are extensions of CDS to SQL.


Associations in a Core Data Services (CDS) view can be consumed in the FROM clause, in the WHERE and HAVING clauses and in the projection list. The main purpose of associations in Core Data Services is to define relationships between entities. Dictionary tables, CDS views and Dictionary views can be used as a data source in a CDS view. In simple words, ABAP Dictionary tables, CDS views and Dictionary views can be queried in the Open SQL SELECT statement.
When an association is consumed in CDS using a path expression, a JOIN is created in the underlying database.
The benefits of replacing a JOIN statement with an association in a CDS view are that it can be consumed using a simple path expression, the ON conditions for the association are generated automatically, and associations can also be exposed themselves.
The annotation AbapCatalog.sqlViewName is mandatory for the definition of a Core Data Services (CDS) view. The AbapCatalog.buffering annotations have scope over the entire CDS view. In other words, annotations are domain-specific metadata. The symbol @ (at) is used to mark annotations in CDS views.
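Putting the mandatory annotation, an association and a path expression together, a minimal ABAP CDS sketch (a DDL source created in ADT) could look like the one below. The view, SQL view and association names are assumptions for illustration; snwd_so and snwd_bpa are tables of the EPM demo model.

@AbapCatalog.sqlViewName: 'ZSAPYARDSO'
@EndUserText.label: 'Sales orders with buyer name'
define view Z_Sapyard_Sales_Order
  as select from snwd_so
  association [0..1] to snwd_bpa as _buyer
    on $projection.buyer_guid = _buyer.node_key
{
  key snwd_so.so_id,
      snwd_so.buyer_guid,
      snwd_so.gross_amount,
      _buyer.company_name as buyer_name, // path expression: the join is generated from the association
      _buyer                             // association exposed to consuming views and queries
}

An Open SQL SELECT can then simply read from this view without knowing anything about the underlying join.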
The main advantage of using Core Data Services (CDS) in SAP NetWeaver Application Server ABAP is that we can define complex data models that can be consumed in a simple Open SQL SELECT statement, and we can use the extended view-definition functionality to push code down to the database layer. It is important to note that scalar input parameters are database-dependent. So if we plan to consume a CDS view with scalar input parameters in the FROM clause of an Open SQL query, we need to keep in mind that the query cannot be executed on all SAP-certified databases.
ABAP Managed Database Procedure (AMDP):
An ABAP Managed Database Procedure is a new feature in AS ABAP allowing developers to write database procedures directly in ABAP. An AMDP can be considered as a function stored and executed in the database. The implementation language varies from one database system to another; in SAP HANA it is SQLScript. Using AMDP allows developers to create and execute those database procedures in the ABAP environment using ABAP methods and ABAP data types.

In direct words, an AMDP class is nothing else but a container of methods, and the procedure itself is deployed and executed on the HANA database. ABAP Managed Database Procedures follow the top-down approach. In order to implement an ABAP Managed Database Procedure method, we need to implement (mandatorily) the marker interface IF_AMDP_MARKER_HDB in the class. The ABAP language elements LANGUAGE db_lang (db_lang = SQLSCRIPT), FOR db (db = HDB) and BY DATABASE PROCEDURE are mandatory for the implementation of a method as an ABAP Managed Database Procedure. AMDP can also improve the performance of data transformations in the Extract, Transform and Load (ETL) process in BW. We can consume an ABAP Managed Database Procedure in our ABAP coding by calling the corresponding ABAP class method. Source code management using the ABAP transport infrastructure, static syntax checks of database-specific coding and database independence are the advantages of using AMDP.
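A minimal AMDP sketch putting the marker interface and the mandatory language elements together could look like the one below. The class, method and parameter names are assumptions for illustration; snwd_so is a table of the EPM demo model and the system is assumed to run on SAP HANA (FOR HDB).

CLASS zcl_sapyard_amdp DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    INTERFACES if_amdp_marker_hdb.                 " mandatory marker interface
    TYPES tt_so TYPE STANDARD TABLE OF snwd_so WITH DEFAULT KEY.
    METHODS get_orders
      IMPORTING VALUE(iv_min_amount) TYPE snwd_so-gross_amount
      EXPORTING VALUE(et_orders)     TYPE tt_so.   " parameters are passed by value
ENDCLASS.

CLASS zcl_sapyard_amdp IMPLEMENTATION.
  METHOD get_orders BY DATABASE PROCEDURE FOR HDB
                    LANGUAGE SQLSCRIPT
                    USING snwd_so.                 " database objects used inside the procedure
    -- this SQLScript body runs entirely inside the HANA database
    et_orders = SELECT * FROM snwd_so
                 WHERE gross_amount >= :iv_min_amount;
  ENDMETHOD.
ENDCLASS.

The method is then called like any other ABAP method, for example: NEW zcl_sapyard_amdp( )->get_orders( EXPORTING iv_min_amount = 1000 IMPORTING et_orders = DATA(lt_orders) ).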

AMDP has some limitations.


Exporting, importing and changing parameters are allowed
Methods with returning parameters cannot be implemented as AMDPs
Method parameters have to be tables or scalar types
Method parameters have to be passed as values

ABAP Database Connectivity (ADBC):


ABAP Database Connectivity (ADBC) is an API for the Native SQL interface of AS ABAP that is based on ABAP Objects. The methods of ADBC make it possible to send database-specific SQL commands to a database system and process the result, and to establish and administer database connections. While the statements of Native SQL offer exclusively static access to the Native SQL interface, ADBC makes object-oriented and dynamic access possible.
The ADBC API uses the classes CL_SQL_STATEMENT and CL_SQL_RESULT_SET. After executing a Native SQL query statement via the corresponding ADBC method, we bind a reference to an internal table as the output parameter of the CL_SQL_RESULT_SET instance and fetch the result set into that internal table using its next_package method.
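A minimal ADBC sketch, assuming the flight demo table SFLIGHT exists, could look like this; note that the native SQL string is sent to the database as-is, so the client has to be restricted explicitly.

DATA lt_flights TYPE STANDARD TABLE OF sflight.

TRY.
    " send the native SQL statement and obtain a result set object
    DATA(lo_result) = NEW cl_sql_statement( )->execute_query(
      |SELECT * FROM sflight WHERE mandt = '{ sy-mandt }'| ).

    " bind the internal table as output parameter and fetch the rows
    lo_result->set_param_table( REF #( lt_flights ) ).
    lo_result->next_package( ).
    lo_result->close( ).
  CATCH cx_sql_exception INTO DATA(lx_sql).
    MESSAGE lx_sql->get_text( ) TYPE 'I'.
ENDTRY.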

In a traditional data warehouse, real-time data analytics is not possible. It is limited by the design considerations, or let's say it is not designed to be real time.
SAP HANA is an in-memory database and application platform. What makes it different is a design consideration which was not in traditional databases: it stores all of the data in memory, making mixed use of columnar and row storage. The data compression ratio can be as high as 1:10. It allows you to stay away from indexes, materialization and aggregation, which can now be done on the fly. You can run your analytics and OLTP applications on the same instance, thereby reducing the system landscape complexity, and do ad hoc analysis.

Traditional Way:

Limitations in this design:

1. Redundant data at multiple places.
2. Reduced speed because of multiple data accesses from different sources and models.
3. No single source of truth.

With SAP HANA:

Benefits :-

1. Single source of truth.


2. No redundant data
3. Less complex landscape and ease of maintenance.
As shown in the above diagram, with SAP HANA the key is pushing as much of the logic execution into the database as possible. Keep all the data-intensive logic down in the database as SQL, SQLScript and HANA views.

So then the question arises: what is S/4HANA?

S/4HANA is natively built on SAP HANA with a simplified data model, without indices, aggregates or redundancies. It leverages the multitenancy functionality of SAP HANA. It can be deployed on premise, in the cloud or in a mixed/hybrid way. It is natively designed with SAP Fiori, offering an integrated user experience.

Frankly speaking, in essence S/4HANA is just a new name for the same good old ERP functionality; the difference, however, is that ERP can now run on an in-memory database (SAP HANA).
So now I have a brief idea about SAP HANA and S/4HANA; what then is SAP Simple Finance?

SAP Simple Finance is an add-on to ERP, which means it is a standard product developed to work on top of SAP ERP (ECC 6.0 EhP7 to be precise). The official name used to be SAP Simple Finance add-on 2.0 for SAP Business Suite. The Finance add-on replaces the classic Financials applications in your SAP ERP system. In contrast to accounting in SAP enhancement package 7 for SAP ERP 6.0, Simple Finance comes with a simplified data model. Totals tables and application index tables have been removed and replaced with HANA views with the same technical names; a list can be found in the table below. This means the impact needs to be assessed on your interfaces, your data warehouse, your authorization profiles and potentially your monitoring tools.

Smart Data Integrator & Data Load In SAP HANA

Technology, especially real-time technology, is of great help when you are oceans apart and divided by time zones. I had an interesting talk with my buddy, an SAP HANA expert from New Zealand, this morning regarding data load in SAP HANA (just to let you know, I am in Germany right now). I felt it might be useful to share and document the discussion for our own reference.
Backdrop of the call: With SPS 09 of SAP HANA, SAP has tried to make things easier and reduce the confusion over the topic of data load in SAP HANA. With the SAP HANA Smart Data Integration feature, you get an all-in-one package (plus different combinations) of all the loading tools for loading data into a single HANA instance. The intention was to reduce the confusion over whether to use SAP Landscape Transformation (SLT), Data Services (DS) or Smart Data Access (SDA), or to get a single UI that supports all the available tools.
There are justified and valid use cases where a single tool makes more sense and should be preferred (we are not going to discuss that here, as this is more of a conversation that George and I had this morning).
Extract from our discussion, in conversation with an SAP HANA Expert and Solution Architect. I have removed the bullshit bingo from our talk:

George: I think we no longer need SLT to replicate data in real time to an SAP HANA side car, as we have Smart Data Integration (SDI).
Vinay: It depends :-) (I learned the phrase from my Corporate Finance professor). You will not need the advanced features of Smart Data Integration if you are just doing 1:1 replication from the source to SAP HANA. Or let's put it straight: it would not be worth it.
G: Can you reiterate? What are Smart Data Integration and SAP SLT, in brief? I am confused now.
V: Let me tell you the major differences that come to my mind immediately, which I think might help you differentiate:

G: But in my project my replication needs are mixed. I need to have support for:
real time as well as batch;
transforming the data (for batch and sometimes for real time);
mixed scenarios, cloud solution and on-premise solution.
I was looking forward to one UI that supports all of this, one connectivity that supports all my above needs. Should I not move to SDI completely and lay off my SLT and Data Services?
V: Again, it depends.
(Disclaimer: the answer below is my comprehension of what I read in one of the discussions on SCN earlier.)

At a glance, Data Services and the Smart Data Integration feature sound like similar things, but we have to look at the use cases.

Do you have all the instances/applications running on SAP HANA? The HANA solution requires HANA, so Data Services is still required for all use cases that are not using HANA, or where you are using HANA only as a database.
With SAP HANA the major focus is real time, while with Data Services you can do near-real-time and batch processing. The HANA solution does support batch as well for the initial load, but the primary focus was the question: when the source sends a changed row, how do we transform this row according to the initial load rules so that the target data is correct? In Data Services you have to build an initial load, find out how to identify changes, and build data flows that handle the various changes. The intention of the HANA solution is to do as much as possible automatically.
Because of the real-time focus, the HANA solution does transactionally consistent loading, which is one of the design differences from Data Services.
And the final difference: HANA requires access to remote data in various ways, not only batch and real time. It should support Smart Data Access, calculation views etc.
So only if you use Data Services exclusively to load into HANA should you consider the HANA SDI option, as we have overlap only in this scenario.
G: One of our sister concerns is moving to SAP HANA SPS 10 for their reporting needs (transactional as well). They will have HANA as a platform, so should we recommend getting rid of DS?
V: For such scenarios, yes. But as many experts say, get started on HANA to gain experience. If you already have Data Services, do not exchange the technology right away; wait for SAP to give us a migration path from the DS engine to the HANA engine.
First Program in ABAP HANA

Introduction to SAP ABAP on HANA


We have heard enough about SAP HANA, the in-memory concept, software/hardware innovation etc. In this article, we do not want to beat around the theories. As ABAP developers, we would like to know how we can view ABAP objects (programs/FMs/tables) in HANA Studio and how we can create/change those objects there. This is the first part in a series of posts which specifically targets our SAP technical folks, our ABAPer community.
As an ABAPer, I had these queries. I am sure many of my ABAPer friends would have similar, if not the same, questions. Hope these answers add some light to your existing knowledge about ABAP on SAP HANA. By the end of the last question of this article, you will have created your first ABAP program from HANA Studio and executed it successfully to view the output.
Questions:
1. What is HANA Studio and what is the need of HANA Studio?

2. What is ADT and what is the need of ADT?

3. Seems ADT does the same thing like SE80 T-code. Then why do we really need ADT?

4. What are perspectives in HANA Studio?

5. In which perspective can we create/change/display ABAP programs?

6. How do we view ABAP programs in HANA Studio?

7. Can we edit the same program in GUI and in HANA Studio?


8. How can we write ABAP programs using ADT?

The explanations below are as per our understanding. We would like to appeal to our experienced ABAPers to throw some more light on the answers below if they know more about the topics.
Q: What is HANA Studio and what is the need of HANA Studio?
1. HANA as a database has evolved manifold in the last few years. In order to keep pace with these hardware and software innovations, HANA Studio was introduced. HANA Studio provides the right environment for HANA administration, modeling and data provisioning.
The Studio is needed so that developers can create models, procedures etc. using the Eclipse-based tool in HANA. The Studio is also used to develop SQLScript application logic that pushes data-intensive queries and logic down to the HANA database and improves the overall performance of the system.
The Studio also provides monitoring and other tracing facilities.
Also, the Studio gives freshness to developers who were bored with the blue GUI screen editor. (On a lighter note.)

Q: What is ADT and what is the need of ADT?


2. ADT stands for ABAP Development Tools. ADT provides an Eclipse-based ABAP Integrated Development Environment (IDE).

ADT does not come by default. It has to be installed as a plugin on Eclipse (in the Studio -> Help -> Install New Software).

We need ADT because, with ADT in Eclipse, HANA Studio becomes super powerful. You can connect to different ABAP systems from a single Eclipse user interface. Isn't it cool? With the ABAP perspective in the Studio, you can implement end-to-end in-memory solutions in the Studio with the same UI.

One entry point and multiple benefits. Don't you like it?

Q: Seems ADT does the same thing like SE80 T-code. Then why do we really need ADT?
3. You are right. Both ADT and SE80 have the same source code repository and locking mechanism, and thus both complement each other. But ADT is more powerful than SE80. Some advanced features, like creating external views for exposing HANA views to the ABAP DDIC and creating database proxy procedures, are available only when using ADT. SE80 has been with SAP from birth. ADT is new and already has some enhanced features. SAP is continuously working on more exclusive features which will be possible only from ADT in future. So, ADT is the future.
Q: What are perspectives in HANA Studio?
4. In layman's terms, perspectives are predefined layouts for different roles. For example, we have the ABAP perspective for ABAP developers, the Java perspective for Java developers, the Debug perspective for debugging, the Modeler perspective for modeling in SAP HANA, the Administration Console for admin tasks, etc. So every member of the team uses the perspective as per their job role, responsibilities and the activities they need to perform.
Q: In which perspective can we create/change/display ABAP programs?
5. You guessed it right!! ABAP perspective. So obvious, right?

Bonus question: Can we directly write and execute an ABAP program in HANA Studio?
No, it has to be connected to an ABAP system first. So, what helps Eclipse to connect to an ABAP system? The answer is the ABAP Project. Did you expect this answer? The ABAP Project helps to connect the Eclipse-based IDE to the ABAP back-end system. The project provides Eclipse-based frameworks for creating, processing and testing development objects.

In short, an ABAP Project represents a system logon and contains all ABAP development objects of the related system.
Check this image below. Project S4H_800_SIMPLE3_SAPYARD is our project which is
connected to our S4H system.

Similarly, we can have multiple projects pointed to multiple systems from one HANA
Studio UI.
Q: How do we view ABAP programs in HANA Studio?
6. Check, there is an ABAP program YSAPYard in the ABAP system (left side). We can see the same program from our project, which is connected to the same ABAP system. Expand the System Library and go to your custom package and program.
Check, we can view the same program in the GUI and in HANA Studio.
Q: Can we edit the same program in GUI and in HANA Studio simultaneously?
7. No, we cannot edit the same program simultaneously. Both ADT and SE80 use the same source code repository and locking mechanism (as mentioned in answer 3); therefore, one cannot edit the program while the other is editing it at the same time.

You get the below error in ADT if you try to edit the already opened program (in GUI).

Finally, the much-awaited question from the ABAPers...

Q. How can we write ABAP programs using ADT and execute them?
8. Select the Package where you want to save your program. Right-click on it and select
ABAP Program.
Give the name and description of the program. Do not forget, the Z* or Y* naming convention
holds good even while creating custom objects from ADT.

You need to choose the transport where you want to save your program.

Write your program, check the syntax and activate it. Most of the icons are similar to the GUI.
Done, your program is ready in Studio. Actually, you created the program just like in SE38/SE80; only the front end was different. You can go to your ABAP system and check that the new program exists there.
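For reference, such a dummy report could look like the minimal sketch below (the program name and the output text are our own illustration, not the exact code from the screenshots):

*&---------------------------------------------------------------------*
*& Report ysapyard_first_adt (hypothetical name, for illustration only)
*&---------------------------------------------------------------------*
REPORT ysapyard_first_adt.

START-OF-SELECTION.
  " A trivial output, just to prove that editing, activation and the
  " Run button in ADT work end to end.
  WRITE: / 'Hello from ADT in Eclipse/HANA Studio!'.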

Execute the RUN icon in HANA studio and your program would show the output.

Congrats, you created your first program in SAP HANA and executed it successfully. Although
this was a dummy program, in actual projects as well, the process remains the same.
In subsequent posts, we would build real-time programs, learn about ABAP trace in SAP
HANA Studio, Debugging in ADT, Optimized access on internal tables, Code Inspector, SQL
Monitoring, ADBC, AMDP, CDS etc.
ADT Eclipse and HANA Studio

ADT Eclipse/HANA Studio for ABAPers


In the SAP ABAP on HANA Part I, we talked about some common questions and answers. We
also created our first program in HANA Studio. In this article, we would get accustomed with
the HANA Studio screen, various buttons and icons. How/Why to use them and also we would
try to correlate the functionalities of HANA screen icons to that of classic GUI icons.
HANA Studio Toolbar
Most of the icons are self-explanatory.
Open ABAP Development Object: The system-wide search for development objects is
possible.
Search: The workspace-wide search for development objects is possible. With this function,
we can search for ABAP development objects across all usable ABAP projects.
A. What is NOT there in ADT Eclipse/HANA Studio, which was available in ABAP GUI editor?
1. Change/Display Icon
We were not able to figure out the change/display toggle icon in HANA Studio ABAP editor.
Whenever we opened the program in our development system, it opened in change mode. If
any reader knows about the change/display icon (or shortcut) in Eclipse, please do mention it in the comment section or email us and educate us all.
2. Pattern Icon
Another significant button which we could not figure out in HANA Studio is the Pattern Icon.
When we want to auto-generate an FM/class/method call or any custom pattern, we are so habituated to using the Pattern icon in the ABAP editor. We were a little surprised not to find this commonly used button. But you need not be disappointed. Type the initial letters of the syntax you want to use and then press Ctrl + Space and Shift + Enter to insert the full signature (e.g. for the selected function module / method).
3. Pretty Printer
There is no pretty printer icon. How would developers impress their team leads and quality
reviewers without the pretty printer? Do not worry; the pretty printer button might not be there, but the functionality still exists. Go to Windows -> Preferences -> ABAP Development -> Source Code Editor -> Formatter to set up your formatting needs.

You might not see the Formatter option upfront. You need to click on Source Code Editors.
Then you would see settings for different options (number 5 in above image)
and Formatter is one of them.
Once you set the format, Shift + F1 is the shortcut for the desired formatting.
So, Pretty Printer in GUI = Shift + F1 in HANA Studio

These are some commonly used icons/buttons which are missing in Eclipse ADT. Please note, this is not an exhaustive list.
B. Common or near-similar features in the vanilla ABAP editor and the Eclipse/HANA Studio ABAP editor
1. Outline View
Let us start with the Outline View in HANA Studio. Check the outline view on the lower left
corner of the studio.

The Outline view displays the internal structure of the program or class that is currently open in the ABAP source code editor. The Outline view is analogous to the object list area of the ABAP editor in SE80. Just like clicking an element in SE80 takes you to that element in the main program, the outline is synchronized with the contents of the editor. Hence, when an element in the Outline view is selected, we can navigate quickly to the corresponding position in the ABAP source code.

Just like in SE80 editor, for each element in Outline View in Studio, we can navigate to the
declaration part in the source code editor or the implementation part (e.g. in the case of
methods of a class).

2. Keyword Completion/Suggestion
Just like in the GUI, the ABAP editor in HANA Studio suggests keywords as you type the syntax. The GUI shortcut Ctrl + Space holds good in Eclipse too.
3. Where-Used List
This powerful feature is still available.

4. Revision History.
Like in the GUI ABAP editor, we can compare changes in the source code from one transport to another in ADT. Right-click on the source code area of the program and choose Compare with -> Revision History.
5. Transport Organizer
The Transport Organizer in ADT for Eclipse enables ABAP developers to perform the below transport-related operations through Studio.
i) Adding user to Transport request (TRs)
ii) Changing owner of TRs and tasks.
iii) Checking consistency
iv) Releasing and deleting TRs
Right-click on the transport to see all the activities you can perform. One example of adding a
user under an existing transport is shown below.

Limitation of Transport Organizer in HANA Studio:


Transport requests CANNOT be created within the Transport Organizer view of ABAP Development Tools. But, if you create/edit an object (say a table/program/FM/package etc.) in Studio ADT and it asks for a new transport, you can create the new transport within that corresponding wizard. Remember, transports can be created from those wizards and not standalone from the Transport Organizer.

Advantage of Transport Organizer in HANA Studio:


The Search option in the Transport Organizer. Any object (table/program/FM/package etc.) can be searched for, to check the TR and task it belongs to. In the SAP GUI transaction SE10, we cannot search that easily; although we have other t-codes and ways to figure it out, the ease and user experience of the Transport Organizer in HANA Studio is unmatched.

C. The new features in Eclipse/HANA Studio ABAP editor, which were not available in GUI
editor.
1. Syntax Error Marker:
Check the red cross on the left side of the code editor. This feature warns you of any erroneous lines while you are typing your program, even before you hit the syntax checker. This comes in really handy for ABAPers to get the syntax right as and when they type.

2. Local code comparison:


Compares current saved version with the selected saved version.
Right click on the code area of the program and choose Compare with -> Local History.
Choose one of the previously saved versions and see the comparison. This local change history gives the comparison between saves in ADT, showing changes in the code as stored in the local workspace.

Check, the code difference can be so easily identified.


3. Rename elements/texts.
Although we have the Find and Replace (Ctrl + H) option in SAP GUI, Eclipse-based ADT has a better renaming experience. Just right-click in the source editor and select Rename, or hit Alt+Shift+R, to open the rename wizard.

Select the element you want to replace and hit Rename (Alt+Shift+R), give the new name for
the element and hit Next.
Before it finishes, it would show the Original code and the new code after the change. It
would also show all the lines which would be changed.

Hit Finish and the element is renamed throughout the entire source code.

Summary of some prominent misses and inclusions

Debugging in ADT
ABAP Debugging using ADT (Eclipse/HANA Studio)
We made ourselves comfortable with the HANA Studio screen, icons and buttons. In this
article, we will get exposed to Breakpoints and Debugging in ADT. If you have been working as
an ABAPer for some time, it would not take much time for you to get familiar with
the Debugger in ADT (Eclipse/HANA Studio). It is the same wine in a new bottle. The ABAP debugger has been completely integrated with Eclipse since Kernel 7.21, SAP Basis 7.31 SP4.
All the standard debugging features which were earlier available in the GUI editor are also available in Eclipse, such as:
i) Set breakpoints
ii) Step through the code
iii) Display and change variable values
iv) View internal tables
v) Monitor the call stacks
Salient properties of ADT breakpoints:
The breakpoints in ADT are user external breakpoints, so:
i) They are valid in your ABAP project
ii) They apply to programs running under your ABAP user
iii) They are effective on all application servers in the backend system
Two types of breakpoints in ADT:
1. Static Breakpoint
Static breakpoints are set at a particular line of the code. A static breakpoint stays with the line of code at which you set it; if you delete code lines above the breakpoint, it slides along with the relocated code line.
2. Dynamic Breakpoint
Dynamic breakpoints are determined at run time. They are triggered when the running
program reaches a particular ABAP statement e.g. loop, perform, select, calls, submits etc.

Please note: dynamic breakpoints take effect for all programs that run under your user. You need to be careful to remove a dynamic breakpoint once you have finished your analysis. Otherwise, it would stop in any application where the dynamic breakpoint condition is met. And we are sure you do not want speed breakers on a highway. We can always limit the scope of dynamic breakpoints to the scope of the debugger.


Advantage of ADT debugger:
One feature of the debugger in ADT is that you can work with the source code in debug mode just as you work in the ABAP perspective. That means, when you see a bug in the code during debugging, you can correct your code in the same editor on the same screen, unlike the traditional GUI debugger, where you need to go to SE38/SE37/SE80 etc. in a separate session to change the code.


Hands On Section:
Enough of preaching!! Well, above are the theories and I am sure you would be more
interested in looking at the actual screens. Let us have a quick look at the Debugger screen
and substantiate our understanding.
1. Check the icons/buttons which you can see during debugging:

All the buttons are self-explanatory.

Resume button : Run to the next breakpoint or to the end of the program.
Terminate button : Abort the execution of the program in the debugger. Program execution
ends.
Disconnect button : Run to the end of the program, ignoring any intervening breakpoints.
Step Into (F5) button : Execute the next single ABAP instruction in the program in the
debugger.
Step Over (F6) button : Execute the next ABAP statement. If the next step is a procedure call,
run the entire procedure.
Step Return (F7) button : Run until the current procedure returns to its caller or until the
program ends.
Run to Line (Shift F8) button : Run to the statement on which the cursor is positioned. Whether breakpoints in between are respected is controlled in Windows -> Preferences -> ABAP Development -> Debug.
2. Put Static Breakpoint
Double click on the area shown below or right click and choose Toggle Breakpoint or
press Ctrl + Shift + B.

3. Execute the program


You would get this pop-up. Select OK and continue. The debugger stops at the breakpoint.

4. Check the Variables view, Debugger editor, Breakpoints view, Debug perspective etc
You can change the values of variables at the run time as you used to do in ABAP GUI
debugger. You can also move the cursor over the variable to display its value.
5. Check the ABAP Internal Table (Debugger) view
Double click on the internal table name and see the values in the internal table view.

You can also right click on the internal table name and choose Open Data Preview to see the
values of the internal table.
6. Let's set a Dynamic Breakpoint

Go to the Breakpoints view and add dynamic breakpoints at the statements you need. Type the statement in the search area and pick your dynamic statements.

See two examples of dynamic breakpoint below.


7. Manage the Breakpoint Properties of a particular breakpoint
Manage breakpoints using Breakpoints View. Right click on the breakpoint and choose the
Breakpoint Properties and choose the restriction you want.

8. Manage the Debug Properties for the user/session


You can change the user for which external breakpoints are effective. Breakpoints can also be effective for the entire project, independent of the user.
Core Data Services

Let us start our encounter with Core Data Services (CDS) Views with questions and answers.
Before we explain what a CDS View is, let us ask: why CDS Views?
Question: Why do we really need CDS Views?
Answer: According to SAP, CDS Brings Conceptual and Implementation Level Closer Together.
What does this mean?
Say our requirement is to get the id, name and the respective zip code of the home address for all employees in org_unit 4711.
In order to meet this requirement, SQL developers write the below SQL.

The issue with the above SQL: Large Semantic Gap between Requirement and SQL Code.
If you are not an experienced SQL developer, you would find it complex/difficult to
understand the meaning/semantic of the SQL. Therefore SAP wanted something simpler and
better. This is one motivation for CDS.
Being an ABAPer you find the above SQL complex and you decide to write your own Open SQL
in ABAP.

Issue with the above Open SQL: SQL complexity leads to imperative code (code written as instructions/statements that change state; imperative programming focuses on describing how a program operates).
There are performance concerns in the above Open SQL: loops within loops and nested queries with many round trips are not advisable. This is another motivation for CDS.

Now, let us see how CDS would do the same task.

Same requirement: Get the id, name and the respective zip code of the home address for all
employees in org_unit 4711.

With CDS, SQL developers see little or no semantic gap and ABAPers do not need any coding. You get the result directly from the CDS view. Isn't this motivation enough?
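Purely for illustration, a CDS view for this kind of requirement could look roughly like the sketch below. The entity and field names (zemployees, zaddresses, etc.) are our own assumptions, not the demo tables from the screenshots:

// Hypothetical CDS view: employees of org_unit 4711 with the zip code of their home address
@AbapCatalog.sqlViewName: 'ZEMPHOMEZIP'        // hypothetical SQL view name
@EndUserText.label: 'Employees with home zip code'
define view Zcds_Employee_Home_Zip
  as select from zemployees as emp             // hypothetical employee table
    inner join   zaddresses as adr             // hypothetical address table
      on emp.home_address_id = adr.address_id
{
  key emp.employee_id,
      emp.name,
      adr.zip_code
}
where emp.org_unit = '4711'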


Question: We already have Database Views in ABAP (SE11), then why do we still need
CDS views? Or, What are the advantages of using CDS views?
Answer: CDS is much more powerful than what it appears. The CDS concept is far more than
simple view building but describes a DDL for building a meta-model repository involving
database tables, database views, functions, and data types.
CDS was invented by SAP because the modeling capabilities of the ABAP Dictionary and of SAP HANA Studio are not sufficient for the needs of fully fledged business applications with modern requirements.

With HANA CDS, CDS is available for SAP HANA in the SAP HANA studio. With ABAP CDS,
the CDS concept is also made available for the AS ABAP, where the features of CDS surpass
the modeling capabilities of SE11. ABAP CDS is open and not restricted to SAP HANA (i.e.
database independent).
If we need meta-models for our application, that can be built with CDS, then we need CDS
views.

Question: OK, we read above that CDS was invented to facilitate needs which ABAP Dictionary
and HANA Studio could not meet. So, what are the types of CDS Views?
Answer: There are two types of CDS Views.
1. ABAP CDS
2. HANA CDS
Check the details in CDS One Concept, Two Flavors
Also, CDS Views can be categorized as of two types:
1) CDS Views without Parameters
2) CDS Views with Parameters
Question: Why was CDS introduced? (same question in a different way)
Answer: With CDS, data models are defined and consumed on the database rather than on
the server. CDS also offers capabilities beyond the traditional data modeling tools, including
support for conceptual modeling and relationship definitions, built-in functions, and
extensions. Originally, CDS was available only in the design-time and runtime environment of
SAP HANA. Now, the CDS concept is also fully implemented in SAP NetWeaver AS ABAP,
enabling developers to work in the ABAP layer with ABAP development tools while the code
execution is pushed down to the database.
Question: Finally, What is Core Data Services?
Answer: CDS is an infrastructure layer for defining semantically rich data models, which are
represented as CDS views. In a very basic way, CDS allows developers to define entity
types (such as orders, business partners, or products) and the semantic relationships
between them, which correspond to foreign key relationships in traditional entity-
relationship (ER) models. CDS is defined using an SQL-based data definition language
(DDL) that is based on standard SQL with some additional concepts, such as associations, which define the relationships between CDS views, and annotations, which direct the domain-specific use of CDS artifacts. Another example is expressions, which can be used in scenarios in which certain CDS attributes are considered as measures to be aggregated.
Similar to the role of the DDIC in the traditional ABAP world, data models based on CDS serve
as central definitions that can be used in many different domains, such as transactional and
analytical applications, to interact with data in the database in a unified way. However, CDS
data models go beyond the capabilities of the DDIC, which were typically limited to a
transactional scope (think of traditional online transaction processing functionality). For
example, in CDS, you can define views that aggregate and analyze data in a layered fashion,
starting with basic views and then adding powerful views that combine the basic views.
Another difference is the support for special operators such as UNION, which enables the
combination of multiple select statements to return only one result set.

CDS artifacts are stored in the DDIC and can be accessed in ABAP programs via Open SQL in
the same manner as ordinary ABAP tables or views.

In simple words:
Core Data Services is a new infrastructure for defining and consuming semantically rich data models in SAP HANA. Using a data definition language (DDL), a query language (QL), and an expression language (EL), CDS is envisioned to encompass write operations, transaction semantics, constraints, and more.
We can use the CDS specification to create a CDS document which defines the
following artifacts and elements:

Entities (tables)
Views
User-defined data types (including structured types)
Contexts
Associations
Annotations

Question: When do we need CDS Views?


Answer: It depends on reusability. If the functionality of a view is only needed once, then no
need to create CDS Views. We can use Joins, SQL expressions, subqueries etc in Open SQL for
this code push down. But if we want to reuse a view, need semantical or technical
capabilities of CDS that exceed those of Open SQL (but we try to keep the technical
capabilities on the same level, e.g., CDS knows UNION, Open SQL will know UNION with an
upcoming release) or we just want to push down the full data model to the database, we
need CDS.
Question: What is the fundamental difference between HANA CDS and ABAP CDS?
Answer: The subtle differences between CDS in native SAP HANA and CDS in ABAP lies in the
view definition. In both the ABAP and HANA scenarios, views are created on top of existing
database tables that are contained in the DDIC. With CDS in native SAP HANA, we must
create the basic entity types that correspond to the DDIC tables as part of the CDS view
definition. With CDS in ABAP, we can refer to any underlying DDIC table, view, or type from
within the CDS view definition, avoiding the need to duplicate the DDIC table definitions
on the CDS layer. In the ABAP scenario, the CDS definitions are considered DDIC artifacts and
need to be activated like any other DDIC artifact and when changes are made, their impact is
propagated to dependent artifacts.
Question: Which is preferred, ABAP CDS or HANA CDS, if the client is on ABAP on HANA DB?
Answer: If you use ABAP on HANA DB, you can work directly on the DB and also use HANA CDS
there. But then the CDS objects created are not managed by the ABAP Dictionary meaning
you cannot access them directly with Open SQL and they are not TYPEs in the ABAP TYPE
system.
Question: When should we use ABAP CDS and when should we use HANA CDS?
Answer: If you run SAP HANA standalone or in a side-by-side scenario (there is no ABAP stack
on top) you cannot use ABAP CDS. You must use HANA CDS.
If you have an ABAP stack on top of a HANA database (an AS ABAP uses the HANA database as
central database) then:

i) If you want to access the CDS entities in ABAP as data types or in Open SQL or if you want
to evaluate the CDS annotations in ABAP, you must use ABAP CDS.

ii) If you do not want to access the CDS entities in ABAP, but you want to transport and
upgrade them like ABAP repository objects, you can use ABAP CDS.

iii) If you do not want to access the CDS entities in ABAP as data TYPEs or in Open SQL, you
can use HANA CDS, which is better integrated into SAP HANA. An access from ABAP is then
possible using Native SQL (ADBC, AMDP) only.

Question: Can we consume ABAP CDS natively in HANA?


Answer: Yes we can. For each CDS view a database view (SQL view) is created in the
database during activation. We can access that database view natively if we want to. CDS
table functions are managed by AMDP. The respective database functions can also be
accessed natively.
Question: Is it also possible to access the database views (generated by having a
corresponding ABAP CDS view) in HANA natively and simultaneously consider the
authorization logic defined in the corresponding DCL?
Answer: Yes. Open SQL checks the authorization implicitly but is of course translated into
native SQL code doing that on DB level (implicit conditions). Same for the SADL framework
that checks the authorizations itself natively. The problem is that you need to have access to
the internal role representation which is not published and subject to change or you have to
build a framework yourself that parses the role definition and creates the corresponding
conditions.
Question: How can we find all CDS views in SAP?
Answer: Check the table TADIR in SE16; PGMID = R3TR, OBJECT = DDLS; here we find all
DDL sources and the package of each source in column DEVCLASS. Knowing the package, we
can use ADT (ABAP Development Tool in HANA Studio) to find the DDL sources in ADT.
Examine table DDLDEPENDENCY in SE16; it contains the names of all DDL sources, the names of the CDS entities defined therein (value STOB in column OBJECTTYPE), and the names of the generated database views (value VIEW in column OBJECTTYPE); one row for each, i.e. two rows per DDL source. Selecting VIEW for OBJECTTYPE gives you all CDS database views.
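If you prefer to do this from an ABAP program instead of SE16, a small sketch like the one below could list the DDL sources and their generated objects. It assumes the column names mentioned above (DDLNAME, OBJECTNAME, OBJECTTYPE in DDLDEPENDENCY) and is only meant as an illustration:

" List all CDS DDL sources with their package and generated objects (a sketch).
SELECT tadir~obj_name   AS ddl_source,
       tadir~devclass   AS package,
       ddep~objectname  AS generated_object,
       ddep~objecttype  AS object_type      " STOB = CDS entity, VIEW = DB view
  FROM tadir
  INNER JOIN ddldependency AS ddep
    ON ddep~ddlname = tadir~obj_name
  WHERE tadir~pgmid  = 'R3TR'
    AND tadir~object = 'DDLS'
  INTO TABLE @DATA(lt_cds_views).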

Now let us try to open the DDL source of the CDS in SE11.

Check it would prompt us to go to ADT Tools to view it.

Now, let us open the DDL SQL View of the CDS. Note the warning below which says DDL SQL
views are only supported in a limited way by SE11.
Having one name is just not good enough in CDS; we need two names.

One name is for the SQL view that is going to be created in the dictionary (the one we will be
able to look at in SE11), and the other name we have is a name for the CDS view entity,
which is viewed and changed via Eclipse.

PS: We could name both the SQL view and the CDS view the same, but we should not as they
are different things, so the name should reflect the difference.

SQL view is visible in SE11, however, we cannot edit it in SE11.

The CDS View entity is the one we should refer to in SELECT statements in our ABAP programs. Although we can use the DDL SQL view in our programs, we should not.
Question: How can we use CDS views?
Answer: Basically, a CDS View is an entity that can be addressed by its name:
in ABAP as a TYPE
in Open SQL as a DATA SOURCE


Seeing a CDS View in SE11 is kind of a technical artifact and we should not address the
database view that is shown there in our ABAP programs. From SE11 you can also navigate to
the database object that is generated from the definition. This database object can even be
accessed directly with Native SQL.

This means we can access our CDS Views directly in ABAP programs or from elsewhere. For
evaluating the semantic properties (annotations) of a CDS View (stored in system tables) we
should use an appropriate API (CL_DD_DDL_ANNOTATION_SERVICE if available in your system).
The database views created from the CDS source code are merely for technical reasons.
The CDS source code and the CDS entity defined there should be the real thing.
Question: What are the Salient Features of CDS?
1. Semantically rich data models
2. Domain-specific languages (DDL, QL, DCL)
3. Declarative, close to conceptual thinking
4. CDS is completely based on SQL
5. Any standard SQL feature (like joins, unions, built-in functions) is directly available in CDS
6. Fully compatible with any DB
7. Generated and managed SQL views
8. Native integration in SAP HANA
9. Common basis for domain-specific frameworks, e.g. UI, Analytics, OData, BW (e.g. @AnalyticsDetails.aggregationBehaviour: SUM)
10. Built-in functions and code pushdown
11. Table functions for breakout scenarios
12. Rich set of built-in SQL functions
13. Extensible:
    - on model level through extensions
    - on meta-model level through annotations

Summary of Core Data Services


SAP claims that whereas a traditional database view is just a linkage of one or more tables, a
CDS view is a fully fledged data model, which, in addition to having extra features that SE11-
defined views do not, can be used even by applications outside of the SAP domain.

Note: We cannot do OUTER JOINs in an SE11 database view (just one limitation, to point out something which CDS can overcome).

Technically, CDS is an enhancement of SQL which provides us with a data definition language
(DDL) for defining semantically rich database tables/views (CDS entities) and user-defined
types in the database.
The enhancements include:
i) Annotations to enrich the data models with additional (domain specific) metadata. An
annotation is a line of code that starts with an @ sign.
ii) Associations on a conceptual level, replacing joins with simple path expressions in queries
iii) Expressions used for calculations and queries in the data model
CDS views, like the well-known dictionary views created and maintained in transaction SE11,
are managed by the ABAP data dictionary. During activation, a database view is created on
the HANA layer, yet only the ABAP CDS view (defined in a so-called DDL source) has to be
transported via the ABAP Change and Transport System (CTS). Moreover, the functionality
provided by CDS views can be used on all SAP-supported databases, so we don't have to worry when transporting these objects in a heterogeneous system landscape.
CDS views are entities of the ABAP CDS in the ABAP Dictionary that are much more advanced
than the classical SE11 views. We can influence CDS views with parameters that can be used at different positions of the DDL. As for classical SE11 views, a platform-dependent runtime object is generated at the database for a CDS view, which we can examine in SE11. When accessing a (CDS) view with Open SQL (i.e. ABAP), the database interface accesses this runtime object. A CDS view is created with a source-code-based editor in Eclipse using a DDL (which has nothing to do with SQLScript).
For technical reasons, from the source code a classical DB view is generated in SE11 that we
can access like any classical view, but we shouldn't. Instead, the so-called CDS entity should be accessed, because it carries more meaning than the mere technical DB view and involves a new kind of client handling.
PS: In an upcoming release, the direct access to the DB view of a CDS view will be declared
as obsolete. So, better not to use them if it can be avoided.
We use CDS to model large parts of our application in the Dictionary and use simple Open SQL SELECTs in ABAP for relatively straightforward joins and subqueries. Some day Open SQL might have the same power as CDS, but that does not mean the two are redundant. Already
before CDS, we had the choice between creating a reusable view in SE11 or programming a
join in Open SQL in ABAP. As a rule of thumb, we created a view if it is used in more than
one program and programmed a join when we needed it only once. That is very similar for
CDS, but with much more possibilities for modeling semantically rich models for reuse in
ABAP programs.

CDS is open. It is not restricted to HANA (but performance can be different in different DB).

Deep Dive into CDS Views

There are two components of a CDS View in HANA.

DDL SQL View: a read-only classical database view which is visible in the ABAP Dictionary (SE11). It cannot be edited in SE11.
CDS View Entity: the DDL source file and the actual CDS view. It is a database object which is visible in Eclipse/HANA Studio/ADT; we cannot view the CDS View Entity in SE11. It covers the CDS database view and makes other attributes possible, such as authorization checks defined in the CDS view.

Before I show how a CDS View is created in HANA ADT, let me start with how a CDS View can be deleted.

Question: Do we need to delete both the Dictionary DDL SQL and CDS View individually?
Answer: No.

Question: Can we delete DDL SQL to delete the CDS View?


Answer: No.

Check the below image, I am trying to delete the DDL SQL which is created when CDS View is
created.
HANA does not allow me to delete this independently. Generated DDL SQL views cannot be
deleted.
So we are left with CDS View entity. And you guessed it right. Check the below images, we
can delete CDS View entity.

Question: What happens to DDL SQL View when CDS View (DDL Source) is deleted?
Answer: They are twins. They cannot be separated even by death.
DDL SQL is automatically deleted when the CDS View is deleted. Check the image below, both
are deleted in one go.

Now, let us see how we can create a CDS View. There are ample tutorials available on this
topic. We will keep it short and show what is new in the below image.
In other tutorials, you would see that the DDL Source sits below Dictionary. In our image above, check that it is below the Core Data Services folder. HANA and SAP are evolving at a great pace. We need to keep up with them.
The above images are self-explanatory. Let us pause at the final step of the wizard. As of
now, SAP is kind enough to provide six templates for creating the CDS View as per our
need. ABAPers feared they might not be able to learn SQL and remember the syntaxes. Look,
SAP already thought for us. ABAPers are not going anywhere.
In most of the tutorials on CDS Views in other blogs, you might have seen only the first 5 templates. You would now also find the sixth template, Define Table Function with Parameters. The SAP HANA innovation team is really fast (like their in-memory system).

When you actually get a chance to get your hands dirty in HANA ADT, do not be surprised if you find more than 6 templates.

Let us select the first template, Define View, and hit the Finish button.

Here the system expects us to christen our DDL SQL View. We also need to provide the data_source_name (i.e. the table or view from which data would be selected). As pointed out in the previous article, it is a good idea to keep the SQL View name and the actual CDS View name separate. For consistency, we prefix the SQL View name with DDLS and the CDS View name with CDSV. You might have a different naming convention in your project.
For our example, the SQL View name is YDDLS_WO_STAT and the CDS View is YCDSV_WO_STATUS.

What is the maximum length of the name which we can give to the SQL View Name?

Look at the first four auto-generated lines. They are preceded by @. They are called Annotations.

Additional information and properties can be specified for CDS Views using Annotations. For example, the @ClientDependent annotation lets us set whether the CDS View
is Client Dependent or not. In the above example, it is client dependent (by default).
Annotations also can be used for specifying the Buffer Status (switched on/off) and Buffer
Type (single/generic/fully) of the CDS View.

Annotations enrich the data models with additional (domain specific) metadata.

In layman's words, Annotations are CDS extensions to SQL.

Annotation AbapCatalog.sqlViewName is mandatory for the definition of a Core Data Services (CDS) view.

Also, check the Outline window in the left corner. It shows the CDS view's breakup: source data table/view, CDS view key and field list.
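The actual view definition is visible only in the screenshots. As a hedged reconstruction, assuming JCDS as the data source (which the later join and parameter examples suggest) and a field list of our own choosing, it could look roughly like this:

// Hedged reconstruction of the DDL source (not the exact original).
// The auto-generated annotations can differ by release.
@AbapCatalog.sqlViewName: 'YDDLS_WO_STAT'
@ClientDependent: true
@EndUserText.label: 'Work order status'
define view YCDSV_WO_STATUS
  as select from jcds
{
  key objnr,
  key stat,
  key chgnr,
      udate,
      inact
}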

Now, let us try to open the CDS View entity in SE11.


Oops, it is not meant for Data Dictionary.

Open the DDL SQL CDS View in SE11.

No problem to view it. We can even display the data pulled by the view.
Transports for CDS View

Ok, while creating the CDS View, it asked for the transport where we wanted to save our
generated objects. What do you think, did both DDL SQL View and CDS View entity get
saved in that transport? Or do you think otherwise?

Let us check it for ourselves.

You can see, only the CDS View entity is saved in the transport.
All change objects and transports are managed in the ABAP layer end to end. We do not need
to go to the lower underlying database (HDB) level to manage the transport of CDS
entities/artifacts.
Join in CDS View

While creating the new CDS View, let us select the Define View with Join template. As
discussed, we need to type our ABAP Dictionary (DDL) SQL View name. In addition, we need to
replace the auto-generated data_source_name and joined_data_source_name along with their element names.
For our example, we have joined the status table with the status text table. The join works the same way as we have been doing it in ABAP.
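As a rough sketch of what such a joined view could look like (the names and the field list are our own assumptions; the real definition is in the screenshot), assuming the status table JCDS joined to the status text table TJ02T:

// Hypothetical joined view: status entries with their texts
@AbapCatalog.sqlViewName: 'YDDLS_WO_STXT'      // assumed SQL view name
@EndUserText.label: 'Work order status with text'
define view YCDSV_WO_STATUS_TEXT
  as select from jcds
    inner join   tj02t
      on jcds.stat = tj02t.istat
{
  key jcds.objnr,
  key jcds.stat,
  key tj02t.spras,
      tj02t.txt04,
      tj02t.txt30
}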
Check the output of the Joined CDS View.

Parameters in CDS View

ABAPers are familiar with the term Parameter. Just like we can have PARAMETERS on the selection screen of a report, we can have parameters on CDS Views. Do not be too optimistic, though; we do not have SELECT-OPTIONS in CDS Views as yet.

We know that a parameter helps to filter the data and is useful in the WHERE clause.

CDS View with Parameters is normally created to filter the data during selection process at
database level itself (i.e additional filtration is done using CDS View with Parameters). So,
there is no need to put additional filtering (where condition) at ABAP Layer. Code to Data
shift (one of the motivations of Core Data Services).

Let us see how we can define a CDS View with Parameter.


Choose the template Define View with Parameters and provide the DDL SQL View name (data
dictionary) and data source name as done in above examples. In addition to that, provide
the parameter name and parameter type. We can have multiple parameters in a CDS View,
separated by a comma.
Check the usage of parameters in the above image. If we define a CDS View with multiple parameters separated by commas, we can use multiple parameters in the WHERE clause, combined with AND/OR. Also, note that the $parameters. prefix needs to be used when referring to a parameter in the WHERE clause.

with parameters p_stat: j_status,
                p_lang: spras
...
WHERE jcds.stat = $parameters.p_stat and tj02t.spras = $parameters.p_lang;
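Putting the fragment above into context, a complete parameterized view could look roughly like the sketch below. Only the parameter and WHERE lines come from the original; the view name follows the consumption snippet shown later, and the SQL view name and field list are our assumptions:

// Hedged reconstruction of the view with parameters (not the exact original)
@AbapCatalog.sqlViewName: 'YDDLS_WO_PARA'      // assumed SQL view name
@EndUserText.label: 'Work order status text with parameters'
define view ycds_wo_stat_txt_para
  with parameters p_stat: j_status,
                  p_lang: spras
  as select from jcds
    inner join   tj02t
      on jcds.stat = tj02t.istat
{
  key jcds.objnr,
  key jcds.stat,
      tj02t.txt04,
      tj02t.txt30
}
where jcds.stat   = $parameters.p_stat
  and tj02t.spras = $parameters.p_lang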

Let us see the DDL SQL View (data dictionary) for this CDS View with parameters and try to display the content output for this CDS View.

Oops. Data display for views with parameters is not yet supported. Hopefully, SAP will come up with this option too in the near future.

There are other templates like View with Association, Extend View and Table Function with
Parameters. We can cover them some other day. If you cannot wait, please check this external link, which has an exhaustive demonstration of different CDS Views and their capabilities.

Usage of CDS View in ABAP Programs


The last thing we want to cover today is how to consume a CDS View in an ABAP program.

We can use the CDS View like any other table or data dictionary view in ABAP. I found the usage of a CDS View with parameters a little tricky. Please check the below code snippet for the usage of a CDS View with parameters. Let me accept up front that the below program does not show the real power of CDS Views; it is only for demonstration.

SELECT * FROM ycds_wo_stat_txt_para( p_stat = @p_status ) INTO TABLE @i_wo_status.

You would notice that the @ symbol is used for escaping host variables. It helps to identify ABAP work areas/variables/constants in an Open SQL statement. Literals need not be escaped using @. If we decide to escape one host variable, all host variables must be escaped.

Also, we can select from both the DDL SQL View and the CDS View. So, we need to declare the internal tables/work areas according to the view we intend to use. Although the DDL SQL View and the CDS View are mirror images, we still cannot use them interchangeably in TYPE declarations in the program.

Question: In our previous article, we suggested that SE11 Data Dictionary DDL SQL View
should not be normally used. Why?
Answer: If we consume the DDL SQL View in an ABAP SELECT statement, it will act like any other normal view/table created in the data dictionary using SE11. We would not be taking real advantage of HANA and would not see the performance improvement.
Theoretically, when the DDL SQL View is used, a database connection from the ABAP layer to the database layer is established, and this process would consume some resources for the database connection (even though your database is HANA).

Question: Why is it good practice to use CDS View Entity (DDL Source) while using ABAP
SELECT statement?

Answer: By now we have a fair idea that the CDS View Entity (DDL source) is a database object which is known to the ABAP layer and does not exist in the data dictionary (SE11). This database object carries the SQL power and resides at the database layer. Consuming the CDS View by its DDL source name invokes the database object residing at the database layer, i.e. the SQL inside the DDL source. This way, we can execute the SQL without creating an extra database connection between the ABAP layer and the database; only the results are transferred back to the ABAP layer. This saves the resources needed for creating a database connection from the ABAP layer to the database layer.

I would like to request HANA Experts to provide some more insight and justification of using
CDS View Entity (DDL Source) in SELECTs.
Finally, the program to show usage of CDS View with Parameter.
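The actual program is shown as a screenshot. A minimal sketch of such a report could look like the code below; the parameter p_lang and the output style are our additions, since in Open SQL all parameters of a CDS view must be supplied:

REPORT ysapyard_cds_with_para.   " hypothetical program name

PARAMETERS: p_status TYPE j_status.

START-OF-SELECTION.
  " Select from the CDS entity (DDL source) name, passing the view parameters.
  SELECT * FROM ycds_wo_stat_txt_para( p_stat = @p_status,
                                       p_lang = @sy-langu )
    INTO TABLE @DATA(lt_wo_status).

  IF sy-subrc = 0.
    cl_demo_output=>display_data(
      EXPORTING
        value = lt_wo_status
        name  = 'CDS View with Parameters' ).
  ENDIF.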
Prior to release 7.40, if we had the requirement to add to the output an additional column which did not exist in the SAP table, filled with some custom logic, we usually wrote something like the below.

We defined the TYPES. We looped through the table and added the custom logic (High
Purchase or Low Purchase) as shown below.

TYPES: BEGIN OF ty_ekpo,
         ebeln    TYPE ebeln,
         ebelp    TYPE ebelp,
         werks    TYPE ewerk,
         netpr    TYPE bprei,
         pur_type TYPE char14,
       END OF ty_ekpo.

DATA: it_ekpo TYPE STANDARD TABLE OF ty_ekpo.

FIELD-SYMBOLS <fs_ekpo> TYPE ty_ekpo.

SELECT ebeln ebelp werks netpr
  FROM ekpo
  INTO TABLE it_ekpo.

LOOP AT it_ekpo ASSIGNING <fs_ekpo>.

  IF <fs_ekpo>-netpr GT 299.
    <fs_ekpo>-pur_type = 'High Purchase'.
  ELSE.
    <fs_ekpo>-pur_type = 'Low Purchase'.
  ENDIF.

ENDLOOP.

IF it_ekpo IS NOT INITIAL.
  cl_demo_output=>display_data(
    EXPORTING
      value = it_ekpo
      name  = 'Old AGE SQL : 1' ).
ENDIF.
Let us see how we can achieve the same thing in a new way. With ABAP 7.40 and above, we get rid of the TYPES, the data declaration and the loop. Isn't it cool?

Sample 1 ( Using comma separated fields with inline data declaration and usage of CASE
for reference fields)

SELECT ebeln, ebelp, werks, netpr,
       CASE
         WHEN netpr > 299
         THEN 'High Purchase'
         ELSE 'Low Purchase'
       END AS pur_type
  FROM ekpo
  INTO TABLE @DATA(lt_sales_order_header).

IF sy-subrc = 0.
  cl_demo_output=>display_data(
    EXPORTING
      value = lt_sales_order_header
      name  = 'New AGE SQL : 1' ).
ENDIF.

Outputs from both the above techniques are the same. But the path does matter, doesn't it?
If you have some confusion regarding HANA, check this popular post: SAP HANA from Space
Level.

Next, let us check the powerful inbuilt functions in SELECT.

Sample 2 ( Using JOIN and COUNT / DISTINCT functions in SELECT )

PARAMETERS: p_matnr TYPE matnr,
            p_lgort TYPE lgort_d.

SELECT mara~matnr,
       mard~lgort,
       COUNT( DISTINCT ( mard~matnr ) ) AS distinct_mat,   " Unique Number of Material
       COUNT( DISTINCT ( mard~werks ) ) AS distinct_plant, " Unique Number of Plant
       SUM( mard~labst ) AS sum_unrest,
       AVG( mard~insme ) AS avg_qlt_insp,
       SUM( mard~vmspe ) AS sum_blocked
  FROM mara AS mara INNER JOIN mard AS mard
    ON mara~matnr EQ mard~matnr
  INTO TABLE @DATA(lt_storage_loc_mat)
  UP TO 1000 ROWS
  WHERE mard~matnr = @p_matnr
    AND mard~lgort = @p_lgort
  GROUP BY mara~matnr,
           mard~lgort.

IF sy-subrc = 0.
  cl_demo_output=>display_data(
    EXPORTING
      value = lt_storage_loc_mat
      name  = 'New AGE SQL : 2' ).
ENDIF.

DISTINCT Material is 1 and DISTINCT Plant is 2. SUM for the Unrestricted stock is 2, AVG is 2/2
= 1 and SUM of Blocked stock is 2. This is just a sample to showcase how versatile and
powerful the SELECT statement has become.
Next on our menu today are the mathematical operators in SELECT. Check the below snippet, where we can directly assign 10 (as the rebate percent) to a column of the internal table. The CEIL function, multiplication, subtraction etc. can be handled within the SELECT statement. If we were not on 7.40, we would have needed a separate loop and a bunch of code to achieve this. Isn't ABAP really modern now?

Sample 3 ( Using vivid mathematical operators in SELECT )

DATA: lv_rebate TYPE p DECIMALS 2 VALUE '0.10'.

SELECT ebeln,
       10 AS rebate_per,
       CEIL( netpr ) AS whole_ord_net,
       ( @lv_rebate * netpr ) AS rebate,
       ( netpr - ( @lv_rebate * netpr ) ) AS act_net
  FROM ekpo
  USING CLIENT '130'
  UP TO 10 ROWS
  INTO TABLE @DATA(lt_po_data).

IF sy-subrc = 0.
  cl_demo_output=>display_data(
    EXPORTING
      value = lt_po_data
      name  = 'New AGE SQL : 3' ).
ENDIF.
Not only is mathematics fun with ABAP 7.40, but so is logical programming. Continue below to taste the new flavour.

Sample 4 ( Using Complex Case statement on non-referenced fields i.e. multiple in one
Select )

PARAMETERS: p_werks TYPE werks_d.
DATA:
  lv_rebate      TYPE p DECIMALS 2 VALUE '0.10',
  lv_high_rebate TYPE p DECIMALS 2 VALUE '0.30'.

SELECT ebeln,
       werks,
       CEIL( netpr ) AS whole_ord_net,
       ( @lv_rebate * netpr ) AS rebate,
       ( netpr - ( @lv_rebate * netpr ) ) AS act_net,

       CASE WHEN werks = @p_werks    " For specific plant
            THEN @lv_rebate
            ELSE @lv_high_rebate
       END AS rebate_type,

       CASE WHEN werks = @p_werks    " For specific plant
            THEN 'low rebate'
            ELSE 'high rebate'
       END AS low_high

  FROM ekpo
  USING CLIENT '130'
  UP TO 25 ROWS
  INTO TABLE @DATA(lt_po_data).

IF sy-subrc = 0.
  cl_demo_output=>display_data(
    EXPORTING
      value = lt_po_data
      name  = 'New AGE SQL : 4' ).
ENDIF.

COALESCE's literal meaning from the dictionary is to come together and form one mass or whole, or to combine (elements) in a mass or whole.

According to SAP documentation, the COALESCE function in Open SQL returns the value of the argument arg1 (if this is not the null value); otherwise, it returns the value of the argument arg2. A blank must be placed after the opening parenthesis and before the closing parenthesis, and a comma must be placed between the arguments.

Check the usage below. If data for ekko~lifnr is present (meaning a PO has been created for the vendor), then the LIFNR (vendor number) from EKKO is printed; otherwise, the literal 'No PO' is output. This function is quite handy in many real, practical scenarios.

Sample 5 (Using COALESCE and logical operators like GE / GT / LE / LT etc. in a JOIN, which was originally not available)

SELECT lfa1~lifnr,
       lfa1~name1,
       ekko~ebeln,
       ekko~bukrs,
       COALESCE( ekko~lifnr, 'No PO' ) AS vendor
  FROM lfa1 AS lfa1 LEFT OUTER JOIN ekko AS ekko
    ON lfa1~lifnr EQ ekko~lifnr
   AND ekko~bukrs LT '0208'
  INTO TABLE @DATA(lt_vend_po)
  UP TO 100 ROWS.

IF sy-subrc = 0.
  cl_demo_output=>display_data(
    EXPORTING
      value = lt_vend_po
      name  = 'New AGE SQL : 5' ).
ENDIF.

How many times and in how many projects did you have the requirement to print Plant and
Plant description together like 0101 (Houston Site) or in forms you had the requirement to
write Payee (Payee Name)? We achieved it by looping and concatenating. We did not have a better option earlier, but now we can do it while selecting the data. Thanks to the SAP development team.

Sample 6 (Concatenation while selecting data )

SELECT lifnr
       && '(' && name1 && ')' AS Vendor,
       ORT01 AS city
  FROM lfa1
  INTO TABLE @DATA(lt_bp_data)
  UP TO 100 ROWS.
IF sy-subrc = 0.
  cl_demo_output=>display_data(
    EXPORTING
      value = lt_bp_data
      name  = 'New AGE SQL : 6' ).
ENDIF.
Every report/conversion/interface asks us to validate the input data, and we do it by checking its existence in the check table. That has become easier and better now, as shown below.
Sample 7 ( Check existence of a record )

SELECT SINGLE @abap_true
  FROM mara
  INTO @DATA(lv_exists)
  WHERE MTART = 'IBAU'.
IF lv_exists = abap_true.
  WRITE:/ 'Data Exists!! New AGE SQL : 7'.
ENDIF.

ABAP has always been a fourth-generation programming language (4GL) and it has become even more so. It has become more readable and closer to real life syntactically too. The HAVING clause is another feather in the cap.

Sample 8 (Use of the HAVING clause in SELECT)

SELECT lfa1~lifnr,
       lfa1~name1,
       ekko~ebeln,
       ekko~bukrs
  FROM lfa1 AS lfa1 INNER JOIN ekko AS ekko
    ON lfa1~lifnr EQ ekko~lifnr
   AND ekko~bukrs LT '0208'
  INTO TABLE @DATA(lt_vend_po)
  GROUP BY lfa1~lifnr, lfa1~name1, ekko~ebeln, ekko~bukrs
  HAVING lfa1~lifnr > '0000220000'.

IF sy-subrc = 0.
  cl_demo_output=>display_data(
    EXPORTING
      value = lt_vend_po
      name  = 'New AGE SQL : 8' ).
ENDIF.

Remember, sometimes we need to select all fields of more than one table and provide custom
names in the output. Wasn't it tiresome to create TYPEs to achieve our requirement?

Sample 9 (Selecting all columns with renaming of fields; this is handy in case you have to do an all-field select)

I thought with ABAP 740, I could do the below.


SELECT jcds~*,
       tj02t~*
  FROM jcds INNER JOIN tj02t
    ON jcds~stat = tj02t~istat
  WHERE tj02t~spras = @sy-langu
  INTO TABLE @DATA(lt_status)
  UP TO 1000 ROWS.
IF sy-subrc = 0.
  cl_demo_output=>display_data(
    EXPORTING
      value = lt_status
      name  = 'New AGE SQL : 9' ).
ENDIF.

The above code is syntactically correct. Wow!! I was so excited to test it as it would show all
columns from both the tables.

OOPs!! We get the above message. Too early to be so happy.

Let us modify the same code a little bit. We need to define the TYPEs and declare the
internal table (Inline did not work above).

TYPES BEGIN OF ty_data.
        INCLUDE TYPE jcds  AS status_change RENAMING WITH SUFFIX _change.
        INCLUDE TYPE tj02t AS status_text   RENAMING WITH SUFFIX _text.
TYPES END OF ty_data.

DATA: lt_status TYPE STANDARD TABLE OF ty_data.
SELECT jcds~*,
       tj02t~*
  FROM jcds INNER JOIN tj02t
    ON jcds~stat = tj02t~istat
  WHERE tj02t~spras = @sy-langu
  INTO TABLE @lt_status
  UP TO 100 ROWS.

IF sy-subrc = 0.
  cl_demo_output=>display_data(
    EXPORTING
      value = lt_status
      name  = 'New AGE SQL : 9' ).
ENDIF.

Check that _CHANGE is added to the field names. _TEXT is likewise added to the column names from the second table (not captured in the screenshot below).

These were just the tip of the iceberg. We will stumble upon more features and surprises as we work on projects in real systems. Just to let you know, all the above code snippets are from a traditional (non-HANA) database on release 7.4. So do not assume that we need a HANA database to take advantage of modern SQL techniques; we just need release 7.4 or above.

SQL Script and SAP HANA Stored Procedure

Introduction to SQL Script and SAP HANA Stored Procedure

In the previous post (New Age SQL for ABAP), we explored the modern SQL which helps push the code down to the database and helps us with performance improvement. The new-age SQL is also concise and allows us to do things which were never possible in ABAP earlier. In this article, we will look at SQL Script and the basics of Stored Procedures.

SQL Script Definition?

SAP HANA SQL document says: SQL Script is a collection of extensions to the Structured
Query Language (SQL).

Google/Wiki says: An SQL script is a set of SQL commands saved as a file in SQL Scripts. An
SQL script can contain one or more SQL statements or PL/SQL blocks. You can use SQL Scripts
to create, edit, view, run and delete script files.

SAP further simplifies it: SQL Script is an extension to ANSI standard SQL. It is an interface for applications to access the SAP HANA database. SQL Script is the language which can be used for the creation of stored procedures in HANA.
It can contain declarative and orchestration logic.
SELECT queries and Calculation Engine (CE) functions follow declarative logic.
DDL, DML, assignments and imperative statements follow orchestration logic.

Data transfer between database and application layer can be eliminated using SQL Script.
Calculations can be executed in the database layer using SQL Script to obtain maximum
benefit out of SAP HANA database. It provides fast column operations, query optimization and
parallel execution (you will read these lines time and again, in different words, in this post).

Motivation?

SQL Script can be utilized to write data-intensive logic into the database instead of writing
code in the application server. Before ABAP 7.40, most of the data needed for manipulation was copied from the database to the application server, and all calculations, filtering and other logic were implemented on that data. This technique is a strict no-no for optimization and
performance improvement of the ABAP code. SQL Script helps to take maximum benefit of
modern hardware and software innovation to perform massive parallelization on multi-core
CPUs.

SAP suggests that SQL Script comes into the picture when HANA modeling constructs like Analytic or Attribute views fall short. Someone's failure is another one's success.

Why?

Simply for Code to Data(base) shift.

Data transfer between database and application layer can be eliminated using SQL Script.
Calculations can be executed in the database layer using SQL Script to obtain maximum
benefit out of SAP HANA database. It provides fast column operations, query optimization and
parallel execution (you will read these lines time and again, in different words, in this post).

What is SQL Script? Why do we need SQL Script? What is the motivation for having SQL Script?

Did we not answer these What, Why and What above? OK, let's start from the beginning. The relational database model (RDBMS) was introduced back in the 1970s by Edgar F. Codd (you might remember it from the college curriculum; anything ringing a bell? Or did I just help you remember one of your cute crushes from your college days?).
As per the RDBMS model, the database must be normalized (1NF, 2NF, 3NF, BCNF and 4NF) in order to have ACID properties of the data.

Google says: In computer science, ACID (Atomicity, Consistency, Isolation, Durability) is a set
of properties of database transactions. Read more about ACID properties here.

A simple example would be splitting of data into Header and Item to pass the ACID test. In
other words, data is stored in two-dimensional tables with the foreign key relationship
instead of having redundant rows and columns in one table.

But the use of digital media has exploded in the recent past both in the consumer world and
enterprise world (in a way both are the same thing). This has led to an exponential increase
in the amount of the data being stored in the databases. On the other hand, the expectation
from users is minimum response time, in some cases zero response time.

We can take the example of TATKAL IRCTC online train ticket booking. There will be a few hundred thousand, if not a million, users who want to book a Tatkal ticket, and the expectation is that there should not be any delay from the system. Two hundred thousand transactions (form fill-up, validation, payment using credit/debit card or online banking) per minute was one of the criteria in the IRCTC quote for the vendor.

For our readers who are outside India, TATKAL's literal English translation is INSTANT. You can think of TATKAL train booking as an Amazon Black Friday sale of the iPhone 6S at $99. The sale begins exactly at 10:00 AM on 11/24/2016, till stocks last. Isn't an iPhone 6S at $99 an amazing deal? Even if you already have an iPhone 6S, you would still try to buy it. Exactly at 10:00 AM, thousands of users try to order that phone. Most users cannot log in; the system is hung. Some lucky ones who are able to log in are not able to hit the BUY button. A few others who were successful at hitting the BUY button are still waiting for the payment screen. The few lucky ones who have successfully entered the payment get the final message: Sorry, the iPhone 6S is out of stock. Please try later.

HANA is able to deliver this: virtually no response lag, by using techniques which are both hardware and software innovations. Hence it is called an Appliance and not just another database. This is a separate topic altogether, which we have covered in SAP HANA from Space Level.

Now, if we want to use the power of fast computing of HANA Database, we have to push all
the data intensive computations from application server (ABAP Server) to HANA Database
layer. Here SQL Script plays the major part in doing this.

Like any SQL language, SQL Script is used for querying the Database, in this case, HANA
Database. SQL Script is as per SQL 92 Standards. This is the sole language used for writing
Stored Procedures in the HANA Database.

How does it differ from SQL statements in ABAP?

i) Normal SQL returns only one result set, while SQL Script can return multiple result sets.
ii) Modularization is possible in SQL Script, i.e. huge, intricate business logic can be split into smaller pieces of code which are more readable and understandable.
iii) Local variables for intermediate results can be defined in SQL Script; normal SQL needs globally visible data types/views for intermediate logic.
iv) Control statements like IF/ELSE are available in SQL Script but not in normal SQL (see the small sketch below).
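To make points iii) and iv) concrete, here is a minimal sketch of such orchestration logic, assuming a release that supports anonymous SQLScript blocks (DO BEGIN ... END) in the SQL console; the logic itself is only illustrative:

-- Local scalar variable plus IF/ELSE, executed directly in the SQL console
DO
BEGIN
  DECLARE lv_count INTEGER;

  SELECT COUNT(*) INTO lv_count FROM "DUMMY";

  IF :lv_count > 0 THEN
    SELECT 'At least one row found' AS "RESULT" FROM "DUMMY";
  ELSE
    SELECT 'No rows found' AS "RESULT" FROM "DUMMY";
  END IF;
END;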

SQL Script follows the code-to-data paradigm by pushing data-intensive computations to the HANA database. With this, it eliminates the transfer of mass data from the DB to the application server (ABAP AS). This fully exploits the capability of the HANA database, achieving maximum throughput with minimal response time.
SQL Script is a very powerful tool. We have traditionally avoided joins and ORDER BY clauses in ABAP SQL statements; all of these are welcome in ABAP 7.40. We can also use a query inside a query, etc.

SQL statements can be broadly divided into the below three categories:

Data Manipulation Language (DML): SELECT, INSERT, UPDATE
Data Definition Language (DDL): CREATE, ALTER, DROP
Data Control Language (DCL): GRANT, REVOKE

SQL Script also supports the below primitive data types:

TINYINT, SMALLINT, INTEGER, BIGINT, DECIMAL (p, s), REAL, FLOAT, DOUBLE, VARCHAR, NVARCHAR, CLOB, NCLOB, VARBINARY, BLOB, DATE, TIME, TIMESTAMP

Read more about primitive data types here.

Table Creation and Alteration

We can create a table by using the GUI or by writing SQL Statement.

a) Create using SQL Statement

create column table "<Schema_name>"."ZZSTUDENT"( "ROLLNUMBER" NVARCHAR (10) not null,
  "NAME" NVARCHAR (10),
  "YEAR" NVARCHAR (4) );

Our schema name was SYSTEM, so our SQL looks like below.

create column table "SYSTEM"."ZZSTUDENT"( "ROLLNUMBER" NVARCHAR (10) not null,
  "NAME" NVARCHAR (10),
  "YEAR" NVARCHAR (4) );

Hopefully, you know by now that you need to be in the SAP HANA Development Perspective,
choose your schema and write in the SQL Console. When you hit execute, the table is
created.
b) Create using GUI

For the GUI, click on New Table; for the SQL script above, click on Open SQL Console.

Both (SQL and GUI) achieve the same function of creating the table.

Hit the execute button.

The tables that are created will be available in the respective Schema.
CREATE COLUMN TABLE "<SCHEMA_NAME>"."ZZENROLL"( "CODE" NVARCHAR (10) NOT NULL,
  "ROLLNUMBER" NVARCHAR (10) NOT NULL,
  "YEAR" NVARCHAR (4) );

CREATE COLUMN TABLE "<SCHEMA_NAME>"."ZZCOURSE"( "CODE" NVARCHAR (10) NOT NULL,
  "NAME" NVARCHAR (10));

With the above statements, we have created column tables (ZZENROLL, ZZCOURSE). Along
with these, we can also create a Table Type (LT_OUT) and row-storage tables.

CREATE TYPE "<SCHEMA_NAME>"."LT_OUT" AS TABLE ( "STUDENT_NAME" VARCHAR (10) NOT NULL,
  "COURSE_CODE" VARCHAR (10),
  "COURSE" VARCHAR (10));

Some examples of ALTER TABLE


a) Adding new field

ALTER TABLE "<SCHEMA_NAME>"."ZZSTUDENT" ADD ("CITY" VARCHAR (10) NULL);

b) Altering/Changing field type

ALTER TABLE "<SCHEMA_NAME>"."ZZSTUDENT" ALTER ("CITY" VARCHAR (30) NULL);

This changes the type from VARCHAR (10) to VARCHAR (30).

c) Altering Table Type

ALTER TABLE "<SCHEMA_NAME>"."ZZSTUDENT" ALTER TYPE ROW;

Insert Data into table

Data can be inserted using SQL Console. Below are some examples:

INSERT INTO "<SCHEMA_NAME>"."ZZSTUDENT" ("ROLLNUMBER", "NAME", "CITY") VALUES ( '10', 'SACHIN', 'MUMBAI');

INSERT INTO "<SCHEMA_NAME>"."ZZCOURSE" VALUES('100','HINDI');
INSERT INTO "<SCHEMA_NAME>"."ZZCOURSE" VALUES('200','ENGLISH');
INSERT INTO "<SCHEMA_NAME>"."ZZCOURSE" VALUES('300','MATHS');

INSERT INTO "<SCHEMA_NAME>"."ZZENROLL" VALUES ( '100', '10', '2005');
INSERT INTO "<SCHEMA_NAME>"."ZZENROLL" VALUES ( '200', '10', '2005');
INSERT INTO "<SCHEMA_NAME>"."ZZENROLL" VALUES ( '300', '10', '2005');

SQL query examples

Let us see some SQL query examples on the above data which we have populated.

a) Let's start with a simple query

SELECT NAME
  FROM "<SCHEMA_NAME>"."ZZSTUDENT"
  WHERE ROLLNUMBER = '10';
b) Nested Select or Select inside a Select (name of students who have enrolled for course
code 100)

SELECT NAME
  FROM "<SCHEMA_NAME>"."ZZSTUDENT"
  WHERE ROLLNUMBER IN (SELECT ROLLNUMBER
                         FROM "<SCHEMA_NAME>"."ZZENROLL"
                         WHERE CODE = '100');

c) A join example

SELECT A.NAME AS STUDENT_NAME,
       B.CODE AS COURSE_CODE,
       C.NAME AS COURSE
  FROM "<SCHEMA_NAME>"."ZZSTUDENT" AS A
  INNER JOIN "<SCHEMA_NAME>"."ZZENROLL" AS B
    ON A.ROLLNUMBER = B.ROLLNUMBER
  INNER JOIN "<SCHEMA_NAME>"."ZZCOURSE" AS C
    ON B.CODE = C.CODE
  WHERE C.CODE = '100';
These are very basic examples, only for the concept. In real time it would not be this
simple. Hopefully the above examples give you the hang of SQL Script. It might be a little different
for ABAPers, but it is not from another planet. We have been writing Open SQL in
ABAP, and the above SQL Scripts are its close cousins. Nothing to be scared of.

Stored Procedure

Stored Procedure is the natural choice for the next topic, as SQL Script is the only
language used for creating Stored Procedures. A procedure is a unit/block of related code
that performs a certain task. ABAPers can relate Stored Procedures to
subroutines or methods (not truly, though). The motivation for having procedures
is reusability.

All the advantages of SQL Script are there in Stored Procedures. SAP HANA procedures
help us put data-intensive, complex logic into the database, where it can be fine-tuned and
optimized for performance and return a small result set. Procedures help control
the network and processor load by not transferring large data volumes from the database layer to
the application layer. Stored Procedures can return multiple scalar (single-value) and
tabular/array results, which is not possible in normal SQL. Like in ABAP
programming, local variables can be declared and used in procedures, so we do not
need to create temporary tables for storing intermediate data as in the case of
normal SQL. A small sketch of this follows.
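As a quick, hedged illustration of returning a scalar and a tabular result from one call (again using the demo ZZSTUDENT and ZZCOURSE tables from above; the procedure name is only an example):

CREATE PROCEDURE "<SCHEMA_NAME>"."ZZ_STUDENT_STATS"(
       OUT EV_STUDENT_COUNT INTEGER,
       OUT ET_COURSES TABLE ( "CODE" NVARCHAR (10), "NAME" NVARCHAR (10) ) )
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER READS SQL DATA AS
BEGIN
  -- scalar output
  SELECT COUNT(*) INTO EV_STUDENT_COUNT FROM "<SCHEMA_NAME>"."ZZSTUDENT";
  -- tabular output from the same call
  ET_COURSES = SELECT "CODE", "NAME" FROM "<SCHEMA_NAME>"."ZZCOURSE";
END;

Calling it with CALL "<SCHEMA_NAME>"."ZZ_STUDENT_STATS"( ?, ? ); in the SQL Console returns both outputs at once.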

General rule

Each statement has to be completed with a semicolon (;). A scalar variable is assigned with :=
and the value of any variable is read by prefixing it with a colon (:), for example :IV_CODE.
A tiny sketch follows.
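A tiny, hedged sketch of these rules (imagined inside a procedure body, reusing the ZZENROLL table from above; the variable names are only illustrations):

DECLARE lv_code NVARCHAR (10);        -- local scalar variable
lv_code := '100';                     -- scalar assignment
lt_enroll = SELECT * FROM "<SCHEMA_NAME>"."ZZENROLL"
             WHERE CODE = :lv_code;   -- value read with the : prefix; every statement ends with ;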
An example of a Stored Procedure using the SQL Console is below. Please note we need to create the Procedure
in the SAP HANA Modeler Perspective.

CREATE PROCEDURE _SYS_BIC.ZZPROCEDURE(
    IN  IV_CODE   NVARCHAR(10),
    OUT LT_OUTPUT "<SCHEMA_NAME>"."LT_OUT")
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER AS

/********* Begin Procedure Script ************/
BEGIN
  LT_OUTPUT = SELECT A.NAME AS STUDENT_NAME,
                     B.CODE AS COURSE_CODE,
                     C.NAME AS COURSE
                FROM "<SCHEMA_NAME>"."ZZSTUDENT" AS A
                INNER JOIN "<SCHEMA_NAME>"."ZZENROLL" AS B
                  ON A.ROLLNUMBER = B.ROLLNUMBER
                INNER JOIN "<SCHEMA_NAME>"."ZZCOURSE" AS C
                  ON B.CODE = C.CODE
                WHERE C.CODE = :IV_CODE;
END;
/********* End Procedure Script ************/

One can also create a Stored Procedure with the help of the GUI. This is much faster and one tends to
make fewer human errors.

Right click on content -> select the Procedure


Put the SQL Script (same as above) in between BEGIN and END (ideally Output and Input
Parameters should be created).

Create the output parameters: Right click on Output, Input Parameters and declare the
name and types.
Click on save and validate

Click on activate

To test the procedure created above, we need to call the procedure in the SQL Console.
The generic syntax for calling a procedure is below.

CALL PROCEDURE_NAME (value1, value2, ...);

For our example:

CALL ZZPROCEDURE(100,?)

Food for thought: Check what error we get if we just write the below SQL without ? as the second
parameter.

CALL ZZPROCEDURE(100)

Database Procedure Proxy

We have created a procedure in the HANA Database. Till now, only half the job is done. If we
want to achieve the Code Push Down paradigm, the next part is calling the
procedure from SAP ECC. This is achieved using a Database Procedure Proxy.

Go to File -> New -> Others -> Database Procedure Proxy.


Provide the HANA Procedure name.

ZZ12MYDBPROXY is the name of the proxy. Choose the transport or save as local.
Click on finish

Click on Activate button as shown below


The same Database Procedure Proxy can be displayed in SE24 at the ABAP AS level.

Calling this database proxy is very similar to calling a function module/method.

CALL DATABASE PROCEDURE zz12mydbproxy
  EXPORTING iv_code   = p_code
  IMPORTING lt_output = tb_output.

Sample program to consume the HANA Stored Procedure in ABAP and display the output.

1
2 **---------------------------------------------------------------------*
3 ** TYPES *
4 **---------------------------------------------------------------------*
5 TYPES:
6
7 BEGIN OF ty_output,
8 student_name TYPE char10,
9 course_code TYPE char10,
10 course TYPE char10,
11 END OF ty_output.
12
13 **---------------------------------------------------------------------*
14 ** DATA *
15 **---------------------------------------------------------------------*
16 DATA:
17 it_output TYPE TABLE OF ty_output.
18
19 **---------------------------------------------------------------------*
20 ** SELECTION SCREEN *
21 **---------------------------------------------------------------------*
22 SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE text-s01.
23 PARAMETERS: p_code TYPE char10.
24 SELECTION-SCREEN END OF BLOCK b1.
25
26 **---------------------------------------------------------------------*
27 ** START-OF-SELECTION. *
28 **---------------------------------------------------------------------*
29 START-OF-SELECTION.
30 * Consume the stored procedure in ABAP
31 PERFORM call_db_procedure.
32
33 **---------------------------------------------------------------------*
34 ** END-OF-SELECTION. *
35 **---------------------------------------------------------------------*
36 END-OF-SELECTION.
37 * Display the output
38 PERFORM display_output.
39
40 **&---------------------------------------------------------------------*
41 **& SUB ROUTINES
42 **&---------------------------------------------------------------------*
43
44 *&---------------------------------------------------------------------*
45 *& Form CALL_DB_PROCEDURE
46 *&---------------------------------------------------------------------*
47 * Consume the database procedure
48 *----------------------------------------------------------------------*
49 FORM call_db_procedure.
50
51 * Calling Database Procedure
52 CALL DATABASE PROCEDURE zz12mydbproxy
53 EXPORTING
54 iv_code = p_code
55 IMPORTING
56 lt_output = it_output.
57
58 ENDFORM.
59
60 *&---------------------------------------------------------------------*
61 *& Form DISPLAY_OUTPUT
62 *&---------------------------------------------------------------------*
63 * Display the Report
64 *----------------------------------------------------------------------*
65 FORM display_output .
66
67 * Display the output
68 cl_demo_output=>display_data( it_output ).
69
70 ENDFORM.

Let us test it.

Output

We showed that a Database Proxy is one way to consume a Stored Procedure in ABAP. The other way
is calling it through Native SQL. Let us extend this long post a little longer. This is the last
part, trust me.

Consumption of HANA Stored Procedure in ABAP

The two methods of consuming a HANA Stored Procedure in our ABAP programs are:

i) Calling the SAP HANA Stored Procedure through Native SQL

ii) Using a Database Procedure Proxy to expose the HANA Procedure (already seen above)

Both have pros and cons, but the Database Proxy has an upper hand over Native SQL.

So-called advantages of the Native SQL process over the Database Proxy

a) Easy development and lesser effort. Once we have the stored procedure created in the
HANA DB, we just need to write native SQL to access the procedure directly.
b) No extra ABAP artifact means less maintenance. Since there is no other ABAP artifact to be
created (like the Database Proxy), there is less maintenance in this case.
c) Native SQL development can be done in SAP GUI as well as ADT, whereas the DB proxy has
to be created via ADT only.

The advantages of the Database Proxy over the Native SQL process

a) The Native SQL process is a little tedious and prone to human error.
b) Full advantage of the ABAP Development Tools can be taken for the Database Proxy.
c) ABAP developers find a Database Procedure Proxy call similar to Function
Module/Method calls, hence more comfortable.
d) In the case of any change in the Database Procedure, the code-changing process is manual for
native SQL. But for the proxy it is semi-manual, and the proxy can be synchronized (merged/deleted).

We have just scratched the surface. We need to dig a little deeper to appreciate the
power of SQL Script and Stored Procedures. We can have a separate, detailed post on the
consumption of Stored Procedures in ABAP. Also, we can check how we can debug the
procedures.

ADBC ABAP DataBase Connectivity

In our earlier post, we learned about the Bottom-Up Approach in SAP HANA. In this article, we
will look at database connectivity. Although the title says SAP ABAP for HANA, let
me clarify: ADBC (ABAP DataBase Connectivity) is not a proprietary feature of HANA. It
is database independent. Years ago, even before we heard about HANA, ABAPers used
to connect to the underlying database explicitly using native SQL and perform the needful
activity. If you have ever had the opportunity to work in that area, then you would remember
how you used something like the below code snippet (with or without knowing what you were
doing).

EXEC SQL.
  <Native SQL statements to perform the requirement>
ENDEXEC.

Why was there the need to use Native SQL?

Answer: Performance is not always the culprit. The most generic reason why
Native SQL was used is that the database tables were not available in the SAP Data Dictionary. Yes,
you read it right. There are numerous tables in the database which do not find the dignity of
residing at both places (database and SAP Data Dictionary). And the business might be using those
database-specific tables for some business case. In such cases, native SQL used to be the life
saver.

Some salient features of Native SQL

1. Native SQL allows us to use database-specific SQL statements in an ABAP program.
2. No syntax check of the SQL statement is performed. If there is an issue, we come to know
only at runtime.
3. The data is transported between the database table and the ABAP program using host
variables. Host variables? Forget it. They are the same work areas and variables (like in Open SQL)
which have an additional : (colon) in front.
For the sake of clarity:

EXEC SQL.
  SELECT matnr, mtart, bismt
    INTO :wa_mara
    FROM mara
    WHERE matnr = :p_matnr
ENDEXEC.

The above example does not justify the usage of native SQL, as MARA resides at both
places. Just replace MARA with something like an ORA_INV_MGT table which is not available in
SE11. So, in the above example, concentrate on :P_MATNR and :WA_MARA (the host variables).

Let us also recap the salient features of Open SQL:
1. Open SQL provides a uniform syntax and semantics for all of the database systems
supported by SAP. Therefore it is called Open: open to all databases.
What does the above statement mean?

ABAP programs that only use Open SQL statements will work in any SAP system, regardless of
the underlying database system.

2. Open SQL statements can only work on database tables that have been created/replicated
in the ABAP Dictionary.
3. Open SQL can be used via secondary database connections too.

Read more about New Age Open SQL ABAP 740

I think we have built up enough background and refresher to finally come to the topic of the
day, i.e. ADBC. If native SQL was already doing what Open SQL could not do, then what was
the need for introducing another jargon, ADBC? Sometimes if you make something look
complex, people tend to think it superior and better. But ADBC is not just
another bombastic word. It is definitely better than native SQL, as explained below.

ADBC is an object-based API. This API lets us see where native SQL calls have been made and
supports exception handling better. Technically, ADBC still sends native SQL which is
executed at the database layer. But ADBC makes the process of connecting to the database
and transferring the native SQL code to be executed at the database layer smoother and
more organized. In simple terms, an object-oriented approach is used by ADBC to connect to the
database and perform the needed task.

The object-oriented approach brings flexibility with it: ADBC calls show up in the WHERE-USED
LIST, and error handling of the same native SQL code is better in ADBC.

Salient features of ADBC

1. Just like native SQL, the syntax check cannot catch issues in the code which the underlying
database is expecting. We need to handle the exceptions properly (usually cx_sql_exception
is caught).

2. Hashed and sorted tables are not allowed as the target. So, the standard table is still the
king.

3. If you are using ADBC, do not forget to handle the client/mandt explicitly in your code.

4. ADBC does not necessarily release the allocated memory/resources on the DB. As a good
practice, we should always close the query.

There are 8 generic steps performed in an ADBC call (a condensed sketch follows the list):

1. Set up the database connection (CL_SQL_CONNECTION=>GET_CONNECTION)
2. Instantiate the statement object (CL_SQL_STATEMENT)
3. Construct the SQL using CONCATENATE syntax or string operations (check the syntax with the SQL Console
in HANA Studio, or use t-code DBACOCKPIT if you are not on the HANA DB yet)
4. Issue the native SQL call (EXECUTE_QUERY, EXECUTE_DDL, EXECUTE_UPDATE)
There are three methods to execute SQL statements:
EXECUTE_QUERY for queries (SELECT statements). An instance of CL_SQL_RESULT_SET is
returned as the result of the query.
EXECUTE_DDL for DDL (CREATE, DROP or ALTER). No returning parameter.
EXECUTE_UPDATE for DML (INSERT, UPDATE or DELETE). Returns the number of table rows
processed in ROWS_PROCESSED.
5. Assign the target variable for the result set (CL_SQL_RESULT_SET, methods SET_PARAM(),
SET_PARAM_TABLE())
6. Retrieve the result set (CL_SQL_RESULT_SET=>NEXT_PACKAGE)
7. Close the query and release resources (CL_SQL_RESULT_SET, method CLOSE())
8. Close the database connection (CL_SQL_CONNECTION, method CLOSE())
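To make the eight steps concrete, here is a minimal, hedged ABAP sketch. Error handling is trimmed to a single CATCH, and the SELECT on the standard SFLIGHT demo table plus all variable names are only illustrations, not part of the full program further below:

TYPES: BEGIN OF ty_flight,
         carrid TYPE s_carr_id,
         connid TYPE s_conn_id,
       END OF ty_flight.
DATA lt_flights TYPE STANDARD TABLE OF ty_flight.
DATA lr_data    TYPE REF TO data.

TRY.
    " 1. Connection (the default/primary DB when no name is passed)
    DATA(lo_con)  = cl_sql_connection=>get_connection( ).
    " 2. Statement object bound to that connection
    DATA(lo_stmt) = NEW cl_sql_statement( con_ref = lo_con ).
    " 3. Build the native SQL as a string
    DATA(lv_sql)  = |SELECT CARRID, CONNID FROM SFLIGHT WHERE MANDT = '{ sy-mandt }'|.
    " 4. Issue the native SQL call
    DATA(lo_res)  = lo_stmt->execute_query( lv_sql ).
    " 5. Assign the target internal table for the result set
    GET REFERENCE OF lt_flights INTO lr_data.
    lo_res->set_param_table( lr_data ).
    " 6. Retrieve the result set
    lo_res->next_package( ).
    " 7. and 8. Close the query and the connection
    lo_res->close( ).
    lo_con->close( ).
  CATCH cx_sql_exception INTO DATA(lx_sql).
    DATA(lv_msg) = lx_sql->get_text( ).
    MESSAGE lv_msg TYPE 'I'.
ENDTRY.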

Important Classes in ADBC


We have been singing Object Oriented Approach for quite some time in this article, so some
of the classes and methods do need a mention here. What do you guys say? . The

above 8 steps help us narrow down to three important classes in ADBC.

1. CL_SQL_CONNECTION
2. CL_SQL_STATEMENT
3. CL_SQL_RESULT_SET

Error handling is one of the important advantages of ADBC, so CX_SQL_EXCEPTION is the fourth
important class in ADBC.

The below code shows the usage of ADBC in ABAP with HANA as the database. The most
important part is building the native SQL correctly (using string operations or the CONCATENATE
statement) as per the database and passing it as a string.
If you are on HANA, it is a good practice to test the native SQL in the SQL editor in HANA Studio.

Also Read: Know about SAP HANA Studio icons and buttons

If the database is not HANA and you do not have the SQL editor (HANA Studio), do not be
disheartened. You can still check the native SQL in t-code DBACOCKPIT. It is shown a little below in
this article.

For HANA Database users, your first ADBC program is below. The ADBC API in the program is
self-explanatory and easy to implement. So, EXEC SQL ... ENDEXEC would definitely be a
thing of the past. This program is for those lucky ones who are already on the HANA database.
Others can scroll down to find the program for the non-HANA system. This program
would not return any result if you are not on HANA, as the native SQL is dependent on the
database. The native SQL written below is compatible with HANA only.

1
2 * Type for output
3 TYPES: BEGIN OF ty_result,
4 matnr TYPE matnr,
5 mtart TYPE mtart,
6 maktx TYPE maktx,
7 END OF ty_result.
8
9 * Data declaration
10 DATA: lr_sql_connection TYPE REF TO cl_sql_connection,
11 lr_sql_statement TYPE REF TO cl_sql_statement,
12 lr_sql_result_set TYPE REF TO cl_sql_result_set,
13 lr_sql_exception TYPE REF TO cx_sql_exception,
14 lr_sql_parameter_invalid TYPE REF TO cx_parameter_invalid,
15 lr_parameter_invalid_type TYPE REF TO cx_parameter_invalid_type,
16 lr_salv_exception TYPE REF TO cx_salv_msg,
17 lr_salv_alv TYPE REF TO cl_salv_table,
18 lt_result TYPE STANDARD TABLE OF ty_result,
19 ls_result TYPE ty_result,
20 lr_data TYPE REF TO data,
21 lv_where_clause_statement TYPE string,
22 lv_error_text TYPE string,
23 lv_where_mandt TYPE string,
24 lv_where_spras TYPE string.
25 * Selection screen fields
26 SELECT-OPTIONS : s_matnr FOR ls_result-matnr,
27 s_mtart FOR ls_result-mtart.
28
29 * Connect to database (HANA or Non-HANA)
30 * 1 Set the database connection
31 PERFORM make_db_connection.
32
33 * Instantiate SQL Statement
34 * i.e Get the SQL Statement reference using the instance of the connection
35 * 2. Instantiate the statement object
36 PERFORM ini_sql_statement.
37
38 * Prepare Native SQL statements
39 * 3. Construct the SQL using Concatenate syntax or string operation
40 PERFORM prepare_native_sql_string.
41
42 * Using the reference of the statement call, the respective methods to execute the query
43 * 4. Issue Native SQL Call
44 PERFORM issue_native_sql_call.
45
46 * Get the result of the query in a table
47 * 5. Assign Target variable for result set
48 PERFORM assign_target_result.
49
50 * 6. Retrieve Result set
51 PERFORM retrieve_complete_result_set.
52
53 * 7. Close the query, release resource
54 PERFORM close_query.
55
56 * 8. Close DB Connection
57 PERFORM close_db_connection.
58
59 * 9. Display output
60 PERFORM display_result.
61 **&---------------------------------------------------------------------*
62 **& Sub Routines
63 **&---------------------------------------------------------------------*
64 *&---------------------------------------------------------------------*
65 *& Form MAKE_DB_CONNECTION
66 *&---------------------------------------------------------------------*
67 * Connect to database
68 *----------------------------------------------------------------------*
69 FORM make_db_connection .
70
71 TRY.
72 * Get the DB (HANA/Non HANA) Connection
73 * If we do not pass the DB name, it would pull the default database
74 lr_sql_connection ?= cl_sql_connection=>get_connection( ).
75
76 * 10. Catch errors/exceptions (if any)
77 CATCH cx_parameter_invalid_type INTO lr_parameter_invalid_type.
78 lv_error_text = lr_parameter_invalid_type->get_text( ).
79 MESSAGE e000 WITH lv_error_text.
80
81 CATCH cx_parameter_invalid INTO lr_sql_parameter_invalid.
82 lv_error_text = lr_sql_parameter_invalid->get_text( ).
83 MESSAGE e001 WITH lv_error_text.
84
85 CATCH cx_sql_exception INTO lr_sql_exception.
86 lv_error_text = lr_sql_exception->get_text( ).
87 MESSAGE e001 WITH lv_error_text.
88
89 CATCH cx_salv_msg INTO lr_salv_exception.
90 lv_error_text = lr_salv_exception->get_text( ).
91 MESSAGE e001 WITH lv_error_text.
92
93 ENDTRY.
94
95 ENDFORM.
96 *&---------------------------------------------------------------------*
97 *& Form INI_SQL_STATEMENT
98 *&---------------------------------------------------------------------*
99 * Instantiate the statement object
100 *----------------------------------------------------------------------*
101 FORM ini_sql_statement .
102 IF lr_sql_connection IS BOUND.
103
104 TRY.
105
106 * Get the SQL Statement reference using the instance of the connection
107 CREATE OBJECT lr_sql_statement
108 EXPORTING
109 con_ref = lr_sql_connection. " Database Connection
110
111 * 10. Catch errors/exceptions (if any)
112 CATCH cx_parameter_invalid_type INTO lr_parameter_invalid_type.
113 lv_error_text = lr_parameter_invalid_type->get_text( ).
114 MESSAGE e000 WITH lv_error_text.
115
116 CATCH cx_parameter_invalid INTO lr_sql_parameter_invalid.
117 lv_error_text = lr_sql_parameter_invalid->get_text( ).
118 MESSAGE e001 WITH lv_error_text.
119
120 CATCH cx_sql_exception INTO lr_sql_exception.
121 lv_error_text = lr_sql_exception->get_text( ).
122 MESSAGE e001 WITH lv_error_text.
123
124 CATCH cx_salv_msg INTO lr_salv_exception.
125 lv_error_text = lr_salv_exception->get_text( ).
126 MESSAGE e001 WITH lv_error_text.
127
128 ENDTRY.
129
130 IF lr_sql_connection IS NOT BOUND.
131 MESSAGE 'No reference to SQL Statements made' TYPE 'I'.
132 LEAVE LIST-PROCESSING.
133 ENDIF.
134
135 ELSE.
136 MESSAGE 'No connection established' TYPE 'I'.
137 LEAVE LIST-PROCESSING.
138 ENDIF.
139 ENDFORM.
140 *&---------------------------------------------------------------------*
141 *& Form PREPARE_NATIVE_SQL_STRING
142 *&---------------------------------------------------------------------*
143 * Construct the SQL using Concatenate syntax or string operation
144 *----------------------------------------------------------------------*
145 FORM prepare_native_sql_string .
146
147 * In line data declaration and converting selection option to a where clause string for S_MATNR
148 DATA(lr_seltab) = cl_lib_seltab=>new( it_sel = s_matnr[] ).
149 DATA(lv_where_clause_sel) = lr_seltab->sql_where_condition( iv_field = 'M.MATNR' ).
150
151 * In line data declaration and converting selection option to a where clause string for S_MTART
152 DATA(lr_seltab2) = cl_lib_seltab=>new( it_sel = s_mtart[] ).
153 DATA(lv_where_clause_sel2) = lr_seltab2->sql_where_condition( iv_field = 'M.MTART' ).
154
155 *--------------------------------------------------------------------*
156 * Begin of script for HANA Database
157 *--------------------------------------------------------------------*
158 * Construct the SQL in SQL Console Eclipse and put it in a string ( Native SQL Only )
159 * Modern syntax for concatenation
160 lv_where_clause_statement = | SELECT M.MATNR, M.MTART, T.MAKTX |
161 && | FROM MARA AS M INNER JOIN MAKT AS T |
162 && | ON M.MATNR = T.MATNR |
163 && | WHERE M.MANDT = '{ sy-mandt }' |
164 && | AND T.SPRAS = '{ sy-langu }' |
165 && | AND { lv_where_clause_sel } |
166 && | AND { lv_where_clause_sel2 } |
167 && | ORDER BY M.MATNR |.
168 *--------------------------------------------------------------------*
169 * End of script for HANA Database
170 *--------------------------------------------------------------------*
171
172 ** Modern syntax for Concatenation
173 * lv_where_mandt = |'| && |{ sy-mandt }| && |'|.
174 * lv_where_spras = |'| && |{ sy-langu }| && |'|.
175 *
176 * lv_where_mandt = |M.MANDT = | && | { lv_where_mandt }|.
177 * lv_where_spras = |T.SPRAS = | && | { lv_where_spras }|.
178 *
179 **--------------------------------------------------------------------*
180 ** Begin of script for ORACLE Database
181 **--------------------------------------------------------------------*
182 ** Construct the SQL in SQL Console Eclipse and put it in a string ( Native SQL Only )
183 * lv_where_clause_statement = | SELECT M.MATNR, M.MTART, T.MAKTX |
184 * && | FROM MARA M, MAKT T |
185 * && | WHERE M.MATNR = T.MATNR |
186 * && | AND { lv_where_mandt } |
187 * && | AND { lv_where_spras } |
188 * && | AND { lv_where_clause_sel } |
189 * && | AND { lv_where_clause_sel2 } |.
190 **--------------------------------------------------------------------*
191 ** End of script for ORACLE Database
192 **--------------------------------------------------------------------*
193
194 * If you find difficulty in understanding above concatenate/string operation,
195 * Then check below. It does the same thing as above.
196 * CONCATENATE '''' sy-mandt '''' INTO lv_where_mandt.
197 * CONCATENATE '''' sy-langu '''' INTO lv_where_spras.
198 *
199 * CONCATENATE 'M.MANDT = ' lv_where_mandt INTO lv_where_mandt SEPARATED BY space.
200 * CONCATENATE 'T.SPRAS = ' lv_where_spras INTO lv_where_spras SEPARATED BY space.
201 *
202 * construct the sql in sql command editor in dbacockpit
203 * below sql works for oracle database
204 * concatenate 'SELECT M.MATNR, M.MTART, T.MAKTX'
205 * 'FROM MARA M, MAKT T'
206 * 'WHERE M.MATNR = T.MATNR'
207 * 'AND' lv_where_mandt
208 * 'AND' lv_where_spras
209 * 'and' lv_where_clause_sel
210 * 'and' lv_where_clause_sel2
211 * into lv_where_clause_statement separated by space.
212
213 ENDFORM.
214 *&---------------------------------------------------------------------*
215 *& Form ISSUE_NATIVE_SQL_CALL
216 *&---------------------------------------------------------------------*
217 * Issue Native SQL Call
218 *----------------------------------------------------------------------*
219 FORM issue_native_sql_call .
220
221 TRY.
222
223 * Using the reference of the statement call the respective methods to execute the query
224 lr_sql_statement->execute_query(
225 EXPORTING
226 statement = lv_where_clause_statement " SELECT Statement Being Executed
227 hold_cursor = space
228 RECEIVING
229 result_set = lr_sql_result_set ). " Database Cursor
230
231 * 10. Catch errors/exceptions (if any)
232 CATCH cx_parameter_invalid_type INTO lr_parameter_invalid_type.
233 lv_error_text = lr_parameter_invalid_type->get_text( ).
234 MESSAGE e000 WITH lv_error_text.
235
236 CATCH cx_parameter_invalid INTO lr_sql_parameter_invalid.
237 lv_error_text = lr_sql_parameter_invalid->get_text( ).
238 MESSAGE e001 WITH lv_error_text.
239
240 CATCH cx_sql_exception INTO lr_sql_exception.
241 lv_error_text = lr_sql_exception->get_text( ).
242 MESSAGE e001 WITH lv_error_text.
243
244 CATCH cx_salv_msg INTO lr_salv_exception.
245 lv_error_text = lr_salv_exception->get_text( ).
246 MESSAGE e001 WITH lv_error_text.
247
248 ENDTRY.
249
250 ENDFORM.
251 *&---------------------------------------------------------------------*
252 *& Form ASSIGN_TARGET_RESULT
253 *&---------------------------------------------------------------------*
254 * Assign Target variable for result set
255 *----------------------------------------------------------------------*
256 FORM assign_target_result .
257
258 TRY.
259
260 * Get the result of the query in a table
261 GET REFERENCE OF lt_result INTO lr_data.
262 lr_sql_result_set->set_param_table(
263 EXPORTING
264 itab_ref = lr_data ). " Reference to Output Variable
265
266 * 10. Catch errors/exceptions (if any)
267 CATCH cx_parameter_invalid_type INTO lr_parameter_invalid_type.
268 lv_error_text = lr_parameter_invalid_type->get_text( ).
269 MESSAGE e000 WITH lv_error_text.
270
271 CATCH cx_parameter_invalid INTO lr_sql_parameter_invalid.
272 lv_error_text = lr_sql_parameter_invalid->get_text( ).
273 MESSAGE e001 WITH lv_error_text.
274
275 CATCH cx_sql_exception INTO lr_sql_exception.
276 lv_error_text = lr_sql_exception->get_text( ).
277 MESSAGE e001 WITH lv_error_text.
278
279 CATCH cx_salv_msg INTO lr_salv_exception.
280 lv_error_text = lr_salv_exception->get_text( ).
281 MESSAGE e001 WITH lv_error_text.
282
283 ENDTRY.
284 ENDFORM.
285 *&---------------------------------------------------------------------*
286 *& Form RETRIEVE_COMPLETE_RESULT_SET
287 *&---------------------------------------------------------------------*
288 * Retrieve Result set
289 *----------------------------------------------------------------------*
290 FORM retrieve_complete_result_set .
291
292 TRY.
293
294 lr_sql_result_set->next_package( ).
295
296 CATCH cx_parameter_invalid_type INTO lr_parameter_invalid_type.
297 lv_error_text = lr_parameter_invalid_type->get_text( ).
298 MESSAGE e000 WITH lv_error_text.
299
300 CATCH cx_parameter_invalid INTO lr_sql_parameter_invalid.
301 lv_error_text = lr_sql_parameter_invalid->get_text( ).
302 MESSAGE e001 WITH lv_error_text.
303
304 CATCH cx_sql_exception INTO lr_sql_exception.
305 lv_error_text = lr_sql_exception->get_text( ).
306 MESSAGE e001 WITH lv_error_text.
307
308 CATCH cx_salv_msg INTO lr_salv_exception.
309 lv_error_text = lr_salv_exception->get_text( ).
310 MESSAGE e001 WITH lv_error_text.
311
312 ENDTRY.
313
314 ENDFORM.
315 *&---------------------------------------------------------------------*
316 *& Form CLOSE_QUERY
317 *&---------------------------------------------------------------------*
318 * Close the query, release resources
319 *----------------------------------------------------------------------*
320 FORM close_query .
321
322 lr_sql_result_set->close( ).
323
324 ENDFORM.
325 *&---------------------------------------------------------------------*
326 *& Form CLOSE_DB_CONNECTION
327 *&---------------------------------------------------------------------*
328 * Close DB connection
329 *----------------------------------------------------------------------*
330 FORM close_db_connection .
331
332 lr_sql_connection->close( ).
333
334 ENDFORM.
335 *&---------------------------------------------------------------------*
336 *& Form DISPLAY_RESULT
337 *&---------------------------------------------------------------------*
338 * Display ALV
339 *----------------------------------------------------------------------*
340 FORM display_result .
341
342 * Display the data in an ALV
343 cl_salv_table=>factory(
344 IMPORTING
345 r_salv_table = lr_salv_alv " Basic Class Simple ALV Tables
346 CHANGING
347 t_table = lt_result ).
348
349 * Show the output
350 lr_salv_alv->display( ).
351
352 ENDFORM.

Let us check the output for HANA database users.


For other database users, your first ADBC program is the same as above with a little change.
Native SQL is not platform independent. In order to make the native SQL compatible with the
ORACLE database, just comment out the code between the two tags:
* Begin of script for HANA Database
* End of script for HANA Database

And uncomment the code between the two tags:
* Begin of script for ORACLE Database
* End of script for ORACLE Database

Program to demonstrate ADBC using a non-HANA (Oracle) database: ADBC usage for an Oracle DB.

The relevant code is in subroutine PREPARE_NATIVE_SQL_STRING in the above code snippet.

If the native SQL is not prepared correctly, we get errors like the ones shown here.

In debug mode we can verify that it is connected to the ORACLE system.

Let us check the output of the same program for ORACLE database users.

DBACOCKPIT
If you are on the HANA database, you can easily check the syntax of native SQL in the SQL editor in
HANA Studio. But if you do not have a HANA database, you can check the native SQL of your
database using t-code DBACOCKPIT. Just follow the path shown in the below image. Execute or
hit F8, and if there is any issue in the SQL, you can easily find it in the error/message log
window at the bottom.
Check the native SQL for the ORACLE database. The JOIN written for ORACLE here is different:
there is no explicit JOIN keyword, and the two tables to be joined are separated by a
comma. I had to waste a few hours just to figure this out (as I have no ORACLE SQL
experience) :). Also, check that the selected fields are separated by commas and there is no tilde (~) as
in Open SQL joins.
Have questions about HANA? Check SAP HANA from Space Level.

Some frequently asked questions on ADBC.

1. If a table resides both in the Data Dictionary and the database, does it make sense to use native
SQL and/or ADBC so that the table is accessed at the database level itself?
Answer: SAP/HANA experts say that if the table resides both in the database and the SAP Data
Dictionary, Open SQL should always be the first choice. Open SQL is optimized for
communication with the database. If someone tries to be adventurous by using native SQL or
ADBC when it is not needed, it might actually worsen performance because of the overhead (like
connection, constructor calls, statement class, query etc.) in the ADBC framework.

2. If a table resides only in the database, what should be used? Native SQL via EXEC SQL
... ENDEXEC, or ADBC?
Answer: SAP/HANA experts say ADBC should be the choice in this case (even though EXEC
SQL ... ENDEXEC would do the same job). Not necessarily for any performance advantage, but for
the ease of programming, clean OO concepts, better error handling and a more modern approach.

3. Can we have a secondary database connection from more than one ABAP system to a single
HANA database?
Answer: Yes, we can connect to the same secondary HANA database system from more than
one ABAP system and use Open SQL to query the data. But we need to make sure all the
custom tables and extensions to standard tables are identical in all ABAP systems and the HANA
database (i.e. the ABAP-based systems and the DDIC information on the DB tables are identical).

For example, a custom table YSAPYARD is defined in ABAP system YARD1 with 10 fields, and
the same table YSAPYARD has two extra fields in ABAP system YARD2. But the HANA database
has been updated with only 10 fields. So, if someone does a SELECT * from system YARD2
(which has 2 extra fields), then there would be a problem, as the database and ABAP system
information are not the same.

So if we want to connect to the same HANA database from multiple ABAP systems, we need to
take care of such subtle details. In order to make Open SQL work with a secondary
database connection, the table definition must exist in the ABAP Data Dictionary and must
match exactly (names, data types etc.).

4. Is the database connection from ABAP specific to HANA technology?

Answer: No. ADBC is not a HANA-specific technology. It is supported for all ABAP-supported
database type/operating system combinations. It can be used for connecting to
ORACLE/MSSQL (Microsoft SQL Server) etc. from ABAP, as long as the respective ORACLE/MSSQL kernel
files are loaded into the ABAP system.

5. What is the syntax to call a specific database system?

Answer: lr_sql_connection ?= cl_sql_connection=>get_connection( 'ORA' ). A short sketch follows.
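A short, hedged sketch of that call (assuming a secondary connection named 'ORA' has been maintained in table DBCON; the rest of the flow is identical to the generic ADBC steps shown earlier):

TRY.
    " 'ORA' is the name of a secondary connection maintained in DBCON (assumption)
    DATA(lo_con)  = cl_sql_connection=>get_connection( 'ORA' ).
    DATA(lo_stmt) = NEW cl_sql_statement( con_ref = lo_con ).
    " ... build the Oracle-specific native SQL and execute it as in steps 3 to 7 above ...
    lo_con->close( ).
  CATCH cx_sql_exception INTO DATA(lx_sql).
    DATA(lv_msg) = lx_sql->get_text( ).
    MESSAGE lv_msg TYPE 'I'.
ENDTRY.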

6. Can ADBC return more than one output table?

Answer: No. The ADBC interface only allows one parameter table, so we cannot receive more
than one table output from one ADBC call. We need to call ADBC multiple times to return
multiple tables.

AMDP ABAP Managed Database Procedure


ABAP Managed Database Procedures, or AMDP. Another jargon. Does it sound bombastic? I was
scared when I heard it for the first time. But when you ponder a little deeper, the
concept is in the name itself. An AMDP is a Database Procedure which is Managed by ABAP. It is
not a database thing. It is governed and managed by ABAP. So, ABAPers are bound to love it and
use it to the fullest. Like CDS Views, only the ABAP transports (ABAP Class/Method) of an
AMDP need to be transported, and we need not worry about the corresponding underlying
HANA artifacts. Both CDS and AMDP fall in the Top-Down Approach of HANA, which is
recommended by SAP.
We need to be on an ABAP system which is on release 7.4 SP05 or higher, with HANA as the
primary database. By now you have guessed correctly: AMDP works only with HANA as the
primary database. But AMDP is conceptually designed to work with any database and any
language. This is clear from the way we define the AMDP method: we need to let the method
know the database and the language. For HANA, the database is HDB and the language is SQLScript.
The SAP documentation says: currently, AMDP only supports database procedures of the SAP HANA
database. But in principle, AMDP is designed so that stored procedures from other
database systems can also be supported.
AMDP can detect database-independent syntax errors, HANA-specific syntax errors and SQLScript
errors.
Parameters not passed by value, wrong parameter types etc. are database-independent
issues. Type-mapping checks or wrong default values are HANA-specific errors.
Did you Read? SAP HANA for Beginners from a Beginner?
Still scared of this AMDP bomb?
Let us make it simpler. All children who wear the same school uniform belong to one
school. One of those students has a special badge on his/her shirt. He/she is identified as the Head
Boy/Girl. They have access to all rooms/areas like any other student, and they also
have special keys with which they can enter areas/rooms which are prohibited for other
students.

Let us co-relate the above example with SAP ABAP. All students = CLASS. If a class has the
marker interface IF_AMDP_MARKER_HDB (the student badge), then it is an AMDP class (head
boy/girl). If one or more METHODs of the AMDP class have the keyword BY DATABASE
PROCEDURE (the special key of the head boy/girl), then they are AMDP methods.

That's it. You now know that any class which has the marker interface
IF_AMDP_MARKER_HDB and at least one method with the keyword BY DATABASE PROCEDURE
is an AMDP class. Period!!

Let us check how an AMDP Class and Method looks in the real scenario.

1
2 CLASS zcl_sapyard_po_amdp DEFINITION
3 PUBLIC
4 FINAL
5 CREATE PUBLIC .
6
7 PUBLIC SECTION.
8
9 INTERFACES if_amdp_marker_hdb.
10
11 * TYPEs here
12 TYPES: BEGIN OF lty_po_data,
13 * field1,
14 * field2,
15 END OF lty_po_data.
16
17 * AMDP Method
18 METHODS get_po_data
19 IMPORTING VALUE(ip_client) TYPE mandt
20 VALUE(ip_lifnr) TYPE s_lifnr
21 EXPORTING VALUE(ex_po_data) TYPE lty_po_data.
22
23 * Non AMDP Method
24 METHODS display_po_data
25 IMPORTING ex_po_data TYPE lty_po_data.
26
27 PROTECTED SECTION.
28
29 PRIVATE SECTION.
30
31 ENDCLASS.
1
2 CLASS zcl_sapyard_po_amdp IMPLEMENTATION.
3
4 * AMDP Method
5 METHOD get_po_data BY DATABASE PROCEDURE
6 FOR HDB
7 LANGUAGE SQLSCRIPT
8 OPTIONS READ-ONLY
9 USING ekko ekpo.
10
11 * Logic to Select/Join/Loop etc to populate ex_po_data
12 * ex_po_data = logic here
13
14 ENDMETHOD.
15
16 * Non-AMDP Method
17 METHOD display_po_data.
18
19 * Logic to display ex_po_data
20 * ALV Call
21
22 ENDMETHOD.
23
24 ENDCLASS.
25
26

Let us join the dots better.

The class zcl_sapyard_po_amdp depicted in the figure below is a global class (you can view it in
SE24) and has the interface marker tag if_amdp_marker_hdb. Theoretically, there can be
more than one if_amdp_marker_XXX tag, with the suffix XXX indicating the database system
for which the AMDPs (database procedures) can be implemented in the AMDP methods of the
AMDP class.

Looking at the interface marker tag (last three letters), it makes us believe that AMDP is not
HANA-database specific, as it has a provision to include other databases. But for now, let us
concentrate only on HDB and wait for further releases and documentation from SAP where
they show AMDP for non-HANA databases. Why would they do that?
In the public section of the class definition, add the mandatory
interface if_amdp_marker_hdb. You can have your own data definitions (TYPES,
CONSTANTS, DATA etc.) and methods as well in this space. But we must have at least one method
which is an AMDP method. This so-called AMDP method can have importing
parameter(s) and exporting table output(s), but all of them must be passed by VALUE.

Looking at the class DEFINITION, we can guess that the method get_po_data can be an
AMDP method, as it meets the pre-requisite of passing all parameters by VALUE. But just by
looking at the definition, we cannot say for sure whether it really is an AMDP method. However, we can
say for sure that the second method display_po_data is NOT an AMDP method, as it does not
meet the basic requirement of passing by VALUE.

To confirm whether the method get_po_data is really an AMDP method, we need to look at the
IMPLEMENTATION. In the implementation, if you find the keyword BY DATABASE
PROCEDURE, it is an AMDP method.

Look at the figure below for more clarity on what we discussed above.
What is the motivation behind AMDP?
Answer: Stored Procedures have been supported by all databases, and they can be created and
called using ABAP code. Native SQL was the method to consume Stored Procedures before
ABAP 7.4. Now we can use ADBC, as it has advantages (OO design, where-used analysis,
exception handling etc.) over a direct Native SQL call.

Read more about ADBC ABAP DataBase Connectivity.

ADBC can be Bottom-Up and manage the complete lifecycle of the stored procedure outside
the ABAP stack. We then need to make sure the stored procedure is deployed in all database
systems, and we need to take care of the different ABAP database schema names and systems like
the development box, testing box, quality box, pre-production and production system.

ADBC can also be Top-Down. Surprised!!! Yes, it can follow the Top-Down Approach. When we
concatenate the native SQL statements in our own program, call the database and execute
those SQL statements, it is Top-Down. This removes the need for handling the database
artifacts in each system of the landscape, and everything can be handled by the normal
transport. But do you think creating a complex stored procedure by concatenating strings
in ABAP is that easy? You might build native SQL code for simple selects and other normal stuff
and build your program. But a complex, real project requirement is more than just a DEMO
program. Most developers (ABAPers like me) are not familiar with native SQL
(and database-specific languages), and ADBC still lacks a native SQL check at compile time.

So, the motivation is crystal clear. With AMDP, the creation, modification, activation and
transport are all handled at the ABAP layer, i.e. the stored procedure runtime objects on HDB are
created and managed by AMDP in the ABAP AS. Also, the SQLScript source code is managed at the ABAP
AS by AMDP. The SQLScript syntax check also happens against HDB (but not against other databases),
unlike with ADBC.

What are the restrictions on AMDP methods?

Answer:
1. RETURNING parameters cannot be used. When you can have
IMPORTING/EXPORTING/CHANGING parameters, who cares for RETURNING parameters,
right?

2. Parameters have to be passed by VALUE.

3. Parameters can only be either table or scalar. That means only variables, structures and
simple internal tables can be passed. No deep structures, no complex tables (tables within a
table), i.e. no nested tables.
4. If ABAP Dictionary structures are used for typing the parameters, the method cannot be implemented as an
AMDP.
5. Whatever ABAP Dictionary tables, views, other procedures etc. you want to use in the AMDP
method have to be declared in the implementation using the keyword USING (in the above figure,
EKKO and EKPO are passed).
How are AMDP Methods called?
Answer: AMDP Method call is not special. They are called like any other normal class method.
But AMDP methods are executed like static methods, even though they are defined as
instance methods.

When does an AMDP execute in the underlying database?

Answer: When an AMDP is executed, the ABAP kernel calls the database procedure in the
underlying database (SAP HANA).

AMDP makes the database procedure available at runtime in the database. Database
procedures are created when they are called by AMDP for the first time. This is called the Lazy
Approach. Wikipedia says: Lazy loading is a design pattern commonly used in computer
programming to defer initialization of an object until the point at which it is needed. JIT,
right? Just In Time.

If we make any change in the source code of the database procedure or any dependent objects,
then a new version of the database procedure is created and the old versions are deleted
asynchronously (taking their own sweet time :)).

Before we proceed forward, let us refresh our CDS Concept in SAP HANA.

Where are AMDPs created?

Answer: From SAP NetWeaver 7.4 SPS 05, i.e. ABAP release 740 Service Pack level 05, AMDPs
can be created in ABAP in Eclipse (the Eclipse-based environment, i.e. ADT: ABAP Development
Tools). We need to be in the ABAP Perspective. We can view the class and methods in SE24 in the
ABAP Workbench (GUI), but we cannot edit them in the GUI. Although AMDPs are created in
Eclipse, they are saved at the ABAP layer. So developers are concerned only with ABAP
artifacts. No need to worry about database artifacts and system handling in the different
environments of the same landscape.

AMDPs are defined at the ABAP layer, but they are dependent on the underlying database so that
they can exploit the database in use to the fullest. As they are database dependent, the
implementation language differs based on the database. SQLScript is the implementation
language for HDB, so playing with AMDP on HDB is the same as implementing SQLScript in our
ABAP programs. On another database, the implementation language may not be SQLScript.

Check the error message which we get when we try to edit an AMDP class in the GUI.
Do you want an example of a standard SAP AMDP?
Answer: Check the standard class CL_CS_BOM_AMDP provided by SAP.

Go to t-code SE24.

Check the Interfaces tab. You will find IF_AMDP_MARKER_HDB, which makes the class an AMDP class.

Check the source code of the methods MAT_REVISION_LEVEL_SELECT,
MAT_BOM_CALC_QUANTITY, MAT_DETERMINE_HEADER etc. The keywords BY DATABASE
PROCEDURE FOR HDB and LANGUAGE SQLSCRIPT are waiting for you.
Look at the IMPORTING and EXPORTING parameters. Passed by VALUE.

You might like to refer to this AMDP class and its methods for examples of SQLScript, SELECTs, JOINs etc.
and their usage.

Custom AMDP Class and Method and its usage in custom ABAP program

In your Eclipse environment / HANA Studio / ADT, go to the ABAP Perspective. From the menu,
click on ABAP Class.
Provide the package name, the class name you want to create and a description. Provide the class
definition and implementation. Do not forget to provide the marker interface in the Public
section of the class definition and the keywords in the AMDP method. The below example
shows that both AMDP methods and non-AMDP methods can co-exist in an AMDP class.
Let us check how we can call the custom AMDP Class in our custom ABAP Program.

1
2 REPORT zmm_tcode_role_report NO STANDARD PAGE HEADING
3 LINE-COUNT 132.
4
5 *--------------------------------------------------------------------*
6 * DATA DECLARATION
7 *--------------------------------------------------------------------*
8 * Inline data declaration for the AMDP Class Instance
9 DATA(lr_data) = NEW zcl_user_role_amdp( ).
10
11 *--------------------------------------------------------------------*
12 * SELECTION SCREEN
13 *--------------------------------------------------------------------*
14 SELECTION-SCREEN: BEGIN OF BLOCK block1 WITH FRAME TITLE text-t01.
15 PARAMETERS p_tcode TYPE tcode.
16 SELECTION-SCREEN: END OF BLOCK block1.
17
18 *--------------------------------------------------------------------*
19 * INITIALIZATION.
20 *--------------------------------------------------------------------*
21
22 *--------------------------------------------------------------------*
23 * START-OF-SELECTION.
24 *--------------------------------------------------------------------*
25 START-OF-SELECTION.
26
27 * Calling the AMDP method to get the data
28 CALL METHOD lr_data->get_t_code_role_matrix
29 EXPORTING
30 ip_tcode = p_tcode
31 ip_object = 'S_TCODE'
32 ip_langu = sy-langu
33 ip_line = '00000'
34 IMPORTING
35 ex_it_tcode_role = DATA(it_tcode_role).
36
37 *--------------------------------------------------------------------*
38 * If you are in ABAP 740 and SP 5 and above but still not in HANA,
39 * You can connect from Eclipse/HANA Studio and create AMDP but
40 * cannot execute in database layer. You can try below code for
41 * normal Class-Method call.
42 *--------------------------------------------------------------------*
43 ** Normal method call at AS ABAP Layer
44 * CALL METHOD lr_data->get_t_code_role_matrix_nonamdp
45 * EXPORTING
46 * ip_tcode = p_tcode
47 * ip_object = 'S_TCODE'
48 * ip_langu = sy-langu
49 * ip_line = '00000'
50 * IMPORTING
51 * ex_it_tcode_role = DATA(it_tcode_role).
52 *--------------------------------------------------------------------*
53
54 *--------------------------------------------------------------------*
55 * END-OF-SELECTION.
56 *--------------------------------------------------------------------*
57 END-OF-SELECTION.
58
59 * Publishing the data in an output
60 cl_demo_output=>display_data(
61 EXPORTING
62 value = it_tcode_role
63 name = 'AMDP Usage to display the TCode and Role' ).

Let us test our custom program and AMDP usage.

Provide the T-Code as the Input.

The output shows two Roles. The program uses AMDP Method.

Find the above AMDP Class Method Code Snippet here.

Find the above Custom Program which consumes the AMDP here.

The above program and AMDP class use one PARAMETER as input on the selection screen.
Handling of PARAMETERS is easy. In the next post, we show how we can handle
SELECT-OPTIONS in AMDP.
What happens if we change the name of an existing AMDP method?
Answer: The method name is automatically updated in the class, which we can see in the GUI.

AMDP with SELECT OPTIONS

In the example demonstrated in the earlier article, all the selection screen elements were
PARAMETERS. Using PARAMETERS in the AMDP method SELECTs was straightforward. Today
we will show how we can pass SELECT-OPTIONS from the screen to AMDP methods and use
them. Please note, we cannot directly pass select-options as-is to AMDP methods. This is
one limitation of AMDP. We need to select the data from the database and then apply the
filter using the function APPLY_FILTER.
Let us hit it hard again: AMDP class methods cannot take SELECT-OPTIONS as input.
So the SELECT-OPTIONS need to be converted to a FILTER STRING in some way, and
then the FILTER STRING is passed as an input PARAMETER of the AMDP method.

The actual syntax to filter the selected data would look like below:

* Filtration based on Selection screen input
ex_it_tcode_role = APPLY_FILTER( :ex_it_tcode_role, :ip_filters );

EX_IT_TCODE_ROLE would have all the data and APPLY_FILTER would keep the subset using
IP_FILTERS value.

How do we pass IP_FILTERS?

Answer: It has to be passed as a STRING.

METHODS get_t_code_role_matrix
  IMPORTING
    VALUE(ip_object)  TYPE agobject
    VALUE(ip_langu)   TYPE menu_spras
    VALUE(ip_line)    TYPE menu_num_5
    VALUE(ip_filters) TYPE string " PARAMETER for the SELECT-OPTION string
  EXPORTING
    VALUE(ex_it_tcode_role) TYPE tt_tcode.

How do we generate the filter string from the SELECT-OPTIONS?

Answer: You are the programmer; you can find your own way of generating the filter. It should
act like a WHERE clause, or like a filter built from a RANGE table.

Do not worry, we would show you an easy way.


If S_TCODE and S_ROLE are two SELECT-OPTIONS of a program, then the string for the AMDP filter
can be generated using the class CL_SHDB_SELTAB, method COMBINE_SELTABS, as shown
below.

DATA(lv_where) = cl_shdb_seltab=>combine_seltabs(
                   it_named_seltabs = VALUE #(
                     ( name = 'TCODE' dref = REF #( s_tcode[] ) )
                     ( name = 'ROLE'  dref = REF #( s_role[] ) )
                   ) ).

If the above syntax is a little confusing, then check the alternative below for the same call.

cl_shdb_seltab=>combine_seltabs(
  EXPORTING
    it_named_seltabs = VALUE #(
      ( name = 'TCODE' dref = REF #( s_tcode[] ) )
      ( name = 'ROLE'  dref = REF #( s_role[] ) )
    )
  RECEIVING
    rv_where = DATA(lv_where) ).
Feeling better now?

Add class CL_SHDB_SELTAB method COMBINE_SELTABS on your cheat sheet.

Frequently Asked Question on HANA: SAP HANA for Beginners from a Beginner?

What does the above class method do?


Ans: See it yourself in debug mode.

I am sure by now you are curious to know how we use it in the program (after all, you are a
programmer at heart).

Real Time working Program to show handling of SELECT OPTION in AMDP:

1
2 *--------------------------------------------------------------------*
3 * Created by: www.sapyard.com
4 * Created on: 29th Nov 2016
5 * Description: This program consumes the AMDP Class/Method and
6 * shows how to send SELECT OPTIONS to AMDP and use
7 * APPLY_FILTER function in AMDP Method.
8 *--------------------------------------------------------------------*
9 REPORT zmm_tcode_role_report NO STANDARD PAGE HEADING
10 LINE-COUNT 132.
11 *--------------------------------------------------------------------*
12 * TABLES
13 *--------------------------------------------------------------------*
14 TABLES: agr_define.
15
16 *--------------------------------------------------------------------*
17 * DATA DECLARATION
18 *--------------------------------------------------------------------*
19 * Inline data declaration for the AMDP Class Instance
20 DATA(lr_data) = NEW zcl_user_role_amdp( ).
21
22 *--------------------------------------------------------------------*
23 * SELECTION SCREEN
24 *--------------------------------------------------------------------*
25 SELECTION-SCREEN: BEGIN OF BLOCK block1 WITH FRAME TITLE text-t01.
26 SELECT-OPTIONS:
27 s_tcode FOR syst-tcode,
28 s_role FOR agr_define-agr_name.
29 SELECTION-SCREEN: END OF BLOCK block1.
30
31 *--------------------------------------------------------------------*
32 * INITIALIZATION.
33 *--------------------------------------------------------------------*
34
35 *--------------------------------------------------------------------*
36 * START-OF-SELECTION.
37 *--------------------------------------------------------------------*
38 START-OF-SELECTION.
39
40 * Build where clause for data fetching
41 * Class-Method to convert the select options to a dynamic where clause which
42 * will be passed to the AMDP for data filteration after data selection
43 DATA(lv_where) = cl_shdb_seltab=>combine_seltabs(
44 it_named_seltabs = VALUE #(
45 ( name = 'TCODE' dref = REF #( s_tcode[] ) )
46 ( name = 'ROLE' dref = REF #( s_role[] ) )
47 ) ).
48
49 * Calling the AMDP method to get the data
50 CALL METHOD lr_data->get_t_code_role_matrix
51 EXPORTING
52 ip_object = 'S_TCODE'
53 ip_langu = sy-langu
54 ip_line = '00000'
55 ip_filters = lv_where
56 IMPORTING
57 ex_it_tcode_role = DATA(it_tcode_role).
58
59 *--------------------------------------------------------------------*
60 * END-OF-SELECTION.
61 *--------------------------------------------------------------------*
62 END-OF-SELECTION.
63
64 * Publishing the data in an output
65 cl_demo_output=>display_data(
66 EXPORTING
67 value = it_tcode_role
68 name = 'AMDP to show APPLY_FILTER function' ).
69 *--------------------------------------------------------------------*

Real AMDP Class Method showing usage of APPLY_FILTER for SELECT OPTIONS:

CLASS zcl_user_role_amdp DEFINITION
  PUBLIC
  FINAL
  CREATE PUBLIC.

  PUBLIC SECTION.

    INTERFACES if_amdp_marker_hdb.

    TYPES:
      BEGIN OF ty_tcode,
        tcode TYPE agval,
        ttext TYPE ttext_stct,
        role  TYPE agr_name,
        rtext TYPE agr_title,
      END OF ty_tcode.

    TYPES:
      tt_tcode TYPE STANDARD TABLE OF ty_tcode WITH DEFAULT KEY.

    METHODS get_t_code_role_matrix
      IMPORTING
        VALUE(ip_client)        TYPE mandt
        VALUE(ip_object)        TYPE agobject
        VALUE(ip_langu)         TYPE menu_spras
        VALUE(ip_line)          TYPE menu_num_5
        VALUE(ip_filters)       TYPE string
      EXPORTING
        VALUE(ex_it_tcode_role) TYPE tt_tcode.

  PROTECTED SECTION.
  PRIVATE SECTION.

ENDCLASS.

CLASS zcl_user_role_amdp IMPLEMENTATION.

  METHOD get_t_code_role_matrix
    BY DATABASE PROCEDURE
    FOR HDB
    LANGUAGE SQLSCRIPT
    OPTIONS READ-ONLY
    USING agr_1251 tstct agr_texts.

    -- select the transaction code / role matrix with its descriptive texts;
    -- the columns are aliased so that they match the ABAP type ty_tcode
    ex_it_tcode_role = select a.low      as tcode,
                              b.ttext    as ttext,
                              a.agr_name as role,
                              c.text     as rtext
                         from agr_1251 as a
                        inner join tstct as b on a.low = b.tcode
                        inner join agr_texts as c on a.agr_name = c.agr_name
                        where a.mandt  = :ip_client
                          and a.object = :ip_object
                          and b.sprsl  = :ip_langu
                          and c.spras  = :ip_langu
                          and c.line   = :ip_line
                        order by a.low, a.agr_name;

    -- filtering based on the selection-screen input (dynamic WHERE clause)
    ex_it_tcode_role = APPLY_FILTER( :ex_it_tcode_role, :ip_filters );

  ENDMETHOD.

ENDCLASS.

Also Read: ADBC ABAP DataBase Connectivity.

Some points for the explorers.

1. If you do not want to use CL_SHDB_SELTAB=>COMBINE_SELTABS to build your filter string, you can build it yourself using string templates or the CONCATENATE statement, as in the snippet below.

CONSTANTS: lc_augdt TYPE augdt VALUE '00000000'.   " Clearing date

DATA: lst_budat      LIKE LINE OF s_budat,         " One row of the posting date select-option
      lv_where_augdt TYPE string.                  " Hand-built filter string

READ TABLE s_budat INTO lst_budat INDEX 1.
IF sy-subrc = 0.
  lv_where_augdt = |AUGDT = '| && |{ lc_augdt }| &&
                   |' OR AUGDT > '| && |{ lst_budat-high }'|.
ENDIF.

It produces a string equivalent to ( AUGDT = '00000000' OR AUGDT > '20161129' ).


2. If you think the syntax below for generating the dynamic WHERE clause string is a bit complex, then try the alternative.

DATA(lv_where) = cl_shdb_seltab=>combine_seltabs(
                   it_named_seltabs = VALUE #(
                     ( name = 'TCODE' dref = REF #( s_tcode[] ) )
                     ( name = 'ROLE'  dref = REF #( s_role[]  ) )
                   ) ).

The alternative shown below is a lengthier approach, but it might be simpler and easier to understand for some of us. After all, everyone has the right to be different.

TRY.

**  Type declaration compatible with the method's input table type
    TYPES:
      BEGIN OF ty_named_dref,
        name TYPE string,
        dref TYPE REF TO data,
      END OF ty_named_dref,

      lt_named_dref TYPE STANDARD TABLE OF ty_named_dref WITH DEFAULT KEY.

**  Range tables for the select-options
    TYPES:
      lt_tcode_range_tab TYPE RANGE OF syst_tcode,
      lt_role_range_tab  TYPE RANGE OF agr_name.

    DATA:
      ls_named_dref  TYPE ty_named_dref,
      lty_named_dref TYPE lt_named_dref,
      lv_dref        TYPE REF TO data.

    FIELD-SYMBOLS: <fs_range_tab_for_sel_option> TYPE ANY TABLE.

    ls_named_dref-name = 'TCODE'.

    CREATE DATA lv_dref TYPE lt_tcode_range_tab.

    ASSIGN lv_dref->* TO <fs_range_tab_for_sel_option>.

    IF <fs_range_tab_for_sel_option> IS ASSIGNED.

      <fs_range_tab_for_sel_option> = s_tcode[].
      ls_named_dref-dref = lv_dref.
      APPEND ls_named_dref TO lty_named_dref.

    ENDIF.

    CLEAR: lv_dref, ls_named_dref.
    UNASSIGN <fs_range_tab_for_sel_option>.

    ls_named_dref-name = 'ROLE'.

    CREATE DATA lv_dref TYPE lt_role_range_tab.

    ASSIGN lv_dref->* TO <fs_range_tab_for_sel_option>.

    IF <fs_range_tab_for_sel_option> IS ASSIGNED.

      <fs_range_tab_for_sel_option> = s_role[].
      ls_named_dref-dref = lv_dref.
      APPEND ls_named_dref TO lty_named_dref.
      CLEAR ls_named_dref.

    ENDIF.

*   Create the WHERE clause
    cl_shdb_seltab=>combine_seltabs(
      EXPORTING
        it_named_seltabs = lty_named_dref
      RECEIVING
        rv_where         = DATA(lv_where) ).

  CATCH cx_shdb_exception.

ENDTRY.

Let us see in debug mode how lty_named_dref looks.

No brainer: the output of lv_where needs to be the same.

Huh!! I am sure by now you are convinced that you would rather spend some time understanding the new syntax at the top of this article than write the bunch of redundant code shown above. Our job was to place the whole menu in front of you; it is up to you to decide which one you like.
Read more: New Age Open SQL ABAP 740

3. If you observe the code for APPLY_FILTER closely, you will notice that the filtering is done only after we have already selected a bunch of potentially unwanted data. Doesn't that impact performance negatively?

See, we selected everything here.

-- Populating the intermediate table variable
ex_it_tcode_role = select a.low      as tcode,
                          b.ttext    as ttext,
                          a.agr_name as role,
                          c.text     as rtext
                     from agr_1251 as a
                    inner join tstct as b on a.low = b.tcode
                    inner join agr_texts as c on a.agr_name = c.agr_name
                    where a.mandt  = :ip_client
                      and a.object = :ip_object
                      and b.sprsl  = :ip_langu
                      and c.spras  = :ip_langu
                      and c.line   = :ip_line
                    order by a.low, a.agr_name;

Then we applied the Filter.

-- Filtering based on the selection-screen input
ex_it_tcode_role = APPLY_FILTER( :ex_it_tcode_role, :ip_filters );

Experts suggest that, wherever possible, we should apply the filter directly to the DB table and then play around with the resultant data set.

For example, if ip_code_where_clause carries the condition built from the S_TCODE select-option, then we can apply the filter directly on the database table AGR_1251.

it_codes = APPLY_FILTER( agr_1251, :ip_code_where_clause );

Thus, the APPLY_FILTER function can be applied to database tables as well as to intermediate table variables (the "internal tables" of SQLScript).

After the epic is over, let us introduce the main character of today's story, i.e. APPLY_FILTER. The APPLY_FILTER function expects two parameters:

i) The dataset to be filtered, for example AGR_1251 (a DB table or CDS view) or :ex_it_tcode_role (an intermediate table variable).

ii) The generated WHERE clause, which is passed to the AMDP method as a string.
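For illustration, if the user enters transaction codes SE38 and SE80 in the S_TCODE select-option, the string handed over in ip_filters could look roughly like this (the exact text produced by CL_SHDB_SELTAB=>COMBINE_SELTABS may differ slightly):

( TCODE = 'SE38' OR TCODE = 'SE80' )

APPLY_FILTER evaluates this condition against the columns of the dataset passed as its first parameter, so the names used in the filter string ('TCODE', 'ROLE') must match the column names of that dataset.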

4. After going through the above information, one would have a doubt: why did SAP not allow SELECT-OPTIONS to be used directly in AMDP, as in normal ABAP?
Ans: We would request experts to provide some explanation for this query.
We feel SAP deliberately chose this path to push the select-options down to the database level, in accordance with its code-to-data paradigm shift strategy. AMDPs are executed directly on the database, hence select-options in the form of a filter string can be evaluated on the database. A SELECT-OPTION, on the other hand, is just an ABAP language construct which cannot be executed directly at the database level.

Are Native SQL and Open SQL Competitors?

Are Native and Open SQL competitors? The answer is simple. No. They have their own
identity and usage.
Native SQL in a nutshell:
ABAPers would not like it if someone told them that they are not real SQL developers. After all, ABAPers rarely deal with Native SQL. Native SQL is considered the real SQL of the database in use.

If you see any code between EXEC SQL and ENDEXEC, it is Native SQL syntax.

What are the possible reasons for adopting the Native SQL approach?
Answer:
i) To access tables that are not available in the DDIC layer; here we have no choice but to use Native SQL.
ii) To use some of the special features supported by DB-specific SQL, like passing hints to the Oracle optimizer (for an index that boosts performance), etc.
What are the pitfalls of Native SQL?
Answer:
i) One of the not-so-good properties of Native SQL is that it is only loosely integrated into ABAP.
ii) There is no syntax check at compile time for Native SQL; the statements are sent directly to the database system. Handle the exception CX_SQL_EXCEPTION.
iii) No automatic client handling, no table buffering.
iv) All tables, in all schemas, can be accessed.
These drawbacks mean that developers are responsible for client handling and for accessing the correct schema. Developers also need to take care of releasing DB resources, proper locking, and handling COMMITs efficiently.
Open SQL in a nutshell:
SAP says:
Open SQL consists of a set of ABAP statements that perform operations on the central database in the SAP Web AS ABAP. It is called Open because it is database independent. Open = platform independent.
Open SQL is the only DB abstraction layer with an SQL-like syntax that defines a common semantic for all SAP-supported databases. Behind the scenes, the kernel programs are busy converting Open SQL statements to Native SQL statements for the database in use.
Open SQL can only work with database tables that have been created in the ABAP Dictionary.
Open SQL supports more standard SQL features (SQL-92):
i) Some limitations of Open SQL were removed starting with ABAP 7.4 SP05.
ii) This applies to SAP HANA and other database platforms.
Open SQL supports code pushdown:
i) Push down data-intensive computations and calculations to the HANA DB layer.
ii) Avoid bringing all the data to the ABAP layer.
According to SAP, code pushdown begins with Open SQL (see the sketch after this list):
i) Use aggregate functions where relevant instead of doing the aggregations in the ABAP layer.
ii) Use arithmetic and string expressions within Open SQL statements.
iii) Use computed columns in order to push down computations that would otherwise be done in long loops.
iv) Use CASE and/or IF..ELSE expressions within Open SQL.
If you have already read the above points somewhere else, then please ignore them. Check the tables below for a quick comparison of Native and Open SQL. I am sure you have not seen such handy tables elsewhere.

Difference between Native SQL and Open SQL

Seq No | Parameter                                                        | Native SQL | Open SQL
-------|------------------------------------------------------------------|------------|---------
1      | Compilation at ABAP layer                                        | No         | Yes
2      | Database dependency                                              | Yes        | No
3      | Table buffering possible                                         | No         | Yes
4      | Access to all schemas                                            | Yes        | No
5      | Access to ABAP Dictionary                                        | No         | Yes
6      | Access to ABAP Core Data Services views                          | No         | Yes
7      | Conversion of SQL statements to new syntax without side effects  | No         | Yes
8      | Possibility of limiting the result set using UP TO               | No         | Yes
9      | Keeps unnecessary load away from the DB                          | No         | Yes
10     | Possibility of secondary index                                   | No         | Yes
11     | Comparatively faster aggregation and calculation                 | Yes        | No
12     | Strict syntax check                                              | No         | Yes
13     | Consumption of parameterized CDS views                           | No         | Yes
14     | Mandatory use of the EXEC SQL statement                          | Yes        | No

Similarity between Native SQL and Open SQL

Seq No | Parameter                                                               | Native SQL                    | Open SQL       | Comments
-------|--------------------------------------------------------------------------|-------------------------------|----------------|----------------------------------
1      | Availability of all JOINs                                               | Yes                           | Yes            | Left, right, inner and outer joins
2      | Availability of string operations                                       | Yes                           | Yes            |
3      | Arithmetic expressions                                                  | Yes                           | Yes            |
4      | CASE expressions                                                        | Yes                           | Yes            |
5      | Usage of UNION and UNION ALL                                            | Yes                           | Yes            |
6      | Supports aggregation, joins and sub-queries                             | Yes                           | Yes            |
7      | Code pushdown                                                           | Yes - via database procedures | Yes - via AMDP |
8      | Usage of computed columns to avoid loops (e.g. aggregation, summation)  | Yes                           | Yes            |
9      | Recommendation to select specific fields rather than SELECT *           | Yes                           | Yes            |

If you have never written Native SQL code before, please refer to the Native SQL example code snippet below. Please do not ask why we did not use Open SQL; this is just an example, my friend. Ideally, we should not be writing Native SQL for the EKPO table, which is available in the DDIC layer.

Till I get a real database table example, be happy with this EXEC SQL ... ENDEXEC statement.

TYPES: BEGIN OF ty_ekpo,
         ebeln TYPE ebeln,
         ebelp TYPE ebelp,
         werks TYPE werks_d,
       END OF ty_ekpo.

DATA: wa_ekpo TYPE ty_ekpo.

PARAMETERS p_werks TYPE werks_d.

*--------------------------------------------------------------------*
* Native SQL Begin
*--------------------------------------------------------------------*
EXEC SQL PERFORMING loop_and_write_output.
  SELECT ebeln, ebelp, werks
    INTO :wa_ekpo
    FROM ekpo
    WHERE werks = :p_werks
ENDEXEC.
*--------------------------------------------------------------------*
* Native SQL End
*--------------------------------------------------------------------*

* Subroutine called for each row fetched by the Native SQL loop
FORM loop_and_write_output.
  WRITE: / wa_ekpo-ebeln, wa_ekpo-ebelp, wa_ekpo-werks.
ENDFORM.

Let us see some output.

Try putting some wrong syntax between EXEC SQL and ENDEXEC. The syntax checker will not catch it and your program will activate successfully, but it might dump at runtime. Try it yourself and have fun.

Open SQL, CDS or AMDP, which Code to Data Technique to use?

Yes, all of these are used for fetching data from the database, but we have to use the appropriate tool based on the requirement. Remember, if a needle (read SQL) can do your job, then why worry about a sword (read CDS / AMDP)? Similarly, if the job can be done only by the sword, then you cannot achieve the same result with the needle. After all, discretion is the better part of valour.

Below are some basic guidelines to determine the most appropriate code-to-data technique.
CDS Views
i) Only ONE result set can be returned from a CDS view.
ii) CDS views involve the least amount of coding, with the ability to be reused in multiple objects across developments. In other words, a CDS view is a database artifact that can be consumed across objects/applications.
iii) It can be consumed by the ALV with Integrated Data Access (IDA) class, which fetches the data in pages and is therefore much faster.
Interdependent SQL queries that will be used across applications (programs/objects) are the best candidates for CDS views.
AMDP
i) Independent SQL statements that are not often reused in other objects.
ii) MULTIPLE result sets are needed.
iii) Powerful features of native SQLScript, such as currency conversion and CE functions, can be leveraged.
Open SQL
i) If the SQL queries are specific to one object and won't be needed elsewhere (not reusable).
We cannot create an official guide to determine the order of preference for code pushdown, but practically it can be as follows:

1) Open SQL or CDS views
2) AMDP
What about the order of preference between Open SQL and CDS?
When it comes to reusability, a large feature set and domain-specific consumption of data models, we should go for ABAP CDS. If a CDS view and plain SQL can achieve the same functionality, go ahead with the CDS view (if one is already there in the system).

If a CDS view does not exist and you need the SQL in only one program, do not take the hassle of creating a CDS view that would never be used again in another application. Just go ahead and write your Open SQL.

Please note: both Open SQL and CDS are OPEN to any underlying database, i.e. they are platform independent, and therefore they are the first choice.
Also Read: Are Native SQL and Open SQL Competitors?
If you still doubt the justification and explanation above, the flow chart from SAP will help you make a better judgment with confidence.
Before we close: as mentioned in our earlier article New Age Open SQL ABAP 740, at the end of the day, whichever one works best for your project, team and application, use it. The end user will not see any difference in usability or result. It is all about maintenance and the knowledge of your technical team members.
Expose CDS Views as OData Service through Annotation

If you are following our series on SAP ABAP on HANA, then you are already familiar with CDS views. If not, please check our HANA ABAP Part IV, where we introduced Core Data Services, and Part V, where we took a deep dive into CDS views. Also, if you have been taking advantage of our SAP Netweaver Gateway and OData Services tutorial series, you know that SEGW is the t-code to create OData projects and eventually publish an OData service.
But would you not be surprised if we said you can create your OData project without going to the SEGW transaction? Today, we will show you how you can expose a CDS view as an OData service with just an annotation (i.e. a line of DDL code).
Introduction
We have substantially explored CDS views and their major functionalities. CDS provides another magical strength: it lets users expose a view as an OData service. The conventional way is to create a service in SEGW by importing the view you created.
This article presents a technique to expose a view as a gateway service just by maintaining one cool annotation. No need to create the service through SEGW. Sounds amazing? Let's see how we can achieve that.
Technical Environment
For the CDS views we have used Eclipse Luna.
OData version 2 has been used for the gateway application.
Step I :
Create a view with a left outer join between the tables VBAP and MARA. We have aliased VBAP as soitem and MARA as prod. A left outer join between the two allows you to select any fields from these two tables. For simplicity, we took only the fields mentioned in the key.
Fig.1 - Create first view
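The screenshot itself is not reproduced here. A minimal sketch of what such a DDL source could look like, following the description above (the view name ZTEST_ASSOC_VIEW1, the SQL view name and the selected fields are assumptions):

@AbapCatalog.sqlViewName: 'ZTESTVIEW1'
@EndUserText.label: 'Sales item with product data'
define view ZTEST_ASSOC_VIEW1
  as select from vbap as soitem
    left outer join mara as prod
      on soitem.matnr = prod.matnr
{
  key soitem.vbeln,
  key soitem.posnr,
      soitem.matnr,
      prod.mtart
}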

Step II :
Create a second view with an association. Associations in CDS views are similar to associations in Gateway: you create an association to conceptually join, or associate, one data source with a target data source on a given condition. If data sources can be envisaged as the entities of an OData service, then associations conceptually join two entities.
Fig.2 - Create view with association and OData annotation

Take special note of the annotation @OData.publish: true. This is the magic spell of our article today.
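Again, the screenshot is not reproduced here. A minimal sketch of the second view, assuming it selects from VBAK and associates the first view on vbeln (view and element names are illustrative; the decisive part is the @OData.publish annotation):

@AbapCatalog.sqlViewName: 'ZTESTVIEW2'
@EndUserText.label: 'Sales header with item association'
@OData.publish: true
define view ZTEST_ASSOC_VIEW2
  as select from vbak as header
  association [0..*] to ZTEST_ASSOC_VIEW1 as _item
    on $projection.vbeln = _item.vbeln
{
  key header.vbeln,
      header.erdat,
      header.kunnr,
      -- expose the association so it becomes a navigation in the OData service
      _item
}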

Step III :
Now our views are ready. With the DDL views we should be able to see data from the header table VBAK, the item table VBAP and the product table MARA.
Fig.3- DDLS view
Fig.3- Data from the view

Step IV :
Note that once you activate the view, you will see an icon beside the @OData.publish: true annotation which says that you need to register the service through /IWFND/MAINT_SERVICE.
Fig.4- OData Exposure in View

Step V :
Now, as instructed, go to transaction /IWFND/MAINT_SERVICE in the gateway system to register the service created through CDS.
Fig.5 - Find service in /IWFND/MAINT_SERVICE

Step VI :
Once the service is found, click on it to register it and save it in the appropriate package. Note that we have not used SEGW to create any service; this service was generated automatically because of the OData annotation we maintained.

Fig.6- Register Service

Step VII :
Now test your service through the /IWFND/GW_CLIENT transaction using a proper OData query. Note that for navigation, unlike in a usual gateway project, we use to_<association name> in the query to navigate to the second data set. Since we created vbeln as the association condition in our ZTEST_ASSOC_VIEW2, its value needs to be passed in the OData query for data fetching.
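For example, a query against the generated service might look roughly like the following (the service name, key value and navigation property name are assumptions based on the to_<association name> pattern mentioned above):

/sap/opu/odata/sap/ZTEST_ASSOC_VIEW2_CDS/ZTEST_ASSOC_VIEW2('0000012345')/to_item

or, to read the header together with its items in one call:

/sap/opu/odata/sap/ZTEST_ASSOC_VIEW2_CDS/ZTEST_ASSOC_VIEW2('0000012345')?$expand=to_item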

Fig.7- Test Gateway Data Fetch


Limitations
Please be informed that this service only provides the GET operation. No other CRUD operations can be performed with this CDS view OData exposure.

Usually, CDS views are created for fetching data (GET operations); therefore, even with the above limitation, this method of exposing CDS views as OData services is very helpful. It also shows the power of annotations (the new SQL) in Core Data Services.

HANAtization

HANA has been the buzzword for quite some time, and clients big and small will eventually move to HANA, tomorrow if not today. So, being ABAPers, we need to be ready to accept the change and the challenges it brings. With in-memory computing, the traditional checklist of dos and don'ts becomes partly redundant, and we need to bring ourselves up to speed with the new checklist. What was not advised in the pre-HANA era might be the norm now. Technically there is not much change, but ABAPers still need to make a conscious effort not to program with the traditional mindset. If we are not careful, we might not be able to harness the full power of the speed beast called HANA. Worse, we might even witness a negative impact on speed because of a wrong implementation of custom code on HANA.

Why SAP HANA? What ABAP developers need to understand and learn?

Gone are the days when an ABAP query would take a long time to execute in SAP due to a great volume of data, and ABAP developers had to extract these huge volumes of data from the database into the application layer and then process and manipulate the data in code. Developers were instructed to avoid joining multiple tables in the database, to restrict selections to key fields, and to avoid calculations during the SELECT; all data calculations were done at the application level after the data selection. Sometimes all the data could not be selected at once due to limits on the volume of data, and the developer needed cursor processing to break the data into packages, update the internal table for the output display, and then select and process the next package. Also, performance tuning was a major activity to minimize execution time wherever a large volume of data was involved.

Now, with the change from the traditional SAP database to SAP HANA, one needs to understand how the previous ABAP development standards take a U-turn, and why many checks followed previously are irrelevant now. For this, one needs to understand the basic SAP HANA architecture for better coding practices.

SAP HANA is an in-memory data platform that can be deployed on premise or on demand. SAP HANA can make full use of the capabilities of current hardware to increase application performance, reduce the cost of ownership, and enable new scenarios and applications that were not previously possible.

So what is the key feature of this HANA database that brought a change in the coding paradigm?
It is the columnar database structure and the in-memory processing that have changed the mindset behind basic ABAP coding. Some of the earlier data selection standards have changed, and some existing ones have become more pronounced.

Some of the basic standards to be followed are:

- SELECT * should be replaced with a SELECT listing specific field names. This was already applicable earlier for performance tuning, but with the column-based storage it becomes even more important.
- SELECT followed by CHECK statements should be avoided. This too was applicable earlier, but is now more apparent.
- While selecting data, do the maximum filtering in the WHERE clause. Earlier, NE (inequality) conditions were avoided; now NE filtering is also advised. With the columnar database every column acts like an index, so no secondary index creation is required to minimize execution time. Cursor processing is also not required, and DELETE after SELECT becomes redundant since almost all filtering can be done in one go.
- Apply aggregate functions like SUM, COUNT, AVG etc. in the SELECT itself and group the rows using GROUP BY.
- Instead of sorting after data selection as before, use ORDER BY with the fields required for sorting.
- Conditional logic such as CASE expressions can be applied directly in the SELECT.
- Proper joins between tables are required, to avoid unnecessary SELECTs followed by LOOP and READ TABLE statements.

So basically, the above points imply that maximum selection and calculation can be done in one go, during a single SELECT, instead of SELECT, SELECT ... FOR ALL ENTRIES, LOOP, calculations like summation, conditions like IF or CASE, and APPEND to an internal table for the final display. The lines of code get reduced, but ABAP developers need to be more vigilant since more work is being clubbed into one SELECT. Earlier, each ABAP statement could be debugged to understand issues or solve defects; now one needs to be more conscious of the commands being used and understand their implications.

Some more points, which may apply to specific requirements, are listed below.

- Since it is now an in-memory database, table buffering is not required, implying that BYPASSING BUFFER is irrelevant now.
- Database HINTs are to be avoided.
- Cluster tables are not applicable any more, so tables such as BSEG and MSEG should be treated as transparent tables.
- S/4HANA brings in new tables and replaces some of the previous ones, making the tables in each functional area more structured, like the ACDOCA table in the finance area. This knowledge needs to percolate to the development layer.

Now, when any system moves to a HANA database, one must be curious as to what needs to be done and checked to HANAtize the code.
Basically, if the above points are followed and the statements are changed accordingly, previous ABAP code can be converted to HANA-ready code.

Some more points which the new technology offers:

- SAP introduced new Open SQL statements following the code-to-data (top-down) approach. ABAP developers should learn these practices and apply them when coding on a HANA DB.
- Where common logic is applicable across project deliverables, CDS views and AMDP procedures should be created instead of the earlier practice of subroutines in common includes.

Some examples of code changes for the HANA database, or HANAtization:

i. SELECT *:

Before / After (screenshots in the original; an illustrative sketch follows):
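A minimal sketch of the idea, with illustrative table, field and variable names:

* Before: full-width rows are transferred from the database
SELECT * FROM vbak INTO TABLE @DATA(lt_vbak_all)
  WHERE erdat = @sy-datum.

* After: only the columns that are actually needed are transferred
SELECT vbeln, erdat, kunnr
  FROM vbak
  INTO TABLE @DATA(lt_vbak)
  WHERE erdat = @sy-datum.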

ii. SORT:

Before / After (screenshots in the original; an illustrative sketch follows):
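A minimal sketch of the idea, with illustrative names:

* Before: fetch first, then sort on the application server
SELECT vbeln, erdat, kunnr
  FROM vbak
  INTO TABLE @DATA(lt_vbak)
  WHERE erdat = @sy-datum.
SORT lt_vbak BY kunnr vbeln.

* After: let the database return the rows already sorted
SELECT vbeln, erdat, kunnr
  FROM vbak
  INTO TABLE @DATA(lt_vbak_sorted)
  WHERE erdat = @sy-datum
  ORDER BY kunnr, vbeln.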
iii. DELETE:

Before / After (screenshots in the original; an illustrative sketch follows):
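A minimal sketch of the idea, with illustrative names:

* Before: over-select, then throw rows away in ABAP
SELECT vbeln, erdat, auart
  FROM vbak
  INTO TABLE @DATA(lt_vbak)
  WHERE erdat = @sy-datum.
DELETE lt_vbak WHERE auart <> 'TA'.

* After: push the filter into the WHERE clause
SELECT vbeln, erdat, auart
  FROM vbak
  INTO TABLE @DATA(lt_vbak_filtered)
  WHERE erdat = @sy-datum
    AND auart = 'TA'.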

iv. JOIN:

Before / After (screenshots in the original; an illustrative sketch follows):
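A minimal sketch of the idea, with illustrative names:

* Before: separate SELECTs combined with FOR ALL ENTRIES / LOOP / READ TABLE
SELECT vbeln, kunnr
  FROM vbak
  INTO TABLE @DATA(lt_vbak)
  WHERE erdat = @sy-datum.
IF lt_vbak IS NOT INITIAL.
  SELECT vbeln, posnr, matnr
    FROM vbap
    INTO TABLE @DATA(lt_vbap)
    FOR ALL ENTRIES IN @lt_vbak
    WHERE vbeln = @lt_vbak-vbeln.
ENDIF.

* After: a single INNER JOIN executed by the database
SELECT a~vbeln, a~kunnr, b~posnr, b~matnr
  FROM vbak AS a
  INNER JOIN vbap AS b ON b~vbeln = a~vbeln
  INTO TABLE @DATA(lt_so_items)
  WHERE a~erdat = @sy-datum.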

v. New Open SQL (no separate data declaration required): inline data declarations are so convenient. An illustrative sketch follows.
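A minimal sketch of the idea, with illustrative names:

* Before: explicit declarations are needed before they can be used
DATA: lt_mara TYPE STANDARD TABLE OF mara,
      ls_mara TYPE mara.
SELECT * FROM mara INTO TABLE lt_mara WHERE mtart = 'FERT'.
READ TABLE lt_mara INTO ls_mara INDEX 1.

* After: inline declarations created right where they are needed
SELECT matnr, mtart, matkl
  FROM mara
  INTO TABLE @DATA(lt_materials)
  WHERE mtart = 'FERT'.
READ TABLE lt_materials INTO DATA(ls_material) INDEX 1.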


These are some of the points which will be used in each and every project. They are like the alphabet of a language: we need to build complex sentences using these letters. But this is not an exhaustive list. In coming articles we will gradually put forth more points and checks which we might need to take care of. We will also introduce the new tables which replace the cluster/pool tables in S/4HANA.
