How to reduce your HANA database size by 30%
John Appleby
SAP HANA
In-Memory Technology | in-memory | sap | sap hana | sapmentor
I didn't write enough blogs last year, and felt like I abandoned SCN a bit. Lately a few people have kindly commented that they enjoyed reading my content, which is being nicer to me than I deserve. So here's a little gift to start the year off.
(https://blogs.sap.com/2016/01/20/how-to-reduce-your-hana-database-size-by-30/, 6/11/2017)
This script is only useful if you have a HANA system that was installed with
an older revision (SPS01-07), has been upgraded a bunch of times, and
is now on a newer release (SPS08-10).
In that scenario, it's possibly the most useful thing a HANA DevOps person
will see all year. In a productive HANA system we saw the disk footprint drop
from 2.9TB to 1.89TB and the in-memory footprint drop by over 100GB. It will
also substantially decrease startup time, decrease backup time, and increase
performance.
What happens is that HANA chooses the compression type of a column-store
object when it creates it, and only occasionally re-evaluates that choice.
In older databases that have had a lot of data loaded since the initial
installation, this can mean the compression is suboptimal. In addition,
objects can become fragmented and use more disk space than is really required.
This script takes care of all that and cleans up the system. It takes some time
to run (18h in our case).
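As a hedged illustration (not part of the original script), the compression decision can be inspected and re-triggered per table. The schema and table names below are placeholders; the monitoring view and parameter are documented HANA features, but verify them against your revision:

```sql
-- Inspect the compression type HANA chose for each column of a table
-- (e.g. DEFAULT, SPARSE, RLE, CLUSTERED, INDIRECT).
SELECT COLUMN_NAME, COMPRESSION_TYPE, MEMORY_SIZE_IN_TOTAL
FROM M_CS_COLUMNS
WHERE SCHEMA_NAME = 'MYSCHEMA'   -- placeholder schema
  AND TABLE_NAME  = 'MYTABLE';   -- placeholder table

-- Ask HANA to re-evaluate and re-apply optimal compression for one table.
ALTER TABLE "MYSCHEMA"."MYTABLE"
  WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'FORCE');
```

Running the SELECT before and after the ALTER TABLE shows whether the compression type of any column actually changed.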
A few caveats (these are general best practices, but I have to point them out)!
Run this script in a QA system before production, for test purposes and
so you know how long it will take
Run it at a quiet time when data loads are not running
Ensure you have a full backup
Use this script at your own risk; like any DDL statement, it could cause
issues
Do not restart HANA during this operation
Complete a full backup after the script, and restart HANA to reclaim
memory
HOW TO RUN:
call _SYS_BIC.dba_reorg(INSERT_SCHEMA_NAME_HERE);

The procedure selects its candidates with
SELECT TABLE_NAME
FROM M_CS_TABLES
and then loops (BEGIN ... END FOR ... END) through these steps:
Recompress tables
Reorg rowstore
Trigger rowstore GC
Create savepoint
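The individual steps of that outline can be sketched with documented HANA statements. This is a hedged reconstruction, not the author's original procedure body: the statements below are standard HANA commands, but the actual loop inside dba_reorg may differ, so treat it as an illustration only ("MYSCHEMA"."MYTABLE" is a placeholder):

```sql
-- 1) Recompress tables: force a compression re-evaluation, normally
--    executed inside a loop over the tables found in M_CS_TABLES.
ALTER TABLE "MYSCHEMA"."MYTABLE"
  WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'FORCE');

-- 2) Reorg rowstore: trigger an online row store reorganization
--    (see SAP Note 1813245 for prerequisites and offline variants).
ALTER SYSTEM RECLAIM DATA SPACE;

-- 3) Trigger rowstore GC: reclaim MVCC version space.
ALTER SYSTEM RECLAIM VERSION SPACE;

-- 4) Create savepoint: persist the new state to disk.
ALTER SYSTEM SAVEPOINT;
```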
16 Comments
Marcel Scherbinek
If I get the possibility to test it, I will provide the compression factor as further input.
Thanks.
Avinash Ganne
Regards,
Avinash
Lars Breddemann
First of all: thanks for sharing your experiences with SAP HANA. They definitely add to
this community, so I am happy to have you blogging here again.
Now to the blog itself. I consider this type of blog to be selling snake oil, or the magic bullet
(or whatever you want to call the miraculous thing that improves your situation).
While you put the caveats in red letters to provide a warning, I'd say that's not really
providing enough information on the operations executed by the script.
Running it on a test/QA system will only allow a time estimate if that system has nearly
the same data and a similar load during the run.
Since some of the activities will impose locks at various levels (table locks and system-wide
savepoint locks), running it during production hours might end up in a halted
system.
Technically you could actually restart SAP HANA during any of the operations, but that
will of course require extended startup times due to the required recovery.
For the script itself to run, the user running it must have the UPDATE privilege
on the tables of the underlying schema. This should actually not be the case for normal
administrators.
Of course the script will lead to a reduction of space usage. Whether it actually reaches 30% on
a regular basis is questionable, though.
And since the script just applies a set of operations without actually analysing the current
state of the system, it really is a sledgehammer approach and reminds me a lot of the
recommendations for index rebuilds, defrags and CBO stats re-collections that were/are
so common for e.g. Oracle DBs.
Knowing the effects of such recommendations, I can see another wave of support tickets
heading for the colleagues in SAP HANA support.
Anyway, now that this little loop script has been published, users will use it forever and likely
not heed any further warnings (like this current one).
So, just in case anyone uses the script and gets in trouble: I told you so.
Jens Gleichmann
Hi Lars,
luckily there are some syntax errors in the script, so in this state it won't
work. I'm concerned that no one tested it and noticed this. So currently no wave
of support tickets will be knocking on the door. Maybe it worked in older
releases, but on revision 90+ none of my systems accepted this syntax.
A simple example:
There is no
=> just a
Another one is inside the stored procedure, but I won't correct it, because I
have the same opinion as Martin and Lars. You should check this.
Maybe it makes sense to run it after the initial migration to HANA, to achieve the
best possible compression from the beginning. I have corrected the syntax and
tested it on my test system; it takes a long time and you never know
when it will finish. That is bad in a planned maintenance window with a timetable.
Regards,
Jens
Michael Healy
I also tested this and yes, the syntax was incorrect, but I chose
not to comment so as to leave this thread buried. Now it has
resurfaced.
Hi to all,
what about SAP Note 1813245 (SAP HANA DB: Row store reorganization)?
[row_engine]
page_compaction_enable = true
page_compaction_max_pages = 1048576
Are you forcing the REORG even if the result is FALSE? In this note, the guidance is to only start a REORG if:
Row store reorganization is recommended when the allocated row store size is over 10GB and the free page ratio is over 30%.
Rev. 52 or higher
If the result of "Reorganization Recommended" is TRUE, then row store memory can be reclaimed after row store reorganization.
Thx in advance
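The quoted criterion can be checked with a quick query. This is a minimal sketch under the ASSUMPTION that the M_RS_MEMORY monitoring view exposes ALLOCATED_SIZE and FREE_SIZE columns; verify the exact column names against your revision's documentation and SAP Note 1813245 before using it:

```sql
-- Hedged sketch: does a row store reorg look worthwhile on any host?
-- ASSUMPTION: ALLOCATED_SIZE / FREE_SIZE exist in M_RS_MEMORY.
SELECT HOST,
       SUM(ALLOCATED_SIZE) AS ALLOCATED_BYTES,
       SUM(FREE_SIZE)      AS FREE_BYTES,
       SUM(FREE_SIZE) / SUM(ALLOCATED_SIZE) AS FREE_RATIO
FROM M_RS_MEMORY
GROUP BY HOST
HAVING SUM(ALLOCATED_SIZE) > 10 * 1024 * 1024 * 1024  -- over 10GB allocated
   AND SUM(FREE_SIZE) / SUM(ALLOCATED_SIZE) > 0.30;   -- over 30% free pages
```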
Nuno
Orel Stringa
Hi,
the above should not be news to anyone who is familiar with the SAP FAQ notes on
compression and garbage collection (these two notes should be required reading).
It is a well-known fact that compression evaluation, and therefore optimization, falls behind.
It'd be great if someone from SAP could weigh in and explain why.
It is important to note that the gains from forcing compression optimization are short-lived
in the case of frequently changing tables.
In one of my tests, a frequently changing 210 GB table shrank to 160 GB
after forcing compression optimization. However, it bounced back to 200+ GB within 5
days.
Thanks,
Orel
Lars Breddemann
The obvious reason for not permanently trying to use the best possible
compression is of course: that's computationally expensive.
It takes time to do and uses resources that could be used for business
transactions instead.
Also, it's not possible to determine up front if, and how large, a benefit would
actually turn out to be.
Besides, it's not so much the pure change of data volume that leads to
compression methods no longer being optimal. It's the change of data
distribution in each column of a table, and relative to each other, that makes
the difference here.
Martin Frauendorfer
If you want to use a targeted approach to reduce your database footprint (in
terms of memory and disk), rather than reorganizing everything in an
unconditional manner, you can use mini checks like the following as a
starting point (SAP Note 1999993):
Check ID 565: Tables > 10 Mio. rows and > 200 % UDIV rows
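A hedged sketch of what such a mini check boils down to, assuming the MAX_UDIV and RAW_RECORD_COUNT_IN_MAIN columns of M_CS_TABLES (check the actual check definition in SAP Note 1999993 before relying on this):

```sql
-- Tables whose update/delete versions (UDIVs) far exceed the visible
-- main-store rows are candidates for targeted compression optimization.
SELECT SCHEMA_NAME, TABLE_NAME,
       RAW_RECORD_COUNT_IN_MAIN, MAX_UDIV
FROM M_CS_TABLES
WHERE RAW_RECORD_COUNT_IN_MAIN > 10000000       -- > 10 Mio. rows
  AND MAX_UDIV > 2 * RAW_RECORD_COUNT_IN_MAIN   -- > 200 % UDIV rows
ORDER BY MAX_UDIV DESC;
```

This narrows the sledgehammer down to the handful of tables where a forced recompression is actually likely to pay off.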
SAP Note 1999997 provides more advice about reducing the SAP HANA
memory consumption.
Hi to all,
I agree with Martin. The mini checks and Note 1813245 are
a good way to shrink and maintain the DB. The last time
I used the memory reorg, I was able to decrease the size by more
than 50%. I will share my latest tests next week.
cheers
Nuno
Kasivindhkumar Shanmuganathan
We recently upgraded our ECC, SRM and BI systems to SP9. Though we got very good
compression in ECC (3.9TB to 2.6TB), for SRM and BI the compression didn't happen at
all. We did not run any script to achieve this in ECC.
BI and SRM have DB sizes of 650GB and 500GB respectively, where we didn't see
any compression. Any reasons for it?
Hi Kasi,
The mini checks can help you understand where the problem is, or you can run
the HANA_Tables_ColumnStore_TablesWithoutCompressionOptimization
statement (SAP Note 1969700: SQL statement collection for SAP HANA).
cheers
Pavan Gunda
Jens Gleichmann
but this is not related to the script; it is down to the fact that every time you
execute an ALTER SYSTEM RECLAIM DATAVOLUME DEFRAGMENT
you have to take care of your secondary side.
Regards,
Jens
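For reference, the defragmentation statement discussed above takes a target fill percentage. A minimal example, where 120 (a commonly cited value, not a recommendation) means shrinking the data volume to roughly 120% of its used payload:

```sql
-- Shrink the persistence data volume to ~120% of its payload size.
-- On a system replication setup, plan the corresponding handling of
-- the secondary site as well, as the comment above points out.
ALTER SYSTEM RECLAIM DATAVOLUME 120 DEFRAGMENT;
```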