Hanacleaner Intro PDF
SAP Note 2399996 provides a tool that can help with housekeeping tasks
Then the hanacleaner can connect using the info stored in hdbuserstore:
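A key might be stored with the hdbuserstore tool roughly as follows (host, SQL port, user, password, and the key name are placeholder assumptions; the command is only echoed here, since executing it requires an SAP HANA client installation):

```shell
# Hypothetical hdbuserstore key for hanacleaner: adapt host, SQL port,
# user and password to your system before running the echoed command.
CMD='hdbuserstore SET SYSTEMKEY localhost:30013 HANACLEANER_USER Secret1'
echo "$CMD"
```

hanacleaner can then be started with -k SYSTEMKEY instead of passing credentials on the command line.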
For more about the SAP HANACleaner see SAP Note 2399996
SAP Note 2400024 provides administration suggestions, e.g. recommendations about the hanacleaner
© 2017 SAP SE or an SAP affiliate company. All rights reserved. Customer 2
SAP HANA Housekeeping
HANACleaner – needs privileges
HANACleaner – reports missing privileges
E.g. here the user A2 is missing the system privilege CATALOG READ:
E.g. here the user A2 is missing the system privilege TRACE ADMIN:
HANACleaner – backup catalog cleanup (1/2)
For cleaning up the backup catalog (and possibly also the backups themselves) hanacleaner has the following input flags:
Flag | Unit | Details / Explanation | Default
-be | number | minimum number of retained backup entries in the catalog: this number of successful data backup entries will remain in the backup catalog | -1 (not used)
-bd | days | minimum retained days of backup entries in the catalog: the youngest successful data backup entry that is older than this number of days becomes the oldest entry kept in the backup catalog | -1 (not used)
-bb | true/false | switch to also delete backups: if set to true the backup files corresponding to the backup entries are also deleted | false
-bo | true/false | output the backup catalog: if set to true the backup catalog is printed before and after the cleanup | false
-br | true/false | output the deleted entries: if set to true the deleted backup entries are printed after the cleanup | false
Example:
Here backup catalog entries (i.e. not the backups themselves) older than 42 days are deleted, but at least 5
backup entries are kept, and the deleted backup entries are printed out
Example:
Here backup catalog entries (i.e. not the backups themselves) older than 30 days are deleted, but at least 5
backup entries are kept, and the deleted backup entries are printed out:
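As a sketch, a command line matching this description might look as follows (the hanacleaner.py location and the key name are assumptions; the command is only echoed, since running it needs a live HANA system):

```shell
# Keep at least 5 successful backup entries (-be), delete catalog entries
# older than 30 days (-bd) and print the deleted entries (-br).
CMD='python hanacleaner.py -be 5 -bd 30 -br true -k SYSTEMKEY'
echo "$CMD"
```

Adding -bb true would also delete the backup files themselves.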
For cleaning up the traces hanacleaner has the following input flags
Flag | Unit | Details / Explanation | Default
-tc | days | minimum retained days for trace file content: trace file content older than this number of days is removed | -1 (not used)
-tf | days | minimum retained days for trace files: trace files that are older than this number of days are removed | -1 (not used)
-to | true/false | output trace files: displays trace files before and after the cleanup | false
-td | true/false | output the deleted trace files: displays the trace files that were deleted | false
Example:
Here trace file content older than 42 days is removed and trace files older than 42 days are deleted.
Example:
Here trace file content older than 200 days is removed, trace files older than 200 days are deleted, and the removed trace files are displayed:
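A sketch of this invocation (script path and key name are assumptions; the command line is only echoed, not executed):

```shell
# Remove trace file content older than 200 days (-tc), delete trace files
# older than 200 days (-tf) and display the deleted files (-td).
CMD='python hanacleaner.py -tc 200 -tf 200 -td true -k SYSTEMKEY'
echo "$CMD"
```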
-dr | days | retention days for dump files: manually created dump files (a.k.a. full system dumps and runtime dumps) that are older than this number of days are removed | -1 (not used)
Example:
Here dump files older than 1 day are deleted
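The corresponding command line might look like this (a sketch; the command is echoed rather than executed, since it needs a live system):

```shell
# Delete manually created dump files older than 1 day (-dr).
CMD='python hanacleaner.py -dr 1 -k SYSTEMKEY'
echo "$CMD"
```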
Any directory can be cleaned of files whose names contain specified words:
Flag | Unit | Details / Explanation | Default
-gr | days | retention days for any general file: files in the directories specified with -gd whose names include the words specified with -gw are only kept for this number of days (Note: -gd and -gw can also be lists of the same length, with comma as delimiter) | -1 (not used)
-gd | directories | a comma-separated list with full paths of directories with files to be deleted according to -gr (each entry pairs with an entry in -gw) | "" (not used)
-gw | filename parts | a comma-separated list with words that files should have in their names to be deleted according to -gr (each entry pairs with an entry in -gd) | "" (not used)
Example:
Here files with CDPOS1 or hanasitter_output in their file names, in the folders /tmp/tmp_analysis/ and /tmp/hanasitter_output, older than one day are deleted:
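A sketch of this invocation (paths and key name as in the description; the command is only echoed, not executed):

```shell
# Delete files older than 1 day (-gr) whose names contain CDPOS1 in
# /tmp/tmp_analysis/ and hanasitter_output in /tmp/hanasitter_output;
# the -gd and -gw lists pair up entry by entry.
CMD='python hanacleaner.py -gr 1 -gd /tmp/tmp_analysis/,/tmp/hanasitter_output -gw CDPOS1,hanasitter_output -k SYSTEMKEY'
echo "$CMD"
```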
For compressing and renaming backup logs and backint logs hanacleaner
has the following input flags
Flag | Unit | Details / Explanation | Default
-zb | mb | backup logs compression size limit: if any backup.log or backint.log file is bigger than this size limit, it is compressed and renamed | -1 (not used)
-zp | path | zip path: specifies the folder (and all subfolders) where to look for the backup.log and backint.log files | the directory specified by the alias cdtrace
-zl | true/false | zip links: specifies if symbolic links should be followed when searching for backup logs | false
Example:
Here any backup.log or backint.log found in the trace folder that is larger than 50 MB will be compressed and renamed:
Example:
Here any backup.log or backint.log found in the trace folder and that is larger than 20 MB will be
compressed and renamed:
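The second variant might be invoked as follows (a sketch; echoed rather than executed):

```shell
# Compress and rename any backup.log or backint.log in the trace
# directory that is larger than 20 MB (-zb).
CMD='python hanacleaner.py -zb 20 -k SYSTEMKEY'
echo "$CMD"
```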
For deleting old alerts from the alert table (filled by the statistics service)
hanacleaner has the following input flags
Flag | Unit | Details / Explanation | Default
-ar | days | minimum number of retained days of the alerts: minimum retained age of statistics server alerts | -1 (not used)
-ao | true/false | output alerts: if true, all alerts will be displayed before and after the cleanup (if there are more than 10 thousand alerts, hanacleaner will skip this output) | false
-ad | true/false | output deleted alerts: if true, the deleted alerts will be displayed after the cleanup (if there are more than 10 thousand alerts, hanacleaner will skip this output) | false
Example:
Here alerts older than 5 days are removed from the statistics server alert table:
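A matching command-line sketch (key name is an assumption; the command is only echoed):

```shell
# Delete statistics server alerts older than 5 days (-ar).
CMD='python hanacleaner.py -ar 5 -k SYSTEMKEY'
echo "$CMD"
```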
For reclaiming free log segments hanacleaner has the following input flag
Flag | Unit | Details / Explanation | Default
-lr | number | maximum number of free log segments per service: if there are more free log segments for a service than this number, then ALTER SYSTEM RECLAIM LOG will be executed | -1 (not used)
Example:
Here the ALTER SYSTEM RECLAIM LOG command is executed since there was a HANA process that had more than one free log segment:
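A sketch of such an invocation (the threshold of 1 is an illustrative assumption; the command is only echoed):

```shell
# Run ALTER SYSTEM RECLAIM LOG for any service with more than 1 free
# log segment (-lr).
CMD='python hanacleaner.py -lr 1 -k SYSTEMKEY'
echo "$CMD"
```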
To clear the audit log database table hanacleaner has the following input flag
Flag | Unit | Details / Explanation | Default
-ur | days | retention time of the audit log table: if the audit log database table has audit log entries older than this number of days, ALTER SYSTEM CLEAR AUDIT LOG UNTIL will be executed | -1 (not used)
Example:
Here the ALTER SYSTEM CLEAR AUDIT LOG UNTIL is executed and 29 entries in the audit log table were
removed:
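A possible invocation (the retention value of 42 days is illustrative, since the slide does not state the one actually used; the command is only echoed):

```shell
# Clear audit log database table entries older than 42 days (-ur).
CMD='python hanacleaner.py -ur 42 -k SYSTEMKEY'
echo "$CMD"
```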
Old transactional lock history entries with unknown object names can be removed by hanacleaner with the following input flag:
Flag | Unit | Details / Explanation | Default
-kr | days | min retained unknown object lock days: min age (today not included) of retained object lock entries with OBJECT_NAME = '(unknown)', see SAP Note 2147247 | -1 (not used)
Example:
Here all transactional lock history entries with OBJECT_NAME = ‘(unknown)’ are removed:
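A sketch of such a call (the value 1 is an illustrative assumption; the command is only echoed):

```shell
# Remove '(unknown)' object lock history entries older than 1 day (-kr).
CMD='python hanacleaner.py -kr 1 -k SYSTEMKEY'
echo "$CMD"
```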
Object history can be cleaned (as per SAP Note 2479702) using these flags:
Flag | Unit | Details / Explanation | Default
-om | mb | object history table max size: if the table _SYS_REPO.OBJECT_HISTORY is bigger than this threshold, the table will be cleaned up according to SAP Note 2479702 | -1 (not used)
-oo | true/false | output cleaned memory from object table: displays how much memory was cleaned up from the object history table | -1 (not used)
Example:
In this example there was nothing to clean up from the object history:
Unused space in the disk volumes can be reclaimed with the flag -fl:
Flag | Unit | Details / Explanation | Default
-fl | % | fragmentation limit: maximum fragmentation of data volume files, for any service, before defragmentation of that service is started with ALTER SYSTEM RECLAIM DATAVOLUME '<host>:<port>' 120 DEFRAGMENT | -1 (not used)
-fo | true/false | output fragmentation: displays data volume statistics before and after defragmentation | false
Example:
Here defragmentation will be done of all ports if fragmentation is more than 20% for any port:
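A matching command-line sketch (adding -fo true to also print the statistics is an assumption; the command is only echoed):

```shell
# Defragment the data volume of any service whose fragmentation
# exceeds 20 % (-fl) and print volume statistics before/after (-fo).
CMD='python hanacleaner.py -fl 20 -fo true -k SYSTEMKEY'
echo "$CMD"
```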
Some column store tables might need their compression re-optimized. This can be automated with the following flags:
Flag | Details / Explanation | Default
1. Both of the following flags, -cc and -ce, must be > 0 to force compression optimization on tables that were never compression re-optimized (i.e. last_compressed_record_count = 0):
-cc | max allowed raw main records: if the number of raw main rows is larger, the table could be compression optimized, provided compressed rows = 0 and -ce also indicates it (e.g. 10000000) | -1 (not used)
-ce [GB] | max allowed estimated size: if the estimated size is larger, the table could be compression optimized, provided compressed rows = 0 and -cc also indicates it (e.g. 1) | -1 (not used)
2. All three of the following flags, -cr, -cs, and -cd, must be > 0 to force compression optimization on tables with columns with compression type 'DEFAULT' (i.e. no additional compression algorithm in main):
-cr | max allowed rows: if a column has more rows and compression = 'DEFAULT', the table could be re-compressed, provided -cs and -cd also indicate it (e.g. 10000000) | -1 (not used)
-cs [MB] | max allowed size: if a column is larger and compression = 'DEFAULT', the table could be re-compressed, provided -cr and -cd also indicate it (e.g. 500) | -1 (not used)
-cd [%] | min allowed distinct count: if a column has a smaller distinct row quota, the table could be re-compressed, provided -cr and -cs also indicate it (e.g. 5) | -1 (not used)
3. Both of the following flags, -cq and -cu, must be > 0 to force compression optimization on tables whose UDIV quota is too large, i.e. #UDIVs/(#raw main + #raw delta):
-cq [%] | max allowed UDIV quota: if a column's UDIV quota is larger, the table could be re-compressed, provided -cu also indicates it (e.g. 150) | -1 (not used)
-cu | max allowed UDIVs: if a column has more UDIVs, the table is compressed, provided -cq also indicates it (e.g. 10000000) | -1 (not used)
4. Flag -cb must be > 0 to force compression optimization on tables with columns with SPARSE (< 122.02) or PREFIXED compression and a BLOCK index:
-cb | max allowed rows: if there are more rows, the table is compressed, provided it has BLOCK and PREFIXED columns (e.g. 100000) | -1 (not used)
The following three flags are general; they control all four compression optimization possibilities (1.-4.) above:
-cp [true/false] | per partition: switch to consider the above flags per partition | false
-cm [true/false] | merge before: switch to perform a delta merge before the compression optimization | false
-co [true/false] | output: switch to print out the tables selected for compression optimization | false
Example: Here (1.) tables that were never compression optimized, with more than 10 million raw records and more than 1 GB of estimated size, or (2.) tables with columns only default compressed, with more than 10 million rows and a size of more than 500 MB, or (3.) tables with a UDIV quota larger than 150% and more than 10 million UDIVs, will be compression re-optimized:
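The thresholds of this example translate into a command line roughly as follows (a sketch; only echoed, not executed):

```shell
# Force compression re-optimization using the example thresholds:
# case 1. (-cc/-ce), case 2. (-cr/-cs/-cd) and case 3. (-cq/-cu).
CMD='python hanacleaner.py -cc 10000000 -ce 1 -cr 10000000 -cs 500 -cd 5 -cq 150 -cu 10000000 -k SYSTEMKEY'
echo "$CMD"
```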
Events can be acknowledged and handled (in case of unhandled events) with
the following input flags
Flag | Unit | Details / Explanation | Default
-eh | days | minimum retained days for handled events: handled events that are older than this number of days will be acknowledged and then deleted | -1 (not used)
-eu | days | minimum retained days for unhandled events: unhandled events that are older than this number of days will be handled, acknowledged and then deleted | -1 (not used)
Example:
Here handled events older than 5 days and unhandled events older than 34 days were deleted.
It turned out that 113 unhandled events were deleted:
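A sketch of the matching invocation (only echoed, not executed):

```shell
# Acknowledge and delete handled events older than 5 days (-eh) and
# unhandled events older than 34 days (-eu).
CMD='python hanacleaner.py -eh 5 -eu 34 -k SYSTEMKEY'
echo "$CMD"
```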
Smart Data Access Virtual Tables can get their statistics created, according to
SAP Note 1872652, with the -vs flag
Flag | Unit | Details / Explanation | Default
-vs | true/false | create statistics for virtual tables: switch to create optimization statistics for those virtual tables that are missing statistics (Note: could cause expensive operations!) | false
Example:
Here statistics optimization was created for 3 out of 4 virtual tables (the 4th already had statistics):
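A command-line sketch for this (only echoed, not executed):

```shell
# Create optimization statistics for virtual tables missing them (-vs).
CMD='python hanacleaner.py -vs true -k SYSTEMKEY'
echo "$CMD"
```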
To remove old inifile content history hanacleaner has the following input flag
Flag | Unit | Details / Explanation | Default
-ir | days | inifile content history retention: deletes inifile content history older than this number of days (should be more than 1 year) | -1 (not used)
Example:
With these flags it is possible to let hanacleaner print out the housekeeping
tasks without actually executing them (useful for debugging)
Flag | Unit | Details / Explanation | Default
-es | true/false | execute sql: execute all crucial housekeeping tasks (useful to turn off for investigations with -os=true) | true
-os | true/false | output sql: prints all crucial housekeeping tasks (useful for debugging with -es=false) | false
Example:
Flag | Unit | Details / Explanation | Default
-op | path | output path: full path of the folder where the hanacleaner logs are written | "" (not used)
-so | 1/0 | standard out switch: 1 = write to std out, 0 = do not write to std out | 1
Example:
Here an output folder is deleted and then automatically created again by hanacleaner, and the daily log file is written into it:
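A possible invocation (the log path and the added trace-cleanup flag are illustrative assumptions; only echoed, not executed):

```shell
# Write the daily hanacleaner log to a dedicated folder (-op) and also
# write to standard out (-so 1) while deleting old trace files (-tf).
CMD='python hanacleaner.py -tf 42 -op /tmp/hanacleaner_logs -so 1 -k SYSTEMKEY'
echo "$CMD"
```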
In an MDC system hanacleaner can clean the SystemDB and multiple tenants in one execution.
Store the DB users for the system and the tenants in hdbuserstore and list the keys with the -k flag:
Flag | Unit | Details / Explanation | Default
-k | key(s) | DB user key(s): the DB user key saved in the hdbuserstore; it can also be a list of comma-separated user keys (useful in MDC environments) | SYSTEMKEY
Example:
Here two keys are stored; one for SystemDB and one for a Tenant:
Example:
Here trace files older than 42 days are deleted from the SystemDB and from a Tenant:
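A sketch of the MDC invocation (the key names are placeholder assumptions; only echoed, not executed):

```shell
# Delete trace files older than 42 days (-tf) in the SystemDB and a
# tenant by passing both hdbuserstore keys to -k.
CMD='python hanacleaner.py -tf 42 -k SYSTEMKEY,TENANTKEY'
echo "$CMD"
```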
Example (tries to clean trace files older than 400 days again after 1 day):
The hanacleaner can be scheduled with CRON to do cleanup, e.g. once per day.
Note: hanacleaner expects the environment of <sid>adm; if CRON is used, the same environment as <sid>adm has to be provided.
Example:
In /etc/passwd it is specified what environment <sid>adm is using, here bash:
This shell script, hanacleaner.sh, provides the <sid>adm environment, with source $HOME/.bashrc
and then executes the hanacleaner command:
Then a new crontab can be created, calling this shell script, e.g. once every night at 1 o'clock:
Note: if you want to log the output to std_out set up the crontab like this:
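A hypothetical crontab entry along these lines (both paths are placeholder assumptions; the entry is only echoed here, ready to be pasted into crontab -e):

```shell
# Run the wrapper script nightly at 1 o'clock and redirect std out
# (and std err) to a file so the output is logged.
ENTRY='0 1 * * * /usr/sap/PRD/home/hanacleaner.sh > /usr/sap/PRD/home/hanacleaner_cron.out 2>&1'
echo "$ENTRY"
```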