Bastien_Laugiero

Hi everyone,

Starting with Qlik Sense September 2017, we introduced the Qlik Centralized Logging (QLogs) database. This database is managed by the Qlik Logging Service.

Databases still need space though, so deployments that have been using the QLogs database for an extended period, or larger and more active deployments, may find themselves accumulating large amounts of data. At this stage, your QLogs database might be consuming several GB of disk space.

This situation can be a major problem for your System Administrator. But don't worry: there are ways around this (housekeeping), and I'll go over how to prevent it from happening in the first place, as well as how to slim down an existing database.

Here is what we will cover today:

  1. Introduction
  2. Am I using Qlik Centralized Logging?
  3. How much space does it take on disk?
  4. How can I better control the usage on disk?
  5. Help! What to do if it consumes all or almost all the available disk space?

 

Introduction

The Qlik Logging Service centralizes the logging by collecting all the messages and inserting them into a PostgreSQL database named QLogs.

In addition to collecting the Qlik Sense services logs, it also consumes the Windows Event logs and the Performance Counters, such as CPU, memory, disk latency and queue, network, etc.

Right now, it is most commonly used as the default data source for the Monitoring Applications reload. If Centralized Logging is not enabled, the monitoring applications use the log files as their data source instead. More details are available in my previous post All you need to know about deploying the Qlik Sense Monitoring Applications.

 

Am I using the DB or flat files?

To verify whether Qlik Centralized Logging is enabled in your environment, you can run the following command from the central node.

Microsoft Windows [Version 10.0.17134.619]
(c) 2018 Microsoft Corporation. All rights reserved.

C:\Windows\system32> cd C:\Program Files\Qlik\Sense\Logging

C:\Program Files\Qlik\Sense\Logging> Qlik.Logging.Service.exe validate
        archive_age_in_hours: 720
        last_forced_db_trim: 2019-04-02 16:32:52.107208+02
        purge_age_in_hours: 1440
        version: 0.5

        CentralizedLoggingEnabled: True
        LoggingToFileEnabled: True
        MaximumDatabaseSizeInGB: Max db size not enforced
        DB Host: QlikServer1.domain.local
        DB Name: QLogs
        DB Writer User: qlogs_writer
        DB Reader User: qlogs_reader
        DB Port: 4432

Reference and additional information are available in the knowledge article: How To Verify Qlik Sense Is Running With Centralized Logging Service

If you wish to set up Qlik Centralized Logging, you can refer to the help.qlik.com topic Qlik Logging Service.

 

How much space does it take on disk?

To check how much disk space each database consumes, you can use the following SQL query.

I usually use pgAdmin 4 to run it: simply right-click on one of the databases and select Query Tool.

SELECT d.datname AS Name,  pg_catalog.pg_get_userbyid(d.datdba) AS Owner,
    CASE WHEN pg_catalog.has_database_privilege(d.datname, 'CONNECT')
        THEN pg_catalog.pg_size_pretty(pg_catalog.pg_database_size(d.datname))
        ELSE 'No Access'
    END AS SIZE
FROM pg_catalog.pg_database d
    ORDER BY
    CASE WHEN pg_catalog.has_database_privilege(d.datname, 'CONNECT')
        THEN pg_catalog.pg_database_size(d.datname)
        ELSE NULL
    END DESC -- nulls first
    LIMIT 20;

 Reference: https://wiki.postgresql.org/wiki/Disk_Usage

To give you an example, below is the output from a single-node environment with no usage, DEBUG log level enabled on all services, and logs collected over about a week.

        name        |         owner         |   size
--------------------+-----------------------+----------
 QLogs              | qlogs_writer          | 548 MB
 QSR                | postgres              | 40 MB
 SenseService       | qliksenserepository   | 7223 kB
 QSMQ               | qliksenserepository   | 7183 kB
 postgres           | postgres              | 7055 kB
 template0          | postgres              | 6945 kB
 template1          | postgres              | 6945 kB
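pg_size_pretty() makes the output readable but mixes units (kB, MB), which makes comparing or totalling sizes awkward. Here is a small helper, not part of any Qlik tooling, that converts the pretty strings back to bytes (assuming PostgreSQL's binary convention of 1 kB = 1024 bytes):

```python
# Factors follow PostgreSQL's binary convention (1 kB = 1024 bytes).
UNIT_FACTORS = {"bytes": 1, "kB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def pretty_size_to_bytes(pretty: str) -> int:
    """Turn a pg_size_pretty() string such as '548 MB' into a byte count."""
    value, unit = pretty.strip().split()
    return int(float(value) * UNIT_FACTORS[unit])

# Sample sizes taken from the table above.
sizes = {"QLogs": "548 MB", "QSR": "40 MB", "SenseService": "7223 kB"}
largest = max(sizes, key=lambda name: pretty_size_to_bytes(sizes[name]))
print(largest)  # QLogs
```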

It is also possible to narrow down which node(s) or even service(s) are logging the highest number of entries using the following query.

SELECT *
FROM (
   SELECT d.process_host,d.process_name,d.entry_level,COUNT(id) AS message_volume
   FROM (
       SELECT id,process_host,process_name,entry_level
       FROM public.log_entries
       UNION ALL
       SELECT id,process_host,process_name,entry_level
       FROM public.archive_entries
   ) AS d
   GROUP BY d.process_host, d.process_name, d.entry_level
) AS d
ORDER BY d.message_volume DESC;

Output:

 process_host | process_name | entry_level | message_volume
--------------+--------------+-------------+----------------
 QlikServer1  | proxy        | DEBUG       |  380020
 QlikServer1  | proxy        | INFO        |  221839
 QlikServer2  | proxy        | INFO        |  103589
 QlikServer1  | repository   | INFO        |   98254
 QlikServer2  | scheduler    | INFO        |   54269
 QlikServer1  | repository   | ERROR       |   35874
 QlikServer1  | repository   | WARN        |   34568
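The query above is just a group-and-count over the two log tables. For readers less comfortable with SQL, here is an illustrative sketch of the same aggregation in Python, over a few made-up rows (the data is invented for the example):

```python
from collections import Counter

# Made-up sample of (process_host, process_name, entry_level) rows,
# standing in for public.log_entries UNION ALL public.archive_entries.
rows = [
    ("QlikServer1", "proxy", "DEBUG"),
    ("QlikServer1", "proxy", "DEBUG"),
    ("QlikServer1", "proxy", "INFO"),
    ("QlikServer2", "scheduler", "INFO"),
]

# GROUP BY host, name, level with COUNT(*), then ORDER BY volume DESC.
message_volume = Counter(rows)
for (host, service, level), count in message_volume.most_common():
    print(f"{host} | {service} | {level} | {count}")
```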

 

How can I better control the usage on disk?

  1. Logging Level

To make certain the QLogs database does not start consuming large amounts of disk space, you first need to control and limit the data you want logged.

To avoid unnecessary traffic, and to limit the amount of data logged in the database, we recommend not setting the logging level of any service to DEBUG for an extended period of time.

The DEBUG level is mostly used while troubleshooting an issue and should be set back to INFO directly afterwards.

  2. Purge and limit

It is possible to purge the logs after a certain number of days or to limit the size of the database.

Purge: (where x and y are thresholds in hours; the archive_age value must be lower than purge_age, otherwise the command will fail)

Microsoft Windows [Version 10.0.17134.619]
(c) 2018 Microsoft Corporation. All rights reserved.

C:\Windows\system32> cd C:\Program Files\Qlik\Sense\Logging

C:\Program Files\Qlik\Sense\Logging> Qlik.Logging.Service.exe update --archive_age x --purge_age y

Reference Knowledge article: Qlik Sense - How to control the size of the Qlik Sense Logging Database when using Centralized Loggi...
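Note that both thresholds are expressed in hours, which is easy to misread as days: the validate output earlier showed archive_age_in_hours: 720 and purge_age_in_hours: 1440, i.e. 30 and 60 days. A small sketch for converting a day-based retention policy into the values the command expects:

```python
def retention_hours(days: int) -> int:
    """Convert a retention period in days to the hours the CLI expects."""
    return days * 24

# Example policy: archive live logs after 30 days, purge archives after 60.
archive_age = retention_hours(30)
purge_age = retention_hours(60)

# archive_age must stay lower than purge_age, or the update command fails.
assert archive_age < purge_age
print(archive_age, purge_age)  # 720 1440
```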

Limit: (where n is the size in GB)

Microsoft Windows [Version 10.0.17134.619]
(c) 2018 Microsoft Corporation. All rights reserved.

C:\Windows\system32> cd C:\Program Files\Qlik\Sense\Logging

C:\Program Files\Qlik\Sense\Logging> Qlik.Logging.Service.exe update --maximum_db_size_in_gb n

Reference Knowledge article: Qlik Sense - How to control the size of the Qlik Sense Logging Database when using Centralized Loggi...

 

Help! What to do if it consumes all or almost all the available disk space?

First of all, you can take the steps mentioned above to control and limit the amount of information logged into the database.

But the process to purge or reduce the size of the database can take a long time, so if you need a quick (but short-term) solution, you can temporarily increase the size of the disk while the purging process is working.

If this is not possible, you can trigger a manual archive and then a purge of the logs in the QLogs database (where X is in hours).

Microsoft Windows [Version 10.0.17134.619]
(c) 2018 Microsoft Corporation. All rights reserved.

C:\Windows\system32> cd C:\Program Files\Qlik\Sense\Logging

C:\Program Files\Qlik\Sense\Logging> Qlik.Logging.Service.exe archive --cutoff_in_hours X

C:\Program Files\Qlik\Sense\Logging> Qlik.Logging.Service.exe purge --cutoff_in_hours X

Reference: Qlik logging service database urgently needs to be reduced in size, available on help.qlik.com.

If presented with a message stating the system is too busy, retry the command.

If the second try generates the same message, try again at a less busy time on the server.
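If you script these commands, the retry advice above can be automated. The sketch below is a generic retry helper; wiring it to Qlik.Logging.Service.exe via subprocess, and the exact wording of the "too busy" message, are assumptions you would adapt to your environment:

```python
import time

def run_with_retry(command, attempts=3, delay_seconds=300):
    """Call `command` (a callable returning True on success, False when
    the service reports it is too busy) up to `attempts` times, sleeping
    `delay_seconds` between tries. Returns True on the first success.

    In practice `command` could wrap subprocess.run() around
    'Qlik.Logging.Service.exe purge --cutoff_in_hours X' and return
    False when the output contains a busy message (assumed wording).
    """
    for attempt in range(1, attempts + 1):
        if command():
            return True
        if attempt < attempts:
            time.sleep(delay_seconds)
    return False

# Demo with a stub that is "too busy" twice, then succeeds.
responses = iter([False, False, True])
print(run_with_retry(lambda: next(responses), attempts=3, delay_seconds=0))
```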

 

Thank you for reading my blog post and I really hope it helps you.

I’ll be watching for your questions, feedback or suggestions so please comment and share!

 

18 Comments
antoinelaviron
Partner - Contributor III

Hi Bastien

Thank you for your post on this subject.

I have a suggestion regarding the archive/purge functions: it would be nice if the archive/purge parameters also applied to the backed-up QLogs tables in some way. The problem described below could have been avoided if that were the case.

2 months ago, on a Friday evening, I had a server run out of disk space. It was a Sept '17 QS server which had been regularly upgraded up to the Nov '18 release.

I used the archive and purge commands and set the size limit of the db, but the db size was not reduced and I was still getting disk space alerts...

I then installed pgAdmin, connected to the QLogs database, and discovered that alongside the 3 expected tables (log_entries, archive_entries and settings), there were old versions of those tables, from old versions of the product. And those tables were taking a lot of space! (see attached QLogs_size.png)

 

Looking at the code of the purge_archive_entries and archive_log_entries functions, it appeared that they only affect the archive_entries and log_entries tables, not the old ones.

Those old tables were not used by any monitoring application and did not have any dependencies, so I ended up deleting them, and my disk space problem was solved.

I found it less extreme than dropping the db, as written here: https://support.qlik.com/articles/000053265

"Should this method not be possible, the database could be entirely dropped. This will result in loss of all logging data, but as this database is for centralized logging only, it will not affect Qlik Sense itself. We do not recommend this unless the supported method is unsuccessful and the purge is critical."

 

Thank you for reading

antoine

Bastien_Laugiero

Hello,

Thank you for this great comment! 

Indeed, there was a defect in centralized logging causing the migration to fail, and therefore the data was not moved from the old tables to the new ones.

The defect ID is QLIK-91418 and is resolved in Qlik Sense February 2019.

But if these tables (archive_entries_x_x_x, log_entries_x_x_x, etc.) were already present prior to Qlik Sense February 2019, you can use the following script to do the migration manually (basically moving the data from the old tables to the new ones, then removing the old tables).

That way the purge process will be able to pick them up.

Note: you will need to change the table names in the script to correspond to the ones in your environment.

INSERT INTO public.log_entries(entry_timestamp, entry_level, thread, logger, process_host, process_name, process_id, payload)
	SELECT
		e.entry_timestamp
		,e.entry_level
		,e.thread
		,e.logger
		,e.process_host
		,e.process_name
		,e.process_id
		,e.payload
	FROM public.log_entries_0_3 e;
DROP TABLE public.log_entries_0_3;
INSERT INTO public.log_entries(entry_timestamp, entry_level, thread, logger, process_host, process_name, process_id, payload)
	SELECT
		e.entry_timestamp
		,e.entry_level
		,e.thread
		,e.logger
		,e.process_host
		,e.process_name
		,e.process_id
		,e.payload
	FROM public.archive_entries_0_3 e;
DROP TABLE public.archive_entries_0_3;
DROP TABLE public.settings_0_3;

Hope this helps!

Kind regards, 

Bastien Laugiero

Huiying
Partner - Creator III
Thank you Bastien for this article. It is very useful! What is the disadvantage of not using centralized logging? And does it exist in the cloud solution? Best regards, Susan
Andrew_Delaney
Support

If you do not use the centralised logging service:

  • Configuring the monitoring apps to work in a multinode environment is a little bit more complicated and requires sharing an extra folder on each machine
  • The default monitoring apps will check if the database is present, resulting in postgresql login error events every time they reload

Qlik Sense on Kubernetes uses a completely different logging system. It is possible to set up a centralised view of these logs, and instructions are provided on the help site.

Huiying
Partner - Creator III
Thank you Andrew! Can you help with the error below? I got it when trying the code listed in the article to see which node or service is logging the highest number of entries. ERROR: relation "public.log_entries" does not exist LINE 6: FROM public.log_entries ^ Best regards, Susan
Huiying
Partner - Creator III
By the way, in our current single-node June 2019 QS server, we don't use centralized logging, and I do see the QLogs error when reloading the monitoring apps. But the reload is still successful. Does that mean we can ignore the error?
rzenere_avvale
Partner - Specialist II

Hi @Huiying,

the mentioned check is performed every time the monitoring apps are reloaded.
If centralized logging is not found, the error is shown but the reload proceeds, loading only the 'regular' log files.
To avoid the check, you can change the variable db_v_file_override, which is set at the beginning of most monitoring apps, from 0 to 1 so that only the 'regular' logs are read (there is a legend right above where the variable is set, explaining the values).

Riccardo

Huiying
Partner - Creator III
Hi Riccardo That helps! thank you! 🙂 Best regards, Susan
royceduya
Contributor III

What if the QLogs tables are not being updated and the oldest data is only 2 months old?

I am looking at view_session_engine.

jpjust
Specialist

Thank you Bastien for the article, it is really helpful.

I have the sizes below. I am not sure if this is considered safe, or whether I need to do any cleanup?

Postgres.jpg

1,452 Views