mohamed_t
Partner - Contributor II

Qlik Sense Desktop - Reload suddenly very slow

Hi everyone,

 

I did a Windows update yesterday on my laptop. Since this update, reloads in Qlik Sense Desktop are suddenly very slow.

Before the update, the reload time was between 1 min 15 s and 2 min. Now it is around 5 minutes.

I am using Windows 11 and have 16 GB of RAM.

My client is Qlik Sense Desktop May 2022 Patch 5.

Has anyone else encountered a similar issue?

Thanks!



Accepted Solutions
lmcsedyz
Partner - Contributor III

@mohamed_t I solved it!

What has changed:

Before the upgrade, Qlik loaded the QVD and performed all operations in the LOAD statement multi-threaded where possible.

After the upgrade, all operations are processed during the load by a single core: the load from the QVD and the operations run on the same CPU thread, and they are evaluated during the load of each row.

Operations on resident tables, however, are still fine and multi-threaded.

Details:

I ran tests, and the only change is how Qlik processes the load. Before the upgrade, simple operations on fields or in the WHERE clause were processed multi-threaded and fast, but with much higher memory consumption.

Now almost every extra instruction makes the QVD load extremely slow.

The only working cure: always do a "vanilla" load of the QVD - just the desired columns from the QVD, without any transformation, duplication, or creation of new fields. Nothing.

The only condition allowed in the WHERE clause is Exists().

Then do your load from this temporary vanilla table; now all your statements and WHERE clauses run multi-threaded and fast, and afterwards you drop the vanilla temp table. Hint: an Exists() condition in the vanilla load is very fast, but in a resident load it is slow (an inner join is much faster)! So if you have bigger source QVDs (GBs), do some kind of preload to define the key for the Exists() condition. Then do the vanilla load with Exists(), do all the other work in a resident load, and drop the vanilla table when it is no longer needed.
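A minimal sketch of this workaround in load script (all table, field, and lib:// names here are illustrative, not from the actual app):

// 1) Preload: define the keys that Exists() will check against.
Keys:
LOAD DISTINCT OrderID
FROM [lib://Data/RelevantOrders.qvd] (qvd);

// 2) "Vanilla" load: plain fields only, single-parameter Exists() only,
//    so the QVD load stays optimized.
TmpFacts:
LOAD OrderID,
     OrderDate,
     Amount
FROM [lib://Data/Facts.qvd] (qvd)
WHERE Exists(OrderID);

// 3) All transformations happen in a resident load (multi-threaded).
Facts:
LOAD OrderID,
     OrderDate,
     Year(OrderDate)        AS OrderYear,
     Amount,
     If(Amount > 0, 1, 0)   AS IsPositive
RESIDENT TmpFacts;

// 4) Drop the temporary tables once they are no longer needed.
DROP TABLES TmpFacts, Keys;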

The old processing was fast but much more RAM-hungry. Maybe something that detects the memory size is wrong after the upgrade (Windows or Qlik or both), and Qlik changes how it works to make the load less memory-intensive.


17 Replies
lmcsedyz
Partner - Contributor III

I have the same problem after the update, but on Qlik Sense server: apps that reloaded in 15 minutes on older versions now take hours.

The issue is loading bigger QVD files with, for example, five conditions in the WHERE clause (all of them simple, no killers like Exists()).

mohamed_t
Partner - Contributor II
Author

Hi @lmcsedyz,

 

Thanks for replying.

Do you have any ideas or solutions for it?

lmcsedyz
Partner - Contributor III

Hi @mohamed_t 

I tried removing all WHERE conditions to do a plain load and perform all operations afterwards on a resident table - did not help.

I split the QVDs into more, smaller files - did not help.

I also tried the same with txt files - did not help.

I tried combinations - did not help. I am on 16 vCores / 32 GB RAM. Before the update everything was OK.

I also tried upgrading to the new IR release May 2023 - did not help.

So I am still stuck, and this week it is my priority.

mohamed_t
Partner - Contributor II
Author

Hi @lmcsedyz 

 

OK, can you keep me up to date please?

Did your issue appear after a Qlik update? (For me, I did not update Qlik.)

lmcsedyz
Partner - Contributor III

Hi @mohamed_t, I will let you know.

It could be the Windows updates, because before the Qlik update it was necessary to do Windows updates as well.

What is on the to-do list: try changing the server/engine configuration and the data source - for example, copying the data to Amazon S3 storage. Will a different destination help?

marcus_sommer

In regard to WHERE clauses, quite the opposite is true: Exists() is very fast compared with other types of conditions, especially if you can use a single Exists() with just a single parameter, because such a load runs optimized (if no further transformations are applied).
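A minimal illustration of that distinction (field and path names are hypothetical):

// Optimized QVD load: fields passed through unchanged, and the WHERE
// is nothing but a single-parameter Exists(). The OrderID values to
// check against are assumed to have been loaded earlier (e.g. a key table).
Facts:
LOAD OrderID,
     Amount
FROM [lib://Data/Facts.qvd] (qvd)
WHERE Exists(OrderID);

// By contrast, any transformation (e.g. Amount * 1.2 AS GrossAmount),
// a two-parameter Exists(OrderID, SomeExpr), or any other WHERE
// condition would drop this back to the standard, slower load path.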

marcus_sommer

Before wondering what might have happened to the performance, make sure that the data sets at all stages are the same as before. Even very small changes to the loads, the order of load tasks, and/or the data interpretation could have a big impact - if, for example, a filter or join is not working as expected and results in a significantly bigger data set, it may now exceed the available RAM and lead to swapping into virtual memory.

To find the cause, it would also be quite helpful to compare the document and task logs from before and now, as well as to look at the application/event/performance logs from Qlik and the OS.

lmcsedyz
Partner - Contributor III

Not true, @marcus_sommer - in our case, if Exists() has to check against a bigger amount of data (hundreds of thousands of already-loaded keys), it becomes extremely slow. And if you have a suitable amount of memory (so you have a good server and are not in the cloud), it is much faster to do an inner join after this kind of load. A compromise is a simple WHERE on the load (dates greater than x, RangeSum() of a few fields greater than zero, some flag fields not null, etc.) plus an inner join after the load.

It depends on which operations are done by the engine during the load and which are applied after it (the rows have been loaded regardless of what is in the WHERE, because it is evaluated after each QVD row is read).

The solution in this case is separate QVDs, for example one per month: DATA_2023_05.QVD with Exists() and whatever WHERE you want, because it will not be memory-intensive and will be much faster - memory is released after each QVD is loaded. So if you load from [lib:.... DATA_INVOICES_*.qvd] (qvd), you are happy and fast. A big QVD is death for Qlik Sense... you will be faster with several medium-sized QVDs than with a single one.
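A minimal sketch of that monthly-slice pattern in load script (the lib:// connection and field names are illustrative):

// Preload: define the invoice keys that Exists() will check against.
Keys:
LOAD DISTINCT InvoiceID
FROM [lib://Data/OpenInvoices.qvd] (qvd);

// Wildcard load over the monthly slices (DATA_INVOICES_2023_01.qvd, ...).
// Per the behaviour described above, each slice keeps the single load
// small and its working memory is freed before the next file starts.
Invoices:
LOAD *
FROM [lib://Data/DATA_INVOICES_*.qvd] (qvd)
WHERE Exists(InvoiceID);

DROP TABLE Keys;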

marcus_sommer

My experience in this matter is different. And we need to be careful not to mix different scenarios here, such as loading from one big file versus several sliced ones, and the way these data are filtered and/or transformed.

Loading from one big file is rather seldom the most suitable approach, because not all downstream reports will need all of that data, and any (multi-staged) incremental logic must consider all the old data. Logic that applies a rolling n days/months window is common, so slicing the data by YYYYMM and/or Channel/Company/Country or similar is often more useful. Of course this adds some extra overhead, because each single load must be initialized, which breaks up the load process. With just a few dozen files it's not very significant, but with thousands of files it would have an impact.

Further, that slicing can be done smartly by encoding the relevant information in an appropriate folder/file structure, so the data is filtered at the file level rather than the data level - only a single Exists(KEY) is possible if the load is to stay optimized. This means driving the loads with DirList() and FileList() loops, and within them controlling the wanted data areas with variables and/or if-conditions, as in the sketch below.
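A sketch of such a DirList()/FileList() loop, assuming a hypothetical folder layout like lib://Data/2023_05/Sales_DE.qvd (connection name and the CustomerKey field are assumptions for illustration):

// Keys loaded first so the Exists() has values to check against.
Keys:
LOAD DISTINCT CustomerKey
FROM [lib://Data/ActiveCustomers.qvd] (qvd);

// Outer loop over period folders, inner loop over files - the data is
// filtered at the file level before a single row is read.
FOR EACH vDir IN DirList('lib://Data/2023*')

    FOR EACH vFile IN FileList('$(vDir)/Sales_*.qvd')

        Sales:
        LOAD *
        FROM [$(vFile)] (qvd)
        WHERE Exists(CustomerKey);  // single Exists() keeps each load optimized

    NEXT vFile

NEXT vDir

DROP TABLE Keys;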

In regard to the WHERE clauses, which kind of filtering is the fastest will surely depend on the data set and the load order. And of course, the fewer distinct keys that need to be checked within an Exists(), the faster the load - which means that the load order also has an impact.

Most important is to keep all bigger loads optimized. It may need some extra steps: creating appropriate intermediate QVDs, integrating appropriate keys into them (by cleaning/preparing and/or combining fields), adjusting the order of loads, adding some extra loads which extract and store the relevant keys, and various renaming/dropping statements for tables and fields. It may sound like a lot of work, and it is - when done afterwards. But if all load stages are appropriately designed beforehand, it requires little extra effort.
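For instance, a combined key can be embedded when the intermediate QVD is built, so later stages can filter with a single Exists() and stay optimized (table/field names and the '|' separator are illustrative):

// Stage 1: while building the intermediate QVD, add a combined key field.
// This load is not optimized, but it runs only once per stage.
Transactions:
LOAD *,
     CustomerID & '|' & TransYearMonth AS %TransKey
FROM [lib://Stage1/Transactions_raw.qvd] (qvd);
STORE Transactions INTO [lib://Stage2/Transactions.qvd] (qvd);
DROP TABLE Transactions;

// Stage 2: a small extra load defines the wanted keys ...
WantedKeys:
LOAD DISTINCT CustomerID & '|' & TransYearMonth AS %TransKey
FROM [lib://Stage1/ActiveCustomers.qvd] (qvd);

// ... so the big load stays optimized via the single Exists().
Facts:
LOAD *
FROM [lib://Stage2/Transactions.qvd] (qvd)
WHERE Exists(%TransKey);

DROP TABLE WantedKeys;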

From this point of view, I would be very surprised if WHERE clauses that compare dates/keys with >= / <= or filters via inner joins were more performant, because those actions are performed at row level, while an Exists(Key) is performed at column level.