noviceneil
Partner - Contributor III

Need help to optimize Pivot Tables for Qlik Sense

Hello Experts,

I am looking for some help with the scenario below.

 

I have two existing QVDs for two separate dates, each almost 500 MB and each containing 50+ columns. For my visualization I need to take the values for the two dates from the two different QVDs and put them side by side for comparison. I need to display around 20 of those columns for similar products, which sit in a hierarchical structure spread across several columns, and show all of this in a pivot table.

The problem is that whenever I expand/collapse or apply a filter on any of the columns, it takes a significant amount of time for the pivot table to refresh.

I have tried loading only the 20 of the 50 columns that I actually need in my visualization, but it does not help much.
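
For reference, my cut-down load currently looks roughly like this (table, field and file names below are only placeholders for the real ones):

// explicit field list - only the ~20 fields needed for the pivot
Date1:
LOAD
    ProductGroup,
    ProductSubGroup,
    ProductID,
    Price,
    Quantity
    // ...remaining needed fields
FROM [lib://Data/Extract_Date1.qvd] (qvd);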

Could you please provide any suggestions to optimize this performance?

 


3 Replies
marcus_sommer

With just this information it's difficult to say whether there is any significant potential to optimize the pivot. But IMO the approach itself isn't really suitable: comparing two dozen columns against each other over what is probably a lot of rows within a pivot is not sensible.

Therefore I suggest doing the essential work within the script. Your mention of 50+ columns indicates that the data structure might be a crosstable; if so, I recommend considering transforming it into a stream data structure.
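
Just as a rough sketch of the crosstable transformation - table, field and file names are only placeholders and need to be adapted to your data:

// CrossTable turns the wide structure (one column per measure) into a stream
// structure with one Measure/Value pair per row; the first 3 listed fields
// remain as qualifiers. Note: this prefix makes the QVD load unoptimized, so
// it should be done once in the script, not in the front-end.
Facts:
CrossTable(Measure, Value, 3)
LOAD
    ProductID,
    ProductGroup,
    '2024-01-01' as SnapshotDate,   // tag the rows with the date of this QVD
    Price,
    Quantity,
    Margin                          // ...the other measure columns you need
FROM [lib://Data/Snapshot_Date1.qvd] (qvd);

// identical field names, so the second date auto-concatenates onto Facts
CrossTable(Measure, Value, 3)
LOAD
    ProductID,
    ProductGroup,
    '2024-02-01' as SnapshotDate,
    Price,
    Quantity,
    Margin
FROM [lib://Data/Snapshot_Date2.qvd] (qvd);

In the pivot you would then only need SnapshotDate and Measure as dimensions and a single Sum(Value) expression, instead of 20+ separate columns.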

- Marcus

noviceneil
Partner - Contributor III
Author

Actually, the situation is that these are pre-existing QVDs created by a different system, and we have to reuse them. Each time I have to take two such QVDs for two different dates (with similar columns), where each QVD can range between 3 and 7 GB, and plot a comparison between the two dates for the same items side by side.

 

 

marcus_sommer

I can only refer to my answer above: do the essential work within the script. Comparing millions of records across dozens of columns can't be done manually within an acceptable time and error rate (of viewing), regardless of whether it's done directly in Qlik or exported. I assume that you export into Excel in chunks and then run various checks or lookups against them. I'm quite sure it will be easier and faster in Qlik than in any other tool.
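
As a rough illustration of what "within the script" could look like - again, field and file names are only placeholders:

// load the key plus the needed measures from date 1 ...
Compare:
LOAD
    ProductID,
    Price    as Price_Date1,
    Quantity as Quantity_Date1
FROM [lib://Data/Snapshot_Date1.qvd] (qvd);

// ...then join the same measures from date 2 onto the same key
Left Join(Compare)
LOAD
    ProductID,
    Price    as Price_Date2,
    Quantity as Quantity_Date2
FROM [lib://Data/Snapshot_Date2.qvd] (qvd);

// flag the differences once in the script, so the front-end only has to
// display ready-made fields instead of comparing values per cell
Result:
NoConcatenate
LOAD
    *,
    If(Price_Date1 <> Price_Date2
       or Quantity_Date1 <> Quantity_Date2, 1, 0) as HasDifference
Resident Compare;

Drop Table Compare;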

- Marcus