mmarchese
Creator II

Best practices for performance?

I am wondering which scenario has the better performance in each of the pairs below.  The metric I am most interested in is visualization rendering time, both initially and in response to user selections.  I'm less concerned with memory consumption or load-script duration.

 

Measure = Sum(FieldA) * 1000  // 1. Aggregate first
Measure = Sum(FieldA * 1000)  // 2. Aggregate last

Measure = Sum(FieldA + FieldB)       // 1. Aggregate last
Measure = Sum(FieldA) + Sum(FieldB)  // 2. Aggregate first

NewField = FieldA * 1000  // 1. Make another field while loading data
Variable = FieldA * 1000  // 2. Make a variable instead using "Set var = ...;"

NewField = FieldA + FieldB  // 1. Make another field while loading data
Variable = FieldA + FieldB  // 2. Make a variable instead using "Set var = ...;"

 

In each case, I put the option that I think should result in more responsive visualizations first, but I don't have the empirical data or knowledge of what's under the hood to back it up.  I don't know 1) how the data is physically arranged in memory, 2) whether Qlik executes formulas as-is or rearranges them for efficiency the way SQL query optimizers do, or 3) whether things like caching or preloading come into play.

jonathandienst
Partner - Champion III

Only the first pair produces the same result. I think it would be very hard to detect any difference in render time between these two, even for large datasets, although I would expect (1) to be marginally faster.

The second pair will produce different results if there are any null values in one field and non-null values in the other, since adding anything to a null produces a null.
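
For example, a minimal sketch with made-up inline data:

T:
LOAD * INLINE [
x, y
1, 2
3,
];

// Sum(x) + Sum(y)  ->  4 + 2 = 6
// Sum(x + y)       ->  3   (3 + the missing y yields null, which Sum ignores)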

I don't understand how you intend to use the variables in the last two examples. A field (a set of values) and a variable (a scalar) are certainly not equivalent.

Logic will get you from a to b. Imagination will take you everywhere. - A Einstein
mmarchese
Creator II
Author

What is the basis for your conclusion that they'd have similar performance?  I would think that, given many rows of data, the choice is between [adding millions of times plus multiplying once] and [adding millions of times plus multiplying millions of times].  Shouldn't the latter be at least twice as slow, since it's twice as many operations?  Or is there something I am not considering?

Assume all pairs produce the same result since the question would be pointless otherwise (so disregard situations with nulls).

Sorry if my terminology was not clear.  When I say a field, I mean a column that comes right from a table or a calculated column that is derived directly from tables, rather than requiring aggregation.  When I say "a variable made with Set," I'm talking about a text macro, not a scalar.  The two options I was getting at are 1) adding two columns together at load time to create a third column and then referencing that new column in your dimensions and measures, or 2) creating a text-macro variable that consists of a formula adding those two columns together and then referencing that variable in your dimensions and measures instead.  Intuitively, option 1 strikes me as much faster (since the work has already been done) at the cost of more memory.  Here's a more detailed example:

Option 1:

 

// load script
LOAD
  FieldA,
  FieldB,
  FieldA + FieldB as FieldC
...

// worksheet
Dimension = FieldC
Measure = Sum(FieldC)

 

Option 2:

 

// load script
LOAD
  FieldA,
  FieldB
...

Set VariableC = FieldA + FieldB;

// worksheet
Dimension = $(VariableC)
Measure = Sum($(VariableC))

 

 

rwunderlich
Partner Ambassador/MVP
(Accepted Solution)

Let's deal with the 3rd and 4th examples first.  Precalculating in the script, when possible, will always yield faster response times in the front end. Yes, the new field will consume additional memory in the data model, but it's likely to be trivial.  So creating a new field in the script is almost always the better performance choice.
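
For example, a minimal sketch of the precalculated-field approach (source and field names are made up):

// load script: do the row-level math once, at load time
LOAD
  FieldA,
  FieldA * 1000 as FieldA_Scaled
FROM [lib://Data/facts.qvd] (qvd);

// worksheet: the measure now only has to aggregate
Measure = Sum(FieldA_Scaled)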

With regard to multiplying the total sum once versus multiplying each individual value: theoretically, fewer instructions are executed when doing a single multiplication on the result of Sum().

"Shouldn't the latter be at least twice as slow since it's twice as many operations?  Or is there something I am not considering?" There is a lot more going on (such as marshalling the data) in calculating a measure beyond the specific calc we see, so "twice as much" only applies to a portion of the overall calc time. I ran some timing tests on your examples with 7M rows and I agree with @jonathandienst that the differences are trivial -- at most ~5% better.

Importantly, as @jonathandienst pointed out, the different expressions may return different results.  We can't always ignore that.  For large numbers, "Sum(x * 1000)" and "Sum(x) * 1000" will return slightly different results due to rounding (numbers are stored as 64-bit doubles, so the order of operations changes which rounding errors accumulate). So what's the business intent? Sum up the value and express the result as a factor of 1000, or factor each value by 1000 and sum the total?

The Qlik engine is very sophisticated in its optimization. It's possible that "Sum(x) + Sum(y)" would calculate faster than "Sum(x + y)" because each Sum could be calculated in a separate CPU thread.  Qlik is also very good at distributing the rows of a single sum across multiple threads, so that form may be just as good.  I can't observe a significant difference between the two options.

Critically important, though, is that "Sum(x) + Sum(y)" and "Sum(x + y)" return different results in the front end if x and y are in different tables.  So you should always be driven first by getting a correct and consistent business result, with performance secondary.
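
A minimal sketch of that gotcha, with made-up inline data (as I understand it, Qlik evaluates a row-level expression that spans tables over the linked row combinations):

Facts:
LOAD * INLINE [
Key, x
A, 10
A, 20
B, 30
];

Costs:
LOAD * INLINE [
Key, y
A, 5
B, 7
];

// Sum(x) + Sum(y)  ->  60 + 12 = 72
// Sum(x + y)       ->  (10+5) + (20+5) + (30+7) = 77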

All that said, performance in Qlik is always a fascinating topic. My colleague @Oleg_Troyansky leads a three-hour Performance session at the Masters Summit for Qlik if you want to geek out on Qlik performance details.

-Rob
http://masterssummit.com
http://qlikviewcookbook.com
http://www.easyqlik.com

mmarchese
Creator II
Author

Thanks for the detailed reply, which was very helpful.  With that many rows, I was not expecting aggregating before multiplying to be only 5% faster than multiplying before aggregating - interesting.

I have some follow-up questions if you have time:


Critically important, though, is that "Sum(x) + Sum(y)" and "Sum(x + y)" return different results in the front end if x and y are in different tables.


My load script creates one big Qlik table from numerous database tables, where each Qlik row represents a sale.  It does this by importing the first database table, left joining each subsequent table to it, and then doing a resident load to replace any nulls with appropriate defaults, such as 0 for numeric columns and 'None' for text columns.  I believe this means that in my case, those two formulas should always be equivalent, no?
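
For context, a minimal sketch of the pattern I mean (table and field names are made up):

Sales:
LOAD SaleID, Amount
FROM [lib://DB/sales.qvd] (qvd);

LEFT JOIN (Sales)
LOAD SaleID, Freight
FROM [lib://DB/freight.qvd] (qvd);

// resident reload to replace nulls with defaults
Final:
NoConcatenate LOAD
  SaleID,
  Alt(Amount, 0) as Amount,
  Alt(Freight, 0) as Freight
  // for text fields: If(IsNull(TextField), 'None', TextField) as TextField
RESIDENT Sales;

DROP TABLE Sales;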


For large numbers, "Sum(x * 1000)" and "Sum(x) * 1000" will return slightly different results due to rounding. So what's the business intent? Sum up the value and express the result as a factor of 1000, or factor each value by 1000 and sum the total?

I have two table columns: [cost per thousand pieces] and [pieces].  To get the total combined cost for all selected sales, I have to do [cost per thousand pieces] * [pieces] / 1000 and aggregate via Sum().  Therefore, I must choose between these options:

measure = Sum([cost per thousand pieces] * [pieces]) / 1000
measure = Sum([cost per thousand pieces] * [pieces] / 1000)
measure = Sum([cost per thousand pieces] / 1000 * [pieces])

I don't really know which would have the least rounding error or which expresses the business intent most directly.  I suspect it doesn't really matter since I don't need high precision here, but do you have any comments?  Actually, at this point, I'm leaning toward creating a new field in the load script instead, since I don't see why everything except the aggregation can't be precalculated:

[new cost field] = [cost per thousand pieces] * [pieces] / 1000
measure = Sum([new cost field])

I also need to display [cost per thousand pieces] and [pieces] themselves, so I plan to retain those original fields.  And whichever method I pick will be duplicated a bunch of times, since I have to go through this with a dozen different cost columns.  I could also take a middle ground by doing some of the non-aggregating math in the load script and some of it on the fly, like this:

[new thousands of pieces field] = [pieces] / 1000
measure = Sum([cost per thousand pieces] * [new thousands of pieces field])

 

rwunderlich
Partner Ambassador/MVP

" I believe this means that in my case, those two formulas should always be equivalent, no?"  Yes, I would agree.

"Actually, at this point, I'm leaning toward making a new field in the load script instead since I don't see why everything can't be precalculated except for the aggregation"  I think this is the best approach for performance and consistency.  I would also consider rounding the new values to the same precision as your accounting system, usually .01.  That will reduce the overall number of values that Qlik has to store and yield consistent results with any slice of data.

-Rob

mmarchese
Creator II
Author

Ok, thanks!

For others who happen to read this thread, the reasoning behind Rob's tip to round currency to reduce the number of distinct values that Qlik has to store and thus increase performance is explained here:
https://community.qlik.com/t5/Qlik-Design-Blog/Symbol-Tables-and-Bit-Stuffed-Pointers/ba-p/1475369