Hi,
I have a question, not a problem. I use data from a data warehouse, and in some fact tables we also store "dimension data" as fields.
For example, I have 1 million rows in my fact table and every row carries an errorcode. I now want to use errorcode as the dimension in my pivot and calculate a measure from the fact table. Now the question: is it better to break errorcode out into its own dimension table in the model, or does it not matter when I calculate this in my object (e.g. a pivot table)?
Just to add some information: let's say I only have 5 different errorcodes and want to sum over these 5 codes.
If I take the dimension straight from the fact table, my dimension field sits in 1 million rows; if I instead rebuild the model so errorcode lives in a dimension table, that table has only 5 rows.
Not sure I understood you right. Is it something like (5 rows x 1M columns) vs. (1M rows x 5 columns)? If so, it would be better to convert the data in the script, probably with a Crosstable statement. That would not only improve front-end performance, it would also simplify the expressions needed to get your desired result.
Doing the transformation at the back end increases the reload time, but that is a one-time cost per reload, whereas leaving it to the front end means memory and performance take a hit on every selection. Hence it is probably better to move the transformation into the script.
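Just to illustrate what I mean by Crosstable, a minimal sketch, assuming the source really were a cross table with one column per errorcode (the table and column names here are made up for the example, not from your post):

fact:
CROSSTABLE (errorcode, Measure, 1)
LOAD
    Ordernumber,    // qualifier field kept on every output row
    Error1,         // each errorcode column becomes a row with
    Error2,         // errorcode = column name, Measure = cell value
    Error3,
    Error4,
    Error5
FROM factsource;

That would turn the wide layout into the tall errorcode/Measure layout that is easy to aggregate in a pivot.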
OK, I think you are over-analyzing my question; I will try to explain more.
This is my fact table (transactions):
fact:
LOAD
    Ordernumber,    // this could be a key to my dimtable
    Measure,
    dimvalue
FROM fact;
This fact table is 1 million rows, and I will use dimvalue as the dimension and sum(Measure) as the expression in my pivot.
ex2:
fact:
LOAD
    Ordernumber as KeyToDim,
    Measure
FROM fact;

dim:
LOAD
    DimKey as KeyToDim,
    dimvalue
FROM dimtable;
The fact table is still 1 million rows and the dimension table only 5 rows. If I now use dimvalue as the dimension and sum(Measure) as the expression in my pivot, which of these is the most memory-friendly?
I don't think this will increase my reload time.
Ah, I guess performance-wise (at the front end) loading a single table (with all 1 million dimvalue entries) would be better, because that requires no hopping between tables at all. And if no hopping is required, no combinations of data across tables have to be built, so less memory would probably be needed.
OK, by hopping you mean that when the object is calculated, QlikView won't need to "look" into another table before presenting the aggregated result? This is also front-end work, because the pivot is recalculated on every selection.
Yes
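For reference, a minimal sketch of that single-table approach, reusing the table and field names from your examples and assuming Ordernumber is the key that matches DimKey in dimtable: the dimension value is mapped into the fact table at load time, so the pivot never has to hop to another table.

dimmap:
MAPPING LOAD
    DimKey,         // key as stored in the 5-row dimension table
    dimvalue        // value to bring into the fact table
FROM dimtable;

fact:
LOAD
    Ordernumber,
    Measure,
    // assumed: Ordernumber is the lookup key; 'Unknown' is the fallback for unmatched keys
    ApplyMap('dimmap', Ordernumber, 'Unknown') as dimvalue
FROM fact;

You would then use dimvalue as the pivot dimension and sum(Measure) as the expression, exactly as before, but against a single table.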