Hi All
a. I have made a QVD document which loads a large csv file of 1,352,205 records.
b. I then load a much smaller table - this is the "lookup table" for the first file. I use "join" and the two tables become one large single table.
When I make a selection in QlikView it is quite slow and it shows the "loading tool".
I could load the 2 files separately and just let qlikview create a synthetic join.
My question is: which would be the faster load? I think the join is marginally faster. Any advantages either way?
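For illustration, the two variants in load-script form might look like this (file names and formats are made up, not taken from the actual app):

```
// Variant 1: explicit join - the lookup fields are merged into
// the big table, giving one wide table in the data model.
Pathology:
LOAD * FROM Pathology.qvd (qvd);

LEFT JOIN (Pathology)
LOAD * FROM BI_misc.csv (txt, utf8, embedded labels, delimiter is ',');

// Variant 2: load both tables separately and let QlikView
// associate them on their common field names. If more than one
// field name is shared, QlikView builds a synthetic key table.
```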
Thank you
Jo
I assume that your join creates a lot of records because of an n:m relation between the tables. It would be helpful if you could provide the script used, a few sample data and a screenshot from the table viewer.
- Marcus
Hi Jo,
if you do a separate reload and get synthetic keys, then you will have problems doing a JOIN.
If you want to join, use all "common" fields from the synthetic table and create a composite key with all these fields,
e.g. Field1 & Field2 & Field3 as %Key - do this in both tables.
But one thing: do you need to JOIN at all? A normal association should work and save a lot of memory for you.
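The composite-key idea above could be sketched like this (table and field names are placeholders):

```
// Build the same composite key in both tables so they associate
// on a single field instead of a synthetic key.
Pathology:
LOAD Field1 & '|' & Field2 & '|' & Field3 as %Key,
     Field1,
     Field2,
     Field3,
     Result
FROM Pathology.qvd (qvd);

BI_misc:
LOAD Field1 & '|' & Field2 & '|' & Field3 as %Key,
     Description
FROM BI_misc.csv (txt);

// Note: keep Field1-Field3 in only one of the two tables,
// otherwise they still link on those fields as well and the
// synthetic key comes back.
```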
Thank you Staffan and Marcus
I can either use a join or not explicitly use the join - with the possible results being:
Are you suggesting, Staffan, that I should make a composite key TestSet & TestName in both the BI_misc table and the Pathology table, and that this would make the load faster?
It is not just the load - that is fairly fast; it is moving between sheets that causes the loading tool to appear.
Jo
If you are trying to do a lookup table, why can't you use a Mapping Load and ApplyMap()?
And create a key field in your calendar and pathology tables like:
Num(Floor(dateReceived)) As [%Key Date],
dateReceived As PathologyDate
-> Is there any specific relation between the Pathology and BI_misc tables - are they joined on any specific IDs? Because your synthetic table field names show TestSet and TestName; if they are not relevant, try renaming the fields to remove the synthetic keys.
As Marcus mentioned, try to share the script that builds your data model so that we can give more suggestions.
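A mapping-load version of the lookup could look like the sketch below. TestSet and TestName come from the synthetic key mentioned above; the value field TestDescription is a made-up example:

```
// Turn the small BI_misc table into a mapping table.
// MAPPING LOAD needs exactly two columns: key, then value.
TestMap:
MAPPING LOAD
    TestSet & '|' & TestName as MapKey,
    TestDescription
FROM BI_misc.csv (txt);

// Look the value up while loading the big table: no join, no
// synthetic key, and BI_misc never appears in the data model.
Pathology:
LOAD *,
     ApplyMap('TestMap', TestSet & '|' & TestName, 'n/a') as TestDescription
FROM Pathology.qvd (qvd);
```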
Thanks,
V.
From what you describe, it's likely not the data model that is causing your performance problems, but rather the chart expression(s).
Can you post:
1) A tablebox that contains the fields $Table and $Rows.
2) A screenshot of your chart, plus the expressions used in the chart.
-Rob
Hi Rob - is this what you meant?
Where the loading tool comes up is mainly in an ordinary table box which contains all fields.
I suppose what this means is that the table is doing a lookup operation 4 times for each record in order to add the fields from the BI_misc table. That means going through a lot of records - hence the loading tool.
Jo
result when using "join"
or result not using join - just the synthetic table
I was looking for the row count for each table.
-Rob
This one? For both scenarios?
TableName | Loose | #Records | #Fields | #Keys |
Pathology | 0 | 1352205 | 14 | 2 |
BI_misc | 0 | 1680 | 5 | 1 |
MasterCalendar | 0 | 732 | 12 | 1 |
Hi Jo,
The "loading tool" could appear if you do your development on your local computer (laptop) and the memory is too low.
If you have the problem while on a QlikView Server, then it's probably, as rwunderlich mentioned, caused by your charts.