I have a table with 100 million records, which is also the initial data load.
After retrieving the first 2 million records, each additional batch of 10,000 records already takes 2 seconds, and this keeps increasing steadily.
At this pace, importing all the records will take forever.
Should I use another tool to populate the QVD?
Or can I persist the SQL data directly to the QVD file? It seems that all the data is currently loaded into memory first.
The script is as follows:
LIB CONNECT TO 'SQL_DATAMART';
[qvd-test]:
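For reference, a minimal sketch of such a load-and-store script, with placeholder table and path names since the originals are not shown, would be:

LIB CONNECT TO 'SQL_DATAMART';

[qvd-test]:
SQL SELECT * FROM dbo.BigTable;    // placeholder table name

// STORE writes the in-memory table to a QVD file; DROP then frees the RAM
STORE [qvd-test] INTO [lib://DataFiles/qvd-test.qvd] (qvd);
DROP TABLE [qvd-test];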
Why don't you use an incremental load?
Because the table does not have a column which can be used to identify the load point
(the table is completely refreshed every day).
That the table is completely refreshed every day doesn't necessarily mean that no incremental approach is possible, especially if essential prerequisites, like adding a creation/change timestamp to the records, can be implemented within the database. Another option might be not to load the entire table in one run but to slice it within appropriate loops against period or category information, or whatever else is available (a sketch follows below).
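For illustration, a sliced load over a period column might look like the following; the column PeriodMonth, the table name, and the month list are assumptions, since the real schema is unknown:

FOR EACH vMonth IN '2024-01', '2024-02', '2024-03'   // placeholder period list

  [qvd-test]:                                        // identical field sets auto-concatenate
  SQL SELECT * FROM dbo.BigTable                     // placeholder table name
  WHERE PeriodMonth = '$(vMonth)';

NEXT vMonth

STORE [qvd-test] INTO [lib://DataFiles/qvd-test.qvd] (qvd);

Each pass keeps the individual result sets small, which also makes it easier to see at which slice the slowdown starts.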
Also worth considering is not loading this table at all (it is probably a view transforming n other tables) but instead going more directly to the sources.
Beside this, the observed slowdown should not be caused by Qlik but rather by the database, the driver, or the network restricting the performance, more or less intentionally, through settings and/or caching logic. You may also take a look at the RAM consumption of the Qlik server, because if the system is forced to swap data to virtual memory, the performance will usually drop significantly, though just 2 million records shouldn't have such an impact unless the system is already near the maximum workload.