MrBruno
Contributor II

SQL 2 QS : data loading incrementally slows down

I have a table with 100,000,000 records, which is also the initial data load.
After retrieving only 2,000,000 records, each additional batch of 10,000 already takes 2 seconds, and this keeps increasing steadily.
At this pace, importing all the records will take forever.

Should I use another tool to populate the QVD?
Or can I persist the SQL data directly to the QVD file? It seems that all data is currently loaded into memory first.

The script is as follows:
LIB CONNECT TO 'SQL_DATAMART';

[qvd-test]:
NOCONCATENATE
LOAD *;
SQL SELECT *
FROM dbo.FactTable;

STORE [qvd-test] INTO [SQL data.qvd] (qvd);

DROP TABLE [qvd-test];
3 Replies
JHuis
Creator III

Why don't you use an incremental load?
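
For reference, the classic pattern looks roughly like this - a minimal sketch assuming a hypothetical ModifiedDate timestamp column and a PrimaryKey field, neither of which is confirmed to exist in your table:

LIB CONNECT TO 'SQL_DATAMART';

// Read the last load point from the existing QVD
[MaxDate]:
LOAD Max(ModifiedDate) AS LastLoad
FROM [SQL data.qvd] (qvd);

LET vLastLoad = Peek('LastLoad', 0, 'MaxDate');
DROP TABLE [MaxDate];

// Fetch only new or changed rows from the database
// (the literal format of $(vLastLoad) must match what the database expects)
[qvd-test]:
NOCONCATENATE
LOAD *;
SQL SELECT *
FROM dbo.FactTable
WHERE ModifiedDate > '$(vLastLoad)';

// Append the historical rows from the QVD, skipping keys that were just reloaded
CONCATENATE ([qvd-test])
LOAD *
FROM [SQL data.qvd] (qvd)
WHERE NOT Exists(PrimaryKey);

STORE [qvd-test] INTO [SQL data.qvd] (qvd);
DROP TABLE [qvd-test];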

MrBruno
Contributor II
Author

Because the table does not have a column that can be used to identify the load point
(the table is completely refreshed every day).

marcus_sommer

That the table is completely refreshed every day doesn't necessarily mean that no incremental approaches are possible - especially if some essential change, like adding a creation/change timestamp to the records, is possible within the database. Another option might be not to load the entire table in one run but to slice it within appropriate loops against period or category information, or whatever else is possible - see the sketch below.
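
As a rough illustration of the slicing idea - assuming a hypothetical PeriodKey column (for example a month number) exists in dbo.FactTable:

LIB CONNECT TO 'SQL_DATAMART';

FOR vMonth = 1 TO 12

  // Pull one slice per iteration instead of the whole table at once
  [chunk]:
  LOAD *;
  SQL SELECT *
  FROM dbo.FactTable
  WHERE PeriodKey = $(vMonth);

  // Store each slice in its own QVD and free the memory before the next pass
  STORE [chunk] INTO [SQL data_$(vMonth).qvd] (qvd);
  DROP TABLE [chunk];

NEXT vMonth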

Also conceivable is not loading this table at all - it is probably a view of the transformation of n other tables - but instead going more directly to the underlying sources.

Beside this, the observed slowdown should not be caused by Qlik but rather by the database, the driver or the network restricting the performance - more or less intentionally, through certain settings and/or some caching logic. You may also take a look at the RAM consumption of the Qlik server, because if the system is forced to swap any data to virtual RAM the performance will usually drop significantly - although just 2 M records shouldn't have such an impact unless the system is already near its maximum workload.