Those are real performance numbers, but they are not guaranteed. What will help is to ensure the QVD read is an OPTIMIZED read and not a STANDARD read. Whether a read drops to standard depends entirely on the transformations you apply (for example, adding a WHERE clause). There is a bit about this in the help, and there is an excellent collection of topics in the next thread from Marcus Sommer. The principles apply to both QlikView and Sense, although the syntax is slightly different in Sense.
Putting the calculations into the SQL clauses helps remove the Qlik-side calculations and logic that would prevent optimized reloads, so that everything is 'baked into' the QVD by the time you rapidly read it during the incremental refresh.
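As a rough sketch of what "baking the calculations into the QVD" can look like (table, field, and connection names here are made up for illustration):

```
// Do the arithmetic and type casting in the database, so the Qlik side
// is just a plain field list. When this QVD is read back later with no
// expressions or WHERE clause, the read can stay OPTIMIZED.
Sales:
SQL SELECT
    OrderID,
    Quantity * UnitPrice AS LineAmount,   // calculation done in SQL, not in Qlik
    CAST(OrderDate AS DATE) AS OrderDate
FROM dbo.Orders;

STORE Sales INTO [lib://Data/Sales.qvd] (qvd);
```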
It's true that the initial seeding of the large QVDs will take time, but the incremental load and the QVD load SHOULD be very quick.
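To illustrate the optimized-vs-standard distinction, here is a minimal sketch (paths and field names are hypothetical). The general rule is that an optimized QVD read tolerates almost no transformation; a single WHERE Exists() on one loaded field is the notable exception:

```
// OPTIMIZED read: a plain field list, no expressions, no WHERE clause.
Orders:
LOAD OrderID, CustomerID, Amount
FROM [lib://Data/Orders.qvd] (qvd);

// Still OPTIMIZED: a single WHERE Exists() on a loaded field is allowed.
// Almost any other WHERE clause or calculated field in the LOAD forces
// the read to fall back to a (much slower) STANDARD read.
FilteredOrders:
LOAD OrderID, CustomerID, Amount
FROM [lib://Data/Orders.qvd] (qvd)
WHERE Exists(OrderID);
```

Watch the reload log: Qlik prints "(qvd optimized)" next to the load when the fast path is used.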
Let me make my question a bit clearer...
Here is an example of incremental load from help.qlik.com:
QV_Table:
SQL SELECT PrimaryKey, X, Y FROM DB_TABLE
WHERE ModificationTime >= #$(LastExecTime)#
AND ModificationTime < #$(BeginningThisExecTime)#;

Concatenate LOAD PrimaryKey, X, Y FROM File.QVD;

STORE QV_Table INTO File.QVD;
In this example:
1. The "Concatenate LOAD PrimaryKey, X, Y FROM File.QVD" statement will load the entire "File.QVD";
2. The "STORE QV_Table INTO File.QVD" statement will re-write the entire "File.QVD" file.
If this file is huge, it will take time to first read it and then rewrite it.
The question is:
How long will such an incremental load take for a 500 GB "File.QVD" when the incremental data is only 10 GB?