Hello all.
Table2 is very large, so I need to read the common ids and the relevant info in batches of 100 rows and store these in a new table, because I don't have the RAM or storage space to load all the tables at once and do incremental loads.
This is what I have come up with so far, but it is not working. Any ideas?
Table1_ids:
LOAD pk as table1_pk;
SQL SELECT pk
FROM public.Table1
ORDER BY `pk` asc;
MaxTab1Pk:
LOAD max(pk) as MaxTab1Pk,
min(pk) as MinTab1Pk;
SQL SELECT pk
FROM public.Table1;
Let MaxTab1Pk = Peek('MaxTab1Pk', 0 , MaxTab1Pk);
Let MinTab1Pk = Peek('MinTab1Pk', 0 , MinTab1Pk);
Set a=$(MinTab1Pk);
//tab2_ids
Do while a<= $(MaxTab1Pk)
LOAD pk as table2_pk,
table1_fk as table1_pk;
SQL select `pk`, `table1_fk` as table1_pk
from public.Table2
where public.Table2.`table1_pk`= public.Table1.`pk`;
Let a=a+1;
//store to a load inline table? a csv?
Loop
If you can't do a full load into memory, then you'd want to store the table to disk in some format. From the Qlik side, QVDs are optimized for loading back into Qlik, which makes them the ideal format. To mock up an example against PostgreSQL, this code is working on my end:
// Offload the min / max to the Database if possible; it'll save you a step
MinMaxTable:
LOAD
maxid,
minid;
SQL SELECT
max(id) AS maxid,
min(id) AS minid
FROM public.test;
LET MaxID = PEEK('maxid', 0 , MinMaxTable); // Store Maximum value
LET MinID = PEEK('minid', 0 , MinMaxTable); // Store Minimum value
DROP TABLE [MinMaxTable]; // Drop the helper table since it isn't needed in future loads
LET a=MinID; // Set A to the minimum. In my case, 1
LET b=a+99; // Set B to 100 more (inclusive) to A
DO WHILE a <= $(MaxID)
// Load Records between A and B
[table]:
LOAD
id AS [id],
t AS [text];
SQL SELECT
"id",
"t"
FROM "public"."test" WHERE "id" >= $(a) AND "id" <= $(b);
STORE [table] INTO [lib://data/loopExample/table_$(a).qvd] (QVD); // Store the table to a QVD
DROP TABLE [table]; // Drop the table. Without the STORE / DROP, the table will auto-concat which puts us back to the starting point of the table not fitting into memory
LET a=a+100; // Increment A to the next 100. In run 1, A will become 101
LET b=a+99; // Increment B to the next 100. In run 1, B will become 200
LOOP
Good luck
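When you later need the data back in Qlik, the individual batch QVDs don't have to be loaded one by one: a wildcard load pulls them all into a single table. This is a sketch assuming the same lib://data/loopExample folder and field names used in the STORE step above:

```qlik
// Load every batch QVD back into one combined table.
// Assumes the folder and naming convention from the loop above (table_1.qvd, table_101.qvd, ...).
Combined:
LOAD
    id,
    text
FROM [lib://data/loopExample/table_*.qvd] (qvd);
```

Loading from QVD this way is typically an optimized load, so reading the batches back is much faster than re-querying the database.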
Hi @Levi_Turner, I figured it out already and did something very similar, although yours looks very well put together. I will keep it in mind for future use 🙂 Very helpful!
Referring to that setting, please have a look at my next thread on this; maybe you could help me.
Thanks in advance!
This community rocks!
@Levi_Turner Hi, I ran your script and it loads as you said, but I only noticed it now: it stores by 100 (1, 101, 201, etc.), while I wanted it to store batches of 100 ids each time, for example read ids 1 to 100, then store, then go from 101 to 200, then store. How can I do this?