Hello Everyone,
The code below works for me on a limited data set, but on a full load the app throws an "Object out of memory" error.
Is there a better / more optimized way to tackle this code?
The granularity of the data is at the month level, and the join condition between the two tables is the composite key below.
Without the left join, the numbers from table 'ABC' do not give the desired results.
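For context, a minimal sketch of the pattern being described - a left join on a composite Month/Cust_id/Brand key built with autonumber(). The table names (ABC, XYZ) and fields come from this thread; the exact original script was not posted, so the SQL and field list here are assumptions:

```qlik
// Hedged sketch only - the poster's actual script was not shared.
ABC:
LOAD
    AutoNumber(Month & '|' & Cust_id & '|' & Brand) as Key,  // composite key
    Month, Cust_id, Brand;
SQL SELECT Month, Cust_id, Brand
FROM ABC;   // ~1.3 billion rows per the thread

LEFT JOIN (ABC)
LOAD
    AutoNumber(Month & '|' & Cust_id & '|' & Brand) as Key,  // same key expression
    Expected_Cust;
SQL SELECT Month, Cust_id, Brand, Expected_Cust
FROM XYZ;
```

At this volume the join is performed in RAM on top of the already-loaded fact table, which is where the memory error comes from.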
You can simply keep the tables as they are;
Qlik will link them via the Key,
and you calculate the count in the UI:
Count(Distinct {<Expected_Cust={"yes"}>} Cust_id)
I know, but in my case the data volume is on the higher side and the app throws the out of memory error anyway.
Plus, without the left join, the count from table 'ABC' does not give the desired output.
It's not clear to me - does the load run into the memory error, or does a UI object afterwards?
The load runs into the memory error.
The load with just a few fields and the GROUP BY inside the SQL looked rather small, and I wouldn't expect memory issues unless the datasets are really huge, going into billions of records. So: how many records are there, and how much free RAM is available? Also make sure that there is no other data in the data model before you perform this join.
Close to 1.3 billion records from table 'ABC'.
There are several conceivable approaches that may reduce RAM consumption, but the most obvious one would be to implement incremental logic.
Personally, I would tend to distribute such a task across at least two layers: the first would slice ABC into YYYYMM periods and store them into QVDs. The same would apply to the XYZ join data, and the join would then be applied in the next layer.
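The first layer described above might look something like this - a loop that pulls one YYYYMM period at a time and stores it as a QVD. The variable name, library path, and field list are illustrative assumptions, not the poster's actual script:

```qlik
// Hedged sketch of layer 1: slice ABC into monthly QVDs.
// Adjust the period list, connection, and fields to your model.
FOR Each vPeriod in '202301', '202302', '202303'   // ... one YYYYMM per iteration

    ABC_Slice:
    LOAD Month, Cust_id, Brand;
    SQL SELECT Month, Cust_id, Brand
    FROM ABC
    WHERE Month = '$(vPeriod)';

    STORE ABC_Slice INTO [lib://QVD/ABC_$(vPeriod).qvd] (qvd);
    DROP TABLE ABC_Slice;   // release RAM before the next slice

NEXT vPeriod
```

Layer 2 would then load and join one period's QVD pair at a time (optionally only for new/changed periods), concatenating the results, so peak RAM is bounded by a single month rather than the full 1.3 billion rows.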
Besides this, you might try removing the autonumber() from the joins, because I'm not sure at which work step autonumber() releases its RAM. Storing the entire ABC as a QVD is also an option, which would mean the autonumber() couldn't remain. Depending on Cust_ID and Brand, you might be able to create a numeric concatenation instead of a string one.
A further reduction might come from filtering the join table against the existing ABC key values - perhaps also in a preceding work step - then loading it as a mapping table and merging the data via:
subfield(applymap())
instead of the join.
Before you go in any direction, it would be helpful to just load ABC and then look at how much RAM is consumed - and, of course, how much is still available - to get a feeling for whether your environment is generally sized suitably for your intended data set.
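A sketch of the subfield(applymap()) pattern mentioned above, assuming a precomputed Key field and QVD paths that are placeholders. The mapping table collapses XYZ to one concatenated string per key; SubField() with two arguments then expands it back into one row per value, which avoids holding both full tables plus the join result in RAM at once:

```qlik
// Hedged sketch - table/field names and paths are assumptions.
MapXYZ:
MAPPING LOAD
    Key,
    Concat(Expected_Cust, '|') as Vals   // one delimited string per key
FROM [lib://QVD/XYZ.qvd] (qvd)
GROUP BY Key;

ABC:
LOAD
    *,
    // ApplyMap fetches the concatenated values; SubField splits
    // them into rows (no-match keys get the empty default).
    SubField(ApplyMap('MapXYZ', Key, ''), '|') as Expected_Cust
FROM [lib://QVD/ABC.qvd] (qvd);
```

Note that the delimiter must be a character that cannot occur inside the mapped values.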
A couple of thoughts.
AutoNumberHash128(Month, Cust_ID, Brand) as Key
may use less memory than AutoNumber.
You mentioned you were not getting correct results when the tables were kept separate. That may be because you have extra rows that are removed by the Left Join. Try keeping them as separate tables, but with a where exists() to filter.
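The where exists() suggestion could look like this - load ABC first, then restrict XYZ to keys that actually occur in ABC, mimicking the row reduction of the left join while letting Qlik associate the tables on Key. Paths and field lists are placeholders:

```qlik
// Hedged sketch - assumes a shared Key field in both QVDs.
ABC:
LOAD Key, Month, Cust_id, Brand
FROM [lib://QVD/ABC.qvd] (qvd);

XYZ:
LOAD Key, Expected_Cust
FROM [lib://QVD/XYZ.qvd] (qvd)
WHERE Exists(Key);   // keep only keys already loaded in ABC
```

The UI expression Count(Distinct {<Expected_Cust={"yes"}>} Cust_id) should then return the same figures as the joined model, without the RAM cost of the join itself.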