Hi,
We are building an app to showcase an email activity report.
I just started working on the app build and I am very new to this, so I will try my best to describe the issue I am facing.
This app contains 4 main tables.
The first three tables are connected using "Campaign_ID" and this is working as expected. Now I would like to connect the "Lead / Users" table to the activity tables using "Lead_ID", as shown in the screenshot.
Concatenation is not an option for me since the data is huge.
Please let me know how I can connect all these tables without creating a synthetic key.
If I interpret the data set correctly, then all of the activity tables contain essentially the same data and only the state differs. In this case I would simply concatenate these tables by harmonizing all field names and data structures - meaning there is only a single ID, Lead_ID, Date field and so on, and the only differentiation between them is an extra Source field which contains the information open/click/send/delivery (a minimal script sketch follows below).
That the data set is huge is rather a further good reason to concatenate the data instead of keeping the tables separate.
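To illustrate, a minimal load-script sketch of this idea - the table names Sends and Opens, the QVD paths and the field names are placeholders, not your actual model:

```
// One harmonized fact table; each source part gets a Source flag.
Activities:
LOAD
    Send_ID     as Activity_ID,
    Lead_ID,
    Campaign_ID,
    Send_Date   as Activity_Date,
    'send'      as Source
FROM [lib://Data/Sends.qvd] (qvd);

Concatenate (Activities)
LOAD
    Open_ID     as Activity_ID,
    Lead_ID,
    Campaign_ID,
    Open_Date   as Activity_Date,
    'open'      as Source
FROM [lib://Data/Opens.qvd] (qvd);

// ... repeat the same pattern for the click and delivery tables.
```

With only one fact table left, the "Lead / Users" table can then be linked over Lead_ID without producing a synthetic key.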
Yes. I followed your suggestion and successfully merged these tables. Everything is functioning smoothly. I have a question, though: with the tables now concatenated, the row count is approximately 31 million, and additional data will be added daily. Is this acceptable, or might it impact loading times or potentially pose issues in the future?
The number of records within a table is only partly important - more relevant is the number of distinct field values within the various fields, and this is largely independent of the kind of data model. The impact of the cardinality is very well explained here: The Importance Of Being Distinct - Qlik Community - 1466796.
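As a hedged sketch of what such a cardinality reduction could look like (field and path names are assumptions): splitting a full timestamp into a date part and a rounded time part replaces millions of distinct values with two small fields.

```
// Split one high-cardinality timestamp into two low-cardinality fields.
Activities:
LOAD
    Activity_ID,
    Date(Floor(Activity_Timestamp)) as Activity_Date,                       // ~365 distinct values per year
    Time(Floor(Frac(Activity_Timestamp), 1/1440), 'hh:mm') as Activity_Time // max. 1440 distinct values
FROM [lib://Data/Activities.qvd] (qvd);
```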
Besides this, it depends on the implementation of the view requirements (good are simple sum() and count(), maybe with some set analysis; rather bad are (nested) if() and aggr() constructs), on the response-time expectations of the users and of course on the available hardware, whether a data set with a few thousand records is already (too) slow or whether the UI is still fast enough with billions of records.
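For example, assuming the Source field from the concatenated table above, the two styles could look like this:

```
// Good: simple aggregation restricted with set analysis.
Count({< Source = {'open'} >} Activity_ID)

// Rather bad: an if() evaluated row by row for all 31 million records.
Count(If(Source = 'open', Activity_ID))
```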
Further considerations might be to load the data with an incremental approach and/or to apply a mixed granularity within the facts: Fact Table with Mixed Granularity - Qlik Community - 1468238.
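A minimal sketch of a classic incremental load, assuming the Activities table from above is stored in a QVD and Activity_ID is unique (all names and paths are placeholders):

```
// Assumption: vLastReload was saved at the end of the previous run,
// e.g. LET vLastReload = Date(Today() - 1);

// 1. Load only the new records from the source.
Activities:
LOAD Activity_ID, Lead_ID, Campaign_ID, Activity_Date, Source
FROM [lib://Data/NewActivities.csv] (txt, utf8, embedded labels, delimiter is ',')
WHERE Activity_Date >= '$(vLastReload)';

// 2. Append the history from the QVD; a single-field Not Exists() keeps the load optimized.
Concatenate (Activities)
LOAD Activity_ID, Lead_ID, Campaign_ID, Activity_Date, Source
FROM [lib://Data/Activities.qvd] (qvd)
WHERE NOT Exists(Activity_ID);

// 3. Store the combined table back for the next run.
STORE Activities INTO [lib://Data/Activities.qvd] (qvd);
```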