For a Replicate task using a HANA source with trigger-based CDC, what is the best way to distribute the same source table across different target databases? Should we just create multiple tasks reading from the same HANA source table?
Qlik Replicate November 2024
Source: HANA S/4
Targets: SQL Server 2019
Hello @Al_gar ,
I'm not entirely sure how the distribution logic is defined, whether it's based on schema, primary key values, or some other criteria.
However, one practical approach is to use Log Stream to capture the changes in a single task and write them to staging files. Then, you can create multiple downstream tasks that read from this staging task and distribute the data to the appropriate target databases.
This method offers better scalability and central control over data distribution logic, especially when multiple targets are involved.
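To make the fan-out pattern concrete, here is a minimal conceptual sketch in Python. This is not Replicate's actual implementation or API; the function names (`stage_changes`, `fan_out`), the JSON file format, and the target names are all illustrative assumptions. It only shows the shape of the Log Stream idea: the source is read once into a staging area, and each downstream task independently consumes the same staged changes for its own target.

```python
# Conceptual sketch of the Log Stream fan-out pattern (illustrative only;
# not Replicate's real staging format or API).
import json
import os
import tempfile

def stage_changes(staging_dir, changes):
    """Single capture task: write source changes once to a staging file."""
    path = os.path.join(staging_dir, "changes_0001.json")
    with open(path, "w") as f:
        json.dump(changes, f)
    return path

def fan_out(staging_dir, targets):
    """Each downstream task reads the same staged files independently."""
    delivered = {t: [] for t in targets}
    for fname in sorted(os.listdir(staging_dir)):
        with open(os.path.join(staging_dir, fname)) as f:
            records = json.load(f)
        for target in targets:  # same staged data goes to every target
            delivered[target].extend(records)
    return delivered

staging = tempfile.mkdtemp()
stage_changes(staging, [{"op": "INSERT", "id": 1}, {"op": "UPDATE", "id": 2}])
result = fan_out(staging, ["sqlserver_target_a", "sqlserver_target_b"])
print({t: len(rows) for t, rows in result.items()})
```

The key point the sketch illustrates: the source database is hit only once by the capture task, however many targets you add downstream.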
Hope this helps.
John.
Hi @john_wang ,
Thanks. Following the Replicate documentation, I got confused for a moment at step 3, "Duplicate the source endpoint and add it to the log stream staging task," because after duplicating the source endpoint connection, the "Read changes from log stream staging folder" check box wouldn't show up:
However, when I created a new endpoint connection instead, the check box did show up:
After adding the source endpoint's details I was able to get the task created and running. Thanks.
@john_wang, I forgot to ask if there's any limitation on the amount of tables that you can load on a single Log Stream task.
Hello @Al_gar ,
Personally, I'm not aware of such a limitation; some of our customers have included several thousand tables in a single task.
Regards,
John.
Hi @Al_gar
All I would add to what @john_wang has shared is that the more tables you have in the staging task, the more disk space you will need for the audit files.
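As a rough way to think about that disk-space point, here is a back-of-envelope estimator. The inputs (average change-record size, change rate, retention window) are placeholder assumptions, not Replicate defaults, and real staging files include headers and compression that this ignores; treat it purely as a sizing starting point.

```python
# Back-of-envelope staging disk estimate (illustrative assumptions only;
# measure your actual workload before provisioning).
def staging_disk_gb(tables, changes_per_table_per_hour,
                    avg_record_bytes, retention_hours):
    """Rough staging-area size: tables x change rate x record size x retention."""
    total_bytes = (tables * changes_per_table_per_hour
                   * avg_record_bytes * retention_hours)
    return total_bytes / 1024**3

# Example: 2,000 tables, 500 changes/table/hour, 512-byte records, 24h retention.
print(round(staging_disk_gb(2000, 500, 512, 24), 1))  # ~11.4 GB
```

Even with conservative numbers, retention time is usually the biggest lever: doubling how long staged changes are kept doubles the space needed.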
Thanks,
Dana
Thanks @john_wang and @Dana_Baldwin.
Thank you for your support! @Al_gar