We are replicating data from our operational system on Db2 z/OS to a data lake on S3 for analytics purposes.
We replicate the data into an S3 bucket so that all the data, and especially the changes, are stored in the data lake. The replication task is defined with Store Changes, and the DDL handling option is set to apply to the change table. We use this option because we want DDL changes to be automatically available in the data lake.
For several reasons, a change to a table on the source can require the table to be dropped, recreated, and then reloaded with data. When this is replicated, the DROP TABLE also deletes all the data for that table in the target. But because we are replicating into a data lake, we do not want to lose this data history. The whole idea of a data lake is to never delete any data.
Is it possible to change the behavior of the task so that when a table is dropped on the source, the data in the target is not deleted?
Similar behavior is already available with Apply Changes, where there is an option to ignore DROP TABLE. Could such an option also be provided for Store Changes?