I have a source table called stringvalue. I would like to replicate that source table to a target table of the same name without any filtering. I also want to replicate that same source table, stringvalue, to a new target table called stringvalue_doc, which will contain a subset of the data in the source table based on a filter. I want to repeat this process for several other target tables based on different filter settings. Is this possible?
Thanks, Dave
Hello Dave, @DaveC
Thanks for reaching out to Qlik Community!
Log Stream enables a dedicated Replicate task to save the data changes from the transaction log of a single source database so that multiple downstream tasks can apply them to multiple targets. If you have several target endpoints while the source is a single one, please consider Log Stream.
Hope this helps.
John.
Thanks John. I'm already using Log Stream, so I know how to replicate a source to two different target databases. In my scenario, I would like to have all of the target tables in the same schema. If I could do this in a single Log Stream task, that would be preferable. If not, is it possible to have a second Log Stream task that writes to the same target database?
Thanks, Dave
Hello Dave, @DaveC
Thanks for the update.
You do not need a second Log Stream task. You can use a Global Transformation rule in the child tasks to rename all tables to a specific schema. A sample:
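(Roughly, the rule in the Global Transformations wizard would look like the following; the exact labels vary by Replicate version, and myschema is just a placeholder for your target schema.)

    Transformation type:   Rename schema
    Transformation scope:  schema name is like %, table name is like %
    Transformation action: rename schema to: myschema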
Regards,
John.
@DaveC You need multiple tasks. Any given source table can only be replicated to a single target table within the same task. The desire to 'duplicate' a single source table keeps coming up over time, but I do not believe it was ever turned into a formal feature request. There could be a lot of resource saving if this were available, but I doubt it makes commercial sense for Qlik.
Since your example talks about a filtered clone not only in the same database but also in the same schema, I have to wonder why you don't 'simply' set up a VIEW on the base table, with the filter, to present the second table. Yes, there would be challenges. For example, the base table may only have columns A, B, C, P, Q, R from the source while the second table needs A, B, D, E, P; for that you would just make the base table itself a view on an intermediate table containing all the columns any view might need. And maybe 'that' base table needs a transformation on column P making it P1, where the second table needs a different, incompatible transformation P2. There, too, you can use a view for both and add both P1 and P2 to the intermediate target, each with its own transformation, as sketched below.
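(A minimal sketch of that layout, using SQLite via Python as a stand-in for the real target; the intermediate table name stringvalue_all, the filter column doc_type, and the concrete column set are all assumptions for illustration.)

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Intermediate target table: the Replicate task lands every column any
    # view might need here, including both transformed variants of P.
    # All names below are hypothetical.
    conn.execute("""
        CREATE TABLE stringvalue_all (
            a TEXT, b TEXT, c TEXT, d TEXT, e TEXT, q TEXT, r TEXT,
            p1 TEXT,       -- P transformed one way by the task
            p2 TEXT,       -- P transformed an incompatible way
            doc_type TEXT  -- hypothetical filter column
        )""")

    # The 'base' table the application queries is itself a view, exposing
    # only the columns the original stringvalue table had.
    conn.execute("""
        CREATE VIEW stringvalue AS
        SELECT a, b, c, q, r, p1 FROM stringvalue_all""")

    # The filtered clone is a second view with its own column set,
    # its own variant of P, and the record filter.
    conn.execute("""
        CREATE VIEW stringvalue_doc AS
        SELECT a, b, d, e, p2 FROM stringvalue_all
        WHERE doc_type = 'doc'""")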
@john_wang "You need NOT a second logstream task. ..." Hmmmm, it seems to me you did not interpret the customer question and clarification correctly, or maybe I missed something.
Hein
Hi Hein,
Thanks for your input. My issue is that the source table has 25 million rows or so, and I want to break it up into multiple tables based on a column in the source table. That way the target tables will be smaller and will return results significantly more quickly than querying the entire table. That is why a view wouldn't solve my problem: it would still be referencing a 25-million-row table.
I'll try using a separate task for each target table I want to derive from the one source table. Hopefully it will be OK with having multiple tasks writing to the same target schema.
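For each task, the per-table filter would just be a record selection condition on that column. Replicate filter expressions use SQLite syntax with $-prefixed column names, so with a hypothetical DOC_TYPE column it would look something like:

    $DOC_TYPE = 'doc'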
Thanks, Dave
Hi @DaveC
If this is a feature you would like to see in the product, please submit a feature request here: https://community.qlik.com/t5/Ideas/idb-p/qlik-ideas
It will be reviewed by our Product Management team. Other users can vote it up as well.
Hope this helps!
Dana
Hi @DaveC , @Heinvandenheuvel
Thank you so much for pointing that out, Hein.
>> @john_wang "You need NOT a second logstream task. ..."
At first glance I thought it was a cascaded Log Stream task, so I said it was unnecessary.
Regards,
John.
Multiple tasks writing to the same target DB is just fine. Obviously the price of doing so is multiple reads of the change logs; with Log Stream those reads may be cheap, and on a better-suited server, but it could still be a disproportionately large cost if the target table(s) have a low-frequency hit rate.
>>>> That is why a view wouldn't solve my problem as it would still be referencing a 25 million row table.
Well, with a half-decent target DB those views could perform really well and NOT feel like a 25-million-row base table, through storage partitioning or a leading <subtablename> prefix on critical indexes.
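(To illustrate the index point with the same Python/SQLite stand-in as above; doc_type is still a hypothetical filter column, and a real target would add native partitioning on top of this.)

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE stringvalue_all (doc_type TEXT, a TEXT, b TEXT)")

    # Lead the critical index with the filter column, so each view's
    # WHERE clause narrows searches to its own slice of the rows.
    conn.execute("CREATE INDEX ix_doc_a ON stringvalue_all (doc_type, a)")

    conn.execute("""
        CREATE VIEW stringvalue_doc AS
        SELECT a, b FROM stringvalue_all WHERE doc_type = 'doc'""")

    # The plan shows an index SEARCH, not a full-table SCAN.
    for row in conn.execute(
            "EXPLAIN QUERY PLAN SELECT a FROM stringvalue_doc WHERE a = 'x'"):
        print(row)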
Hein.