We are looking to implement CDC from Oracle to Kafka, i.e. a CQRS/Event Sourcing-style pattern.
I was reading the "Apache Kafka® Transaction Data Streaming" book.
https://www.qlik.com/us/-/media/files/resource-library/global-us/register/ebooks/eb-apache-kafka-tra...
In the book, on page 26, Table 4-2, it suggests a Kafka partitioning option: "Partition by transaction ID".
My first question: as this is a Qlik/Confluent book, I would expect this to be a supported option in Qlik Replicate. Is "Partition by transaction ID" supported by Qlik? I don't see the capability in the product documentation.
Can this be put on the backlog as a change request please?
https://help.qlik.com/en-US/replicate/Content/Replicate/April%202020/Setup_User_Guide.pdf
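For clarity, the behaviour we are hoping for is that every change event carrying the same transaction ID is routed to the same Kafka partition, so that a consumer sees all of a transaction's events in order on one partition. A minimal sketch of that routing idea (this is our own illustration of the desired behaviour, not Replicate's actual implementation):

```python
import hashlib

def partition_for(transaction_id: str, num_partitions: int) -> int:
    """Map a transaction ID to a Kafka partition so that all change
    events from one transaction land on the same partition (and are
    therefore consumed in commit order)."""
    digest = hashlib.md5(transaction_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# All events from the same transaction map to one partition,
# regardless of which table they touched:
events = [("txn-1001", "orders"), ("txn-1001", "order_lines"), ("txn-1002", "orders")]
for txn_id, table in events:
    print(txn_id, table, "-> partition", partition_for(txn_id, 12))
```

In Kafka terms this would be equivalent to using the transaction ID as the message key (or supplying a custom partitioner), rather than partitioning by table or primary key.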
Other questions:
- On a refresh/reload, is there transaction detail in the messages? Is it the same transaction ID on all messages?
- Again, with regard to multiple table writes within a single transaction, does Qlik Replicate give any guarantees about the ordering/sequence of messages published to Kafka?
- Does Qlik Replicate have the ability to let a consumer know the boundaries of transactions in the events that are produced, similar to Debezium's transaction marker functionality?
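To illustrate the last question: Debezium can emit transaction metadata (an END marker carrying the transaction ID and the number of events in the transaction), which lets a consumer buffer events until a transaction is complete. A rough consumer-side sketch of that pattern, using hypothetical message shapes rather than actual Replicate or Debezium payloads:

```python
from collections import defaultdict

def group_by_transaction(messages):
    """Buffer change events per transaction ID and emit a transaction
    only once its END marker (carrying the expected event count) has
    arrived. Message dictionaries here are illustrative only."""
    buffers = defaultdict(list)   # txn_id -> data events seen so far
    expected = {}                 # txn_id -> event count from END marker
    completed = []
    for msg in messages:
        txn = msg["txn_id"]
        if msg["type"] == "event":
            buffers[txn].append(msg)
        elif msg["type"] == "end":
            expected[txn] = msg["event_count"]
        # Emit once the marker has been seen and all events are buffered.
        if txn in expected and len(buffers[txn]) == expected[txn]:
            completed.append((txn, buffers.pop(txn)))
            del expected[txn]
    return completed

msgs = [
    {"txn_id": "t1", "type": "event", "table": "orders"},
    {"txn_id": "t1", "type": "event", "table": "order_lines"},
    {"txn_id": "t1", "type": "end", "event_count": 2},
]
print(group_by_transaction(msgs))
```

This is the capability we are asking about: without some marker or event count in the stream, a consumer has no reliable way to tell where one source transaction ends and the next begins.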