Xardas
Contributor II

Replicate Kafka message ordering guarantees

Hi everyone,

I have a question about Replicate's default Kafka message ordering guarantees. It seems that by default Replicate uses at-least-once Kafka semantics: https://community.qlik.com/t5/Official-Support-Articles/Kafka-Does-Replicate-guarantee-that-a-messag.... If I understand correctly, this means that for record V the following sequence may appear in a given Kafka topic partition (assume we are using partition by key):

1.V+1

2.V

3.V+1

In this case, even though the duplicate (V+1) is present, the latest value by offset (V+1) is "correct" - it is the current state of the record. Now my question is: after the replication is completed (all the transactions have been processed), could the following sequence also be observed in a given topic partition while using default Replicate settings?

1.V+1

2.V

Here all the records have been successfully sent to Kafka, but in the wrong order (batching?) - the latest value by offset does not represent the latest state of the record, in contrast to the previous example with a duplicate.

I also wonder whether, to make sure that the latest value by offset is always "correct" (once processing is completed), the Replicate user must configure an idempotent Kafka producer (enable.idempotence = true).

 

Regards,

Tomasz

7 Replies
john_wang
Support

Hello Tomasz, @Xardas 

Welcome to the Qlik Community forum and thanks for reaching out here!

Kafka is one of the endpoints that only allows "Transactional apply", meaning it applies each transaction in the order in which it is committed. Here is a detailed explanation, especially for when multiple topics/partitions are involved: Kafka overview.

Having said that, Replicate is able to implement what is called an "idempotent producer", meaning that there is a way to ensure that messages are always delivered successfully, in the right order, and without duplicates.
It is important to take into consideration that the ordering guarantee applies to messages going to the same partition. If messages go to different partitions (because of the TOPIC/KEY/PARTITION settings), then ordering across different partitions is not meaningful.

Because of the nature of Kafka (a messaging endpoint), specific configuration must be used in order to guarantee that messages are always delivered successfully; enable.idempotence=true is necessary.
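To illustrate the idea only - this is a minimal sketch using the Python confluent-kafka wrapper around librdkafka, with placeholder broker/topic/key names, and it is not Replicate's internal code - an idempotent producer keyed per record looks like this:

from confluent_kafka import Producer

# Placeholder broker address and topic name - for illustration only.
conf = {
    "bootstrap.servers": "localhost:9092",
    "enable.idempotence": True,   # messages are produced exactly once and in produce order
    "acks": "all",                # acks=all goes together with idempotence in librdkafka
}
producer = Producer(conf)

# Successive versions of the same record share the same key, so they go to the
# same partition and are appended there in produce order.
for value in ("V", "V+1"):
    producer.produce("example.topic", key="record-1", value=value)
producer.flush()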

Hope this helps.
John.

Xardas
Contributor II
Author

Hello John,

Thanks for your quick reply!

Just to make sure I got this right - in other words, Replicate itself creates the messages that are going to be sent to Kafka in the right order. However, to actually send them to Kafka, librdkafka is used, and it must be configured properly (enable.idempotence=true) to ensure the messages also always arrive in the right order (within the same partition). Correct?

Thanks,

Tomasz

john_wang
Support

Hello @Xardas ,

This is correct.

We may see the line below in the task log file if the target apply mode is set to Batch Apply:

2024-08-02T15:08:41:138406 [TASK_MANAGER ]I: The "Batch optimized apply" option is not available when Kafka is the target endpoint. The "Transactional apply" option will be used instead.

So whatever the apply mode is set to, Qlik Replicate will always work under the "Transactional apply" option when Kafka is the target.

The recommended internal parameters are:

 

rdkafkaProperties = "enable.idempotence=true;acks=all;max.in.flight.requests.per.connection=1"
rdkafkaTopicProperties = "acks=all"
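
As a purely illustrative aside - assuming the Python confluent-kafka wrapper around librdkafka and a placeholder broker address, not Replicate's own code - the semicolon-separated property string above maps onto ordinary librdkafka producer settings like this:

from confluent_kafka import Producer

def parse_rdkafka_properties(prop_string):
    # "enable.idempotence=true;acks=all;..." -> {"enable.idempotence": "true", ...}
    pairs = (item.split("=", 1) for item in prop_string.split(";") if item)
    return {key.strip(): value.strip() for key, value in pairs}

conf = parse_rdkafka_properties(
    "enable.idempotence=true;acks=all;max.in.flight.requests.per.connection=1"
)
conf["bootstrap.servers"] = "localhost:9092"  # placeholder
producer = Producer(conf)  # librdkafka accepts these property values as strings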

 

Note replace "fff" by: 

john_wang_0-1722588530334.png

Hope this helps.

John.

Xardas
Contributor II
Author

Perfect, thanks a lot for explaining!

However, I'm a bit puzzled by the rest of the recommended internal parameters you mentioned.

The librdkafka documentation states that enable.idempotence = true alone is enough to ensure that messages are produced exactly once and in the original produce order. The rest of the parameters will be adjusted automatically (unless modified by the user).

(https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md):

"When set to true, the producer will ensure that messages are successfully produced exactly once and in the original produce order. The following configuration properties are adjusted automatically (if not modified by the user) when idempotence is enabled: max.in.(...).requests.per.connection=5 (must be less than or equal to 5), retries=INT32_MAX (must be greater than 0), acks=all, queuing.strategy=fifo."

I'm puzzled why it is recommended to limit max.in.flight.requests.per.connection to 1 instead of using the default of 5 that applies once enable.idempotence = true is set. Based on the librdkafka docs, it seems that Kafka's ordering guarantees are kept as long as this parameter is <= 5 (with enable.idempotence = true). I guess reducing it to 1 could be an unnecessary performance hit.

Also, I believe setting acks=all would be done automatically when using enable.idempotence = true (it is not necessary to set it explicitly).
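
To make the comparison concrete, here is a small sketch of the two configurations I am comparing (again just plain librdkafka-style properties as used by the Python confluent-kafka wrapper, with a placeholder broker address). Per the librdkafka documentation both should preserve per-partition ordering; the second simply pins the values explicitly and limits in-flight requests to 1:

from confluent_kafka import Producer

# Relying on librdkafka's automatic adjustments: with idempotence enabled it
# sets acks=all, retries=INT32_MAX and max.in.flight.requests.per.connection=5.
minimal_conf = {
    "bootstrap.servers": "localhost:9092",  # placeholder
    "enable.idempotence": True,
}

# The explicitly pinned variant recommended above; max.in.flight=1 allows only
# one outstanding request at a time, which may cost some throughput.
explicit_conf = {
    "bootstrap.servers": "localhost:9092",  # placeholder
    "enable.idempotence": True,
    "acks": "all",
    "max.in.flight.requests.per.connection": 1,
}

producer = Producer(explicit_conf)  # per-partition ordering is preserved either way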

Thanks,

Tomasz

john_wang
Support

Hello Tomasz, @Xardas 

Thanks for the update.

From the Kafka docs, yes, you are right. However, if you only set enable.idempotence = true and leave the other properties at their default values, you might get the error below:

2024-08-02T21:20:27:324266 [TARGET_APPLY ]E: Failed to create Kafka handle: `acks` must be set to `all` when `enable.idempotence` is true. [1020401] (kafka_client.c:874)

It's the same in all Qlik Replicate major versions, which is why I suggest setting them explicitly even though the default value is "all"/"-1".

Hope this helps.

John.

Xardas
Contributor II
Author

Interesting, so acks=all must be set explicitly - thanks!

What about max.in.flight.requests.per.connection - why is it recommended to limit it to 1 instead of using the default value (5) for an idempotent producer?

For reference, it seems that this issue https://issues.apache.org/jira/browse/KAFKA-5494 explains how Kafka preserves its ordering guarantees (when idempotence is enabled) even with max.in.flight.requests.per.connection between 2 and 5.

Thanks,

Tomasz

john_wang
Support

Hello @Xardas ,

Thanks for sharing - this is very helpful information. I'd suggest opening a support ticket, and we will confirm with the CF/R&D team for you.

Regards,

John.
