
Official Support Articles

Search or browse our knowledge base to find answers to your questions, ranging from account questions to troubleshooting error messages. The content is curated and updated by our global Support team.

Kafka - Does Replicate guarantee that a message is delivered only once and in exactly the right order?

Community Manager


[publishing on behalf of Global Support]

Replicate guarantees that messages are delivered to Kafka at least once.

Each message contains a “change sequence” field (the same as in the CT tables), which is monotonically increasing. If a message is produced to Kafka more than once, the customer can use this field to detect and ignore the duplicate.
Replicate produces messages in batches, and different batches may be sent to different broker machines (depending on which broker is the leader of which partition at a given time).

It is possible that record X is produced to broker B1 and record X+1 is produced to broker B2.

Broker B2 might respond quickly and return an acknowledgment for X+1, while broker B1 might be slower (or down), so record X goes into recovery or fails.
In that case, the Replicate task will resume sending the stream of records as follows:

  • Replicate v6.2 and lower -- from the earliest record that failed (record X).
  • Replicate v6.3 and higher -- from the beginning of the failed transaction.
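The retry behavior for v6.2 and lower can be illustrated with a toy simulation (a simplified model, not Replicate's actual implementation): if record X fails while X+1 was already acknowledged, resending from X re-produces X+1 as a duplicate.

```python
def resend_from_failure(batch, acked):
    """v6.2-style retry: resend everything starting from the
    earliest record in the batch that was not acknowledged."""
    first_failed = min(i for i, rec in enumerate(batch) if rec not in acked)
    return batch[first_failed:]

batch = ["X", "X+1"]
acked = {"X+1"}  # B2 acknowledged X+1; B1 never acknowledged X
resent = resend_from_failure(batch, acked)
print(resent)  # → ['X', 'X+1'] — X+1 is now duplicated downstream
```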


  • It is possible that some of the subsequent records (such as X+1) will be duplicated on Kafka.
  • As mentioned above, the customer can easily filter out these duplicates, if any exist.
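The consumer-side filtering described above can be sketched as follows (a minimal illustration; the field name `change_seq` and the message shape are assumptions — in practice the change sequence appears inside the Replicate message payload):

```python
def filter_duplicates(messages):
    """Drop messages whose change sequence is not strictly greater
    than the highest sequence already seen (i.e. redelivered records)."""
    last_seq = None
    for msg in messages:
        seq = msg["change_seq"]  # assumed field name for the change sequence
        if last_seq is not None and seq <= last_seq:
            continue  # duplicate (re-sent) record: ignore it
        last_seq = seq
        yield msg

# Example: the record with sequence 2 is delivered twice after a retry
stream = [
    {"change_seq": 1, "data": "a"},
    {"change_seq": 2, "data": "b"},
    {"change_seq": 2, "data": "b"},  # duplicate produced on retry
    {"change_seq": 3, "data": "c"},
]
unique = list(filter_duplicates(stream))
print([m["change_seq"] for m in unique])  # → [1, 2, 3]
```

This relies on the sequence being monotonically increasing per partition, so consumers should apply it per partition rather than across the whole topic.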
Creator II

Does Replicate support idempotent producer?

Not applicable

@Melissa_Potvin  I have a question or two concerning this and how the Replicate product may have evolved.

We are still running some replication to Kafka on Replicate version 5.5, and we identified cases where change records would be dropped if a leader change occurred in the cluster. Replicate would not see this as a failure and would not re-send the batch.

We had to employ an internal parameter to force acknowledgment (rdKafkaTopicProperties set to acks=all) to guarantee delivery. We still use it to this day.

Has this scenario changed in newer versions of Replicate?


In 6.4xx and newer, you can change the value by doing these steps:

1. Open the Kafka endpoint and click on the Advanced tab.

2. Click on Internal Parameters.

3. Add an internal parameter named "rdkafkaTopicProperties" and give it a value of "request.required.acks=all".
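For context, `request.required.acks` is a standard librdkafka property (the "rdkafka" prefix in the parameter name suggests Replicate's Kafka endpoint is built on librdkafka), and `acks=all` means the partition leader waits for the full in-sync replica set to acknowledge each write before confirming delivery, so a leader change cannot silently drop an acknowledged record. The equivalent setting in a hand-written producer would look like this (a sketch only; the broker address is a placeholder):

```python
# Producer configuration equivalent to request.required.acks=all.
# With acks=all, a record is only confirmed once every in-sync
# replica has it, protecting against loss on leader failover.
producer_conf = {
    "bootstrap.servers": "broker1:9092",  # placeholder address
    "acks": "all",  # same semantics as request.required.acks=all
}

# With the confluent-kafka client this dict would be passed directly:
#   from confluent_kafka import Producer
#   producer = Producer(producer_conf)
print(producer_conf["acks"])  # → all
```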


Is the above still valid with the latest 2021.11 version as well?

Version history
Last update: 2022-01-20 10:52 AM