Mar 26, 2020 10:38:05 AM
[publishing on behalf of Global Support]
Replicate guarantees that messages are delivered to Kafka at least once.
Each message contains a “change sequence” field (the same value as in the CT tables), which is monotonically increasing. If a message is produced to Kafka more than once, the consumer can detect the duplicate by its change sequence and ignore it.
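Consumer-side deduplication based on this field can be sketched as follows. This is an illustrative example, not Replicate code; the field name `change_sequence` is an assumption standing in for however the change sequence appears in your message schema, and it assumes in-order delivery within a partition.

```python
# Sketch: drop re-delivered messages using the monotonically
# increasing change-sequence field. The "change_sequence" key is
# illustrative; the real attribute name depends on your message schema.

def dedupe(messages):
    """Yield each change exactly once, skipping at-least-once re-deliveries."""
    last_seen = None
    for msg in messages:
        seq = msg["change_sequence"]
        if last_seen is not None and seq <= last_seen:
            continue  # duplicate produced by a retry; already processed
        last_seen = seq
        yield msg

# Example stream in which the second change was produced twice.
stream = [
    {"change_sequence": 1, "data": "a"},
    {"change_sequence": 2, "data": "b"},
    {"change_sequence": 2, "data": "b"},  # duplicate from a retry
    {"change_sequence": 3, "data": "c"},
]
unique = list(dedupe(stream))
```

Because the sequence only ever increases, any message whose sequence is not greater than the last one processed must be a re-delivery and can be safely skipped.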
Replicate produces messages in batches, and different batches may be sent to different broker machines, depending on which broker is the leader of which partition at a given time.
It is possible that record X is produced to broker B1 and record X+1 is produced to broker B2.
Broker B2 might respond quickly and return an acknowledgment for X+1, while broker B1 might be slower (or down), so record X enters recovery or fails.
In that case, the Replicate task re-sends the stream of records starting from record X, so records that were already acknowledged (such as X+1) may be delivered again.
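The resulting delivered stream can be illustrated with a toy simulation. This is not Replicate code, just a sketch of the scenario described above: the record at `failed_index` (record X) is not acknowledged, a later record (X+1) already was, and recovery re-sends from X onward.

```python
# Toy simulation of the failure scenario: record X goes to broker B1,
# record X+1 to broker B2. B2 acks, B1 fails, and the task re-sends
# from X onward -- so X+1 reaches Kafka twice (at-least-once delivery).

def replay_after_failure(records, failed_index):
    """Delivered stream when the record at `failed_index` is not acked."""
    delivered = records[:failed_index]       # acknowledged before the failure
    delivered += records[failed_index + 1:]  # already acked by other brokers (B2)
    delivered += records[failed_index:]      # re-sent from record X onward
    return delivered

# Records "X" and "X+1" as in the scenario above.
delivered = replay_after_failure(["r1", "r2", "X", "X+1"], failed_index=2)
```

The simulation shows why the change-sequence field matters: the re-sent portion duplicates records the brokers had already accepted, and the consumer must be able to recognize and discard them.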
NOTES
Does Replicate support idempotent producer?
@Melissa_Potvin I have a question or two about this and about how the Replicate product may have evolved since.
We are still running some replication to Kafka on Replicate version 5.5, and we identified cases where change records would be dropped if a leader change occurred in the cluster. Replicate would not see this as a failure and would not re-send the batch.
We had to employ an internal parameter to force acknowledgment (rdKafkaTopicProperties set to acks=all) to guarantee delivery. We still use it to this day.
Has this scenario changed in newer versions of Replicate?
In 6.4xx and newer, you can set this via an internal parameter on the endpoint. To change the value:
1. Open the Kafka endpoint and click the Advanced tab.
2. Click Internal Parameters.
3. Add an internal parameter named "rdkafkaTopicProperties" and give it the value "request.required.acks=all".
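The value passed through this internal parameter uses librdkafka's property syntax; `request.required.acks` is the topic-level librdkafka property (aliased as `acks`), so the setting amounts to:

```properties
# Topic-level librdkafka property applied via rdkafkaTopicProperties:
# wait for all in-sync replicas to acknowledge before the produce succeeds.
request.required.acks=all
```

With `all`, a leader change can no longer silently drop a batch that only the old leader had seen, at the cost of slightly higher produce latency.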
Does the above also apply to the latest version, 2021.11?