Hi,
We are using Kafka as a target, and sometimes Kafka throws timeout errors and policy violation errors when it cannot handle the load. When this happens, Qlik fails to deliver the message. Is there a way to retry sending the message to Kafka when Qlik receives an error from Kafka?
Hello @gseckin ,
Thanks for reaching out to Qlik Community!
Could you please try adding the internal parameter rdkafkaTopicProperties to the Kafka target endpoint, and setting its value to:
request.timeout.ms=240000;message.timeout.ms=1200000;
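As described above, the parameter value is a semicolon-separated list of librdkafka properties that Replicate passes through to the Kafka producer. A minimal sketch of what that string encodes (the parser here is purely illustrative, not part of Replicate):

```python
def parse_rdkafka_properties(value: str) -> dict:
    """Split a 'key=value;key=value;' string into a property dict."""
    props = {}
    for pair in value.strip().strip(";").split(";"):
        key, _, val = pair.partition("=")
        props[key.strip()] = val.strip()
    return props

props = parse_rdkafka_properties(
    "request.timeout.ms=240000;message.timeout.ms=1200000;"
)
# request.timeout.ms: how long the producer waits for a broker response
# message.timeout.ms: total time a message may remain queued and retried
#                     before librdkafka reports a delivery failure
print(props)
```

Raising message.timeout.ms gives librdkafka a longer window to retry internally before the error surfaces to Replicate.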
Hope this helps.
John.
Thanks, I'll try that. Can we also define a max throughput for Kafka? Especially during full loads, Kafka cannot handle the load, so we need to decrease the max throughput to Kafka.
Hi @john_wang,
Is there also a retry mechanism for Kafka that we can use for specific errors?
Hi @gseckin ,
In the task's error handling settings, the retry behavior can be customized, for example:
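The retry policy those settings configure can be sketched as a simple retry-with-interval loop. Everything here is illustrative: `send` stands in for a delivery attempt, and the retryable error types are an assumption, not Replicate's actual internals.

```python
import time

def send_with_retries(send, max_retries=3, interval_s=5.0,
                      retryable=(TimeoutError,)):
    """Retry a delivery attempt up to max_retries times on transient errors.

    Non-retryable errors propagate immediately; retryable ones are
    re-attempted after waiting interval_s seconds.
    """
    for attempt in range(max_retries + 1):
        try:
            return send()
        except retryable:
            if attempt == max_retries:
                raise  # retries exhausted, surface the error
            time.sleep(interval_s)
```

Replicate's own settings play the roles of `max_retries` and `interval_s` here; only errors classified as recoverable are retried.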
Hope this helps.
John.
Hi @john_wang ,
Thanks! And what about setting a max throughput for Kafka? Can we control the maximum amount of data per minute or second that Qlik pushes to Kafka?
Thanks
Hello @gseckin ,
There is no such setting in the Qlik Replicate Kafka endpoint. However, there are other options outside of Qlik Replicate:
- linger.ms: This controls how long the producer waits for more records before sending a batch. Increasing it can reduce the throughput but increase batching efficiency.
- batch.size: This controls the maximum size of a batch that can be sent. Reducing the batch size limits the throughput.
- quota.producer.default: Kafka has a built-in quota mechanism for throttling producers. This allows you to set throughput limits on a per-client or per-IP basis.
- If the network between your producers and the Kafka broker is the bottleneck, you can throttle network bandwidth at the OS level using tools like tc on Linux.
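A back-of-envelope sketch of the first two producer-side knobs. The numbers are illustrative, and the bound only holds in the worst case where every batch waits the full linger interval; in practice batches that fill up early are sent immediately, and actual throughput also depends on partition count and compression.

```python
# Hypothetical producer settings for throttling (values are examples only).
producer_config = {
    "linger.ms": 100,      # wait up to 100 ms to accumulate a batch
    "batch.size": 65536,   # cap each batch at 64 KiB
}

# If every batch waited the full linger interval before sending, the
# sustained rate would be bounded by roughly batch.size / linger:
max_bytes_per_sec = (
    producer_config["batch.size"] / (producer_config["linger.ms"] / 1000)
)
print(max_bytes_per_sec)  # 655360.0 bytes/s, i.e. about 640 KiB/s
```

Shrinking batch.size or growing linger.ms lowers this bound, which is the throttling effect described above.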
By combining producer throttling, broker quotas, and potentially network-level restrictions, you can efficiently limit Kafka's throughput to meet your requirements.
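For the broker-side quota option, a sketch using Kafka's own kafka-configs tool; the client id `my-replicate-producer` and the bootstrap address are placeholders you would replace with your own values.

```shell
# Throttle one producer client to ~1 MiB/s via a broker-side quota.
kafka-configs.sh --bootstrap-server localhost:9092 \
  --alter \
  --entity-type clients --entity-name my-replicate-producer \
  --add-config 'producer_byte_rate=1048576'
```

The quota is enforced by the broker, so it caps the producer regardless of how Replicate's endpoint is configured.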
Hope this helps.
John.