
Kafka Target: Topic Authorization failed after Broker restart or failure
Hi Qlik Team and Community,
When the Kafka cluster restarts or a broker (hosting the Kafka metadata server) fails, the Qlik Replicate task fails with this error:
Stream component 'st_0_Kafka_ACC_v3' terminated
Stream component failed at subtask 0, component st_0_Kafka_ACC_v3
Error executing command
Failed to produce kafka message with record id <94941> to partition <1> in topic 'syrius.telephone.0'. Broker: Topic authorization failed
Kafka: Message delivery failed: Broker: Topic authorization failed.
For some period the metadata server is not reachable, but it should come back up after a while. In this case the task should keep trying to connect, but instead it fails. We receive the configured email notification for the failure:
Mail "[ACC] Syrius_Kafka_v3 returned a non-retryable error" sent successfully (notification_manager.c:1916)
Maybe it is a matter of definition, but in this case the error should be treated as recoverable/retryable.
There are some hints on this failure:
First, verify that all brokers are running correctly. The error can occur if one or more brokers are down, preventing the producer from connecting.
If everything is running fine, increase the request.timeout.ms value, as it directly controls the timeout used by the client (producer). The client waits for the specified period for a response from the server (broker); if the client does not have enough time to send all requested data, it simply times out, and if the server has to wait longer than the specified time, it times out as well. (A producer configuration sketch follows after these hints.)
Next, try increasing retry.backoff.ms. This value sets the waiting time between reconnection attempts by the client; giving the server more time to respond could help clear the error.
If the producer is a third-party software or system, also check max.block.ms, because it likewise determines how long the client waits. It should match the request.timeout.ms value. You can try decreasing max.block.ms from its default of 60000.
Lastly, if none of the above clears the error, make sure that all clients and servers use the same software version, including the version used to build the project JAR, the version used to test the solution, and the version installed on your server.
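For reference, here is a minimal sketch of how these settings map onto a plain Apache Kafka Java producer; the broker addresses and values are only illustrative assumptions, not recommendations:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerTimeoutSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker list; replace with the real cluster addresses.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // How long the client waits for a broker response before retrying or failing a request.
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "60000");
        // Pause between retry attempts, giving the broker more time to recover.
        props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, "1000");
        // Upper bound on how long send() and metadata fetches may block the caller.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "60000");
        // Keep retrying transient errors instead of failing immediately.
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.toString(Integer.MAX_VALUE));

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("syrius.telephone.0", "key", "value"));
            producer.flush();
        }
    }
}

The open question is how to pass equivalent settings through the Qlik Replicate Kafka endpoint, which is what question 2 below asks.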
1) Has anyone already faced this issue and found a solution?
2) How can we add a Kafka producer parameter (e.g. request.timeout.ms) in Qlik Replicate?
Furthermore, there are some other hints, such as an "AuthorizationException RetryInterval", but that is probably Spring Boot specific, since this parameter does not appear in the Kafka or Confluent documentation.
3) Would an "AuthorizationException RetryInterval" possibly be an option (in the future) for the Qlik Kafka producer?
Best Regards,
Andreas
Accepted Solutions

Here is an article that might help with your issue; please check:
How to configure Kafka producer properties? - Qlik Community - 1730055
rdkafkaProperties
queue.buffering.max.ms=1000;message.send.max.retries
rdkafkaTopicProperties
acks=all;message.timeout.ms=600000
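If the retry behaviour from the question also needs tuning, the same semicolon-separated format should accept other librdkafka settings as well, assuming the endpoint passes them straight through to librdkafka as the linked article describes. The values below are illustrative only, not recommendations:
rdkafkaProperties
queue.buffering.max.ms=1000;message.send.max.retries=10;retry.backoff.ms=1000
rdkafkaTopicProperties
acks=all;message.timeout.ms=600000;request.timeout.ms=60000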
Thank you,
