RichJ
Contributor III

Reached Time out while waiting for acks from kafka

I have a table with a CLOB column to replicate from an MS SQL Server database to Confluent Kafka in Azure. The CLOB can be up to 1 MB in size, but when I use "Limit LOB size to (KB) = 32K" or above, the replication task fails with the following error:

Task 'KAFKA_TGT_MS_SRC' encountered a fatal error (repository.c:5794)
00014612: 2022-02-07T11:06:56 [TARGET_LOAD ]E: Reached Time out while waiting for acks from kafka. [1020401] (queue_utils.c:158)
00014612: 2022-02-07T11:06:56 [TARGET_LOAD ]E: Handling End of table 'dbo'.'Source_data' loading failed by subtask 1 thread 1 [1020401] (endpointshell.c:2977)
00014612: 2022-02-07T11:06:56 [TARGET_LOAD ]E: Error executing data handler [1020401] (streamcomponent.c:1998)
 
The above error does not occur when using "Limit LOB size to (KB) = 16K"; however, the CLOB column data is then truncated, even with compression enabled.
 
Thanks for the help,
Richard
2 Replies
Heinvandenheuvel
Specialist III

I know nothing specific about loading to Kafka, but in another recent post, also by @RichJ, he mentions "batch.size and linger.ms for kafka target", which according to @lyka can be set through the internal parameter rdkafkaProperties. That sounds relevant. What are those values for this test?
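For what it's worth, my understanding is that rdkafkaProperties takes a semicolon-separated list of librdkafka producer property=value pairs. A sketch of what that might look like (the exact property values here are made-up examples, not recommendations):

```
rdkafkaProperties = batch.size=1048576;linger.ms=100
```

batch.size and linger.ms are real librdkafka/Kafka producer properties; whether these particular values help with the ack timeout would need testing against your broker.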

In the task design there might be a relevant parameter under Full Load - Tuning: "Commit rate during full load". What is the selected value there (default 10000)? If it is NOT the default, you can find it in the exported task JSON as "max_transaction_size" under "target_settings". Maybe try reducing it to, for example, 1000, or 1234 just to try? Any guess as to why I might suggest 1234? Well, it is a value which does not occur in nature, so to speak, so if you use it you can quickly find it again with a grep/findstr through the JSON files. There are just too many '1000's out there 🙂
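To illustrate the "1234 as a sentinel" trick: once the exported task JSON contains that value, a plain grep finds it immediately. A minimal sketch (the JSON excerpt below is a hypothetical fragment of an exported task, not a real export):

```shell
# Create a hypothetical exported-task JSON fragment to search through
cat > /tmp/sample_task.json <<'EOF'
{
  "target_settings": {
    "max_transaction_size": 1234
  }
}
EOF

# The uncommon value 1234 is trivial to find back; 1000 would match everywhere
grep -rn '"max_transaction_size": 1234' /tmp/sample_task.json
```

On Windows, `findstr "1234" *.json` does the same job.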

hth,

Hein

SwathiPulagam
Support

Hi @RichJ ,

 

It seems like the issue is with your Kafka broker server's performance.

Please add the following internal parameters to your Kafka endpoint:

set internal parameter resultsWaitMaxTimes=1000 and resultsWaitTimeoutMS=20000
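As a sketch, these would be entered one per row under the endpoint's Advanced tab > Internal Parameters (the tabular layout below is illustrative; the parameter names and values come from the instruction above):

```
Parameter name         Value
resultsWaitMaxTimes    1000
resultsWaitTimeoutMS   20000
```

My reading is that one controls how many times Replicate re-checks for the broker's acknowledgment and the other the wait interval in milliseconds, so together they extend how long the task waits before raising the "Reached Time out while waiting for acks" error; treat that interpretation as an assumption and verify against the product documentation.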

 

NOTE: You can tune these values based on the timeout behavior you observe.

 

Thanks,

Swathi