I have a table with a CLOB column to replicate from an MS SQL Server database to Confluent Kafka in Azure. The CLOB can be up to 1 MB, but when I set "Limit LOB size to (KB)" to 32 K or above, the replication task fails with the following error:
I know nothing specific about loading to Kafka, but in another recent post, also by @RichJ, he mentions "batch.size and linger.ms for kafka target", which according to @lyka can be set through the internal parameter rdkafkaProperties. That sounds relevant. What are those values for the test?
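As a rough sketch of what that could look like (the exact value format and these numbers are assumptions on my part; check the documentation for your Replicate version and your Kafka setup), the rdkafkaProperties internal parameter on the Kafka endpoint would carry librdkafka producer settings, e.g.:

```
rdkafkaProperties = batch.size=65536;linger.ms=100
```

Both batch.size and linger.ms are standard librdkafka producer properties, but the right values depend entirely on your broker and message sizes.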
In the task design there might be a relevant parameter under Full Load > Tuning: "Commit rate during full load". What is the selected value there (default 10000)? If it is NOT the default, you can find it in the exported task json as "max_transaction_size" under "target_settings". Maybe try reducing it to, for example, 1000 or 1234, just to try? Any guesses as to why I might suggest 1234? Well, it is a value which does not occur in nature, so to speak, so if you use that you can quickly find it back with a grep/findstr in the json files. There are just too many '1000's out there 🙂
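The grep trick above can be sketched like this (the file name demo_task.json is made up for the demo; in practice you would grep the task json you exported from the tool):

```shell
# Stand-in for an exported task json carrying the distinctive
# commit rate 1234 (normally produced by exporting the task).
printf '{"target_settings": {"max_transaction_size": 1234}}\n' > demo_task.json

# List which json files contain the distinctive setting;
# a value like 1234 is unlikely to collide with anything else.
grep -l '"max_transaction_size": 1234' demo_task.json
```

On Windows, findstr with the same search string does the equivalent job.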
hth,
Hein
Hi @RichJ ,
It seems the issue is with your Kafka broker server's performance.
Please add the below internal parameters to your Kafka endpoint:

resultsWaitMaxTimes=1000
resultsWaitTimeoutMS=20000
NOTE: You can tune these values based on the timeout behavior you observe.
Thanks,
Swathi