Hi there,
I'm replicating a table with a CLOB column to Kafka. The upper limit of the CLOB data is about 150 KB, and I tried 200 KB, 500 KB, and 1000 KB for "Limit LOB size to (KB)", but the CLOB data always gets truncated in the Kafka topic, which has max.message.bytes = 2097164.
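For reference, this is how I'm confirming the topic's setting - a minimal sketch using the confluent-kafka Python client, where the broker address and topic name are placeholders for our actual environment:

# Minimal sketch: inspect a topic's max.message.bytes with confluent-kafka.
# "localhost:9092" and "clob_topic" are placeholders for illustration.
from confluent_kafka.admin import AdminClient, ConfigResource

admin = AdminClient({"bootstrap.servers": "localhost:9092"})
resource = ConfigResource(ConfigResource.Type.TOPIC, "clob_topic")

# describe_configs() returns a dict of {resource: future}.
futures = admin.describe_configs([resource])
configs = futures[resource].result()

print(configs["max.message.bytes"].value)  # prints "2097164" in our case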
Thanks for help,
Richard
Hi Richard,
It is not advisable to use a large LOB size (>10 MB), since the buffers are limited.
The buffer limit is 10 MB, and it is hardcoded at the product level. Because increasing large buffers in Kafka affects performance, we restricted it to 10 MB.
If the CLOB value is less than 10 MB, then you can set the same value for "Limit LOB size to (KB)" and max.message.bytes. If the value is more than 10 MB, it will be truncated.
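If the topic-level limit needs to be raised to match the LOB size, something like the sketch below works with the confluent-kafka admin API (the topic name and size are placeholders; the same change can also be made with the kafka-configs CLI):

# Minimal sketch: raise a topic's max.message.bytes to cover the LOB size.
# "localhost:9092" and "clob_topic" are placeholders for illustration.
from confluent_kafka.admin import AdminClient, ConfigResource

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# 10 MB topic limit, matching the product's hardcoded buffer ceiling.
resource = ConfigResource(
    ConfigResource.Type.TOPIC,
    "clob_topic",
    set_config={"max.message.bytes": str(10 * 1024 * 1024)},
)

# alter_configs() returns {resource: future}; result() raises on failure.
admin.alter_configs([resource])[resource].result()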
On SP06 of QR v2021.5, we added an internal property for exceeding the 10 MB message size limitation when loading Avro-formatted messages into Kafka.
To use it, enable the 'fastAvroMaxBufferSizeMB' internal parameter with the requested message size (up to 100 MB), and in the Kafka cluster installation folder, under /config, set the parameters below in the server.properties file:
message.max.bytes=100001200
replica.fetch.max.bytes=100001200
Setting these parameters simply ensures the broker does not impose its own limit on top of our internal parameter.
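Once the broker side is configured, a quick end-to-end check is to push one oversized message through a standalone producer. Here is a sketch with the confluent-kafka Python client; the broker address, topic name, and payload size are only illustrative. Note that librdkafka enforces its own producer-side message.max.bytes, which must be raised to match the broker:

# Minimal sketch: verify the broker now accepts messages beyond the old limit.
# Broker address, topic name, and payload size are placeholders.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",
    # librdkafka also enforces a producer-side cap; raise it to match the broker.
    "message.max.bytes": 100001200,
})

payload = b"x" * (20 * 1024 * 1024)  # ~20 MB test message

def on_delivery(err, msg):
    print("delivery failed:" if err else "delivered to:", err or msg.topic())

producer.produce("clob_topic", value=payload, callback=on_delivery)
producer.flush()  # block until the delivery report arrives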
Thanks,
Swathi
Thanks Swathi for your reply.
JSON messages can be sent to Kafka completely now, but we are getting tons of "[TARGET_LOAD ]W: rdkafka error: (code=-184) 'Local: Queue full'" warnings, and the loading process is now significantly slower. From searching around, some people suggest tuning the Kafka producer (i.e., increasing linger.ms, buffer sizes, etc.). In our case, the Kafka producer is the Qlik application, and I don't see where those Kafka producer config parameters can be tuned.
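For context, this is what that tuning looks like in a standalone librdkafka producer; whether and how Qlik Replicate exposes these settings is exactly my question. In the confluent-kafka Python client, "Local: Queue full" surfaces as a BufferError, and the usual remedy is to enlarge the local queue and drain it with poll(). All names and values below are illustrative:

# Minimal sketch: librdkafka producer tuning for "Local: Queue full" (-184).
# Broker address, topic name, and values are illustrative only.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",
    "linger.ms": 50,                         # batch messages for up to 50 ms
    "queue.buffering.max.messages": 500000,  # enlarge local queue (default 100000)
    "queue.buffering.max.kbytes": 2097152,   # ~2 GB local queue byte limit
})

def send(topic, value):
    while True:
        try:
            producer.produce(topic, value=value)
            return
        except BufferError:
            # Local queue full: serve delivery reports to drain it, then retry.
            producer.poll(0.5)

for i in range(1_000_000):
    send("clob_topic", b"payload-%d" % i)
producer.flush()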
Thanks for help,
Richard