
Kafka endpoint - message size considerations
The most important parameter affecting message size in Kafka is the BROKER parameter message.max.bytes: this is the largest record batch size the broker will accept (after compression, if compression is enabled).
Replicate uses the rdkafka library to act as a producer, and rdkafka also has a parameter called message.max.bytes (default value 1000000). In the Replicate UI, you can set "Message Maximum Size" under the Advanced tab of the Kafka endpoint.
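As an illustration only, the sketch below shows how a librdkafka-based producer (here via the confluent-kafka Python wrapper, not Replicate itself) carries this client-side message.max.bytes setting; the broker address and topic name are placeholders.

```python
# Sketch of a librdkafka-based producer with the client-side size limit set.
# This is not Replicate's code; broker address and topic are placeholders.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",  # assumed broker address
    "message.max.bytes": 1000000,           # librdkafka default, shown explicitly
})

# One message per change event, as Replicate does (see the note at the end).
producer.produce("example-topic", value=b'{"op": "INSERT", "data": {}}')
producer.flush()
```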
rdkafka batches messages before sending them to the broker. By default, a batch contains 10000 messages; the batch size can be changed with the rdkafka parameter batch.size.
Both rdkafka parameters, message.max.bytes and batch.size, can be changed through the Replicate internal parameter rdkafkaProperties, for example: rdkafkaProperties=message.max.bytes=100000;batch.size=10000
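For illustration, the snippet below shows how a semicolon-separated property string of that form decomposes into individual librdkafka properties; this is a sketch of the format only, not Replicate's internal parsing code.

```python
# Sketch: splitting a "key=value;key=value" property string of the kind used
# by the rdkafkaProperties internal parameter (illustration, not Replicate code).
def split_properties(value: str) -> dict:
    props = {}
    for pair in value.split(";"):
        key, _, val = pair.partition("=")
        props[key.strip()] = val.strip()
    return props

print(split_properties("message.max.bytes=100000;batch.size=10000"))
# -> {'message.max.bytes': '100000', 'batch.size': '10000'}
```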
The combination of the rdkafka parameters message.max.bytes and batch.size cannot exceed the broker parameter message.max.bytes; otherwise you will get the error "Kafka: Message delivery failed: Broker: Message size too large". That is:
rdkafka( compressed( message.max.bytes x batch.size ) ) <= broker( message.max.bytes )
This limit is tricky to estimate, because the compressed size of a batch is not known up front.
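A rough back-of-the-envelope check is still possible if you assume a compression ratio. All numbers below are examples, and the 20% compression ratio in particular is an assumption:

```python
# Back-of-the-envelope check of the sizing rule above.
# All values are examples; the compression ratio is an assumption.
rdkafka_message_max_bytes = 100000    # client-side per-message limit (example)
batch_size = 10000                    # messages per batch (example)
assumed_compression_ratio = 0.2       # assumption: batch compresses to 20% of raw size
broker_message_max_bytes = 1048588    # broker-side limit (Kafka's default)

worst_case_raw_batch = rdkafka_message_max_bytes * batch_size
estimated_compressed_batch = int(worst_case_raw_batch * assumed_compression_ratio)

if estimated_compressed_batch > broker_message_max_bytes:
    print("Risk of 'Broker: Message size too large'")
else:
    print("Batch should fit within the broker limit")
```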
When testing, setting "Message Maximum Size" in the Advanced tab to extremely large values (the maximum is 1000000000) never triggers the "size too large" error. However, if you make the rdkafka parameters message.max.bytes and batch.size large enough, at some point you will get the error "Kafka: Message delivery failed: Broker: Message size too large".
Therefore, if you are adjusting or testing the message size, we recommend changing "Message Maximum Size" in the Advanced tab to avoid this error.
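If you do hit the broker-side rejection, a librdkafka-based client reports it through its delivery callback. The sketch below (confluent-kafka Python wrapper, not Replicate) shows how that error can be distinguished from other delivery failures; the broker address and topic name are placeholders.

```python
# Sketch: a delivery callback that distinguishes the broker's
# "message size too large" rejection from other delivery failures.
# Illustration only; broker address and topic name are placeholders.
from confluent_kafka import Producer, KafkaError

def on_delivery(err, msg):
    if err is None:
        print(f"Delivered to {msg.topic()} [{msg.partition()}]")
    elif err.code() == KafkaError.MSG_SIZE_TOO_LARGE:
        print("Rejected: Broker: Message size too large")
    else:
        print(f"Delivery failed: {err}")

producer = Producer({"bootstrap.servers": "localhost:9092"})  # assumed broker
producer.produce("example-topic", value=b"example payload", callback=on_delivery)
producer.flush()
```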
Note: Replicate always sends one message per individual change event (one per INSERT, one per UPDATE, one per DELETE) to the corresponding Kafka topic.