
Using Kafka with Qlik Replicate



Last Update: Nov 16, 2023 12:52:39 PM

Created date: Nov 16, 2023 10:52:14 AM

This Techspert Talks session covers:

  • How Replicate works with Kafka
  • Kafka Terminology
  • Configuration best practices


  • 01:06 - Kafka Architecture
  • 02:36 - Setting Up Kafka Endpoint
  • 07:49 - Configuring Kafka Task
  • 10:24 - Viewing Data in Kafka
  • 12:02 - Data Update Performance
  • 12:56 - Understanding Kafka Headers
  • 13:23 - librdkafka
  • 14:12 - Performance Tuning Parameters
  • 15:12 - Acks Parameter
  • 16:16 - Task Failure Demo
  • 17:23 - Q&A: Is data sent via JSON messages?
  • 18:00 - Q&A: Recommended schema registry management?
  • 18:39 - Q&A: What is best LOB size?
  • 19:27 - Q&A: How to generate SSL cert for Azure?
  • 20:07 - Q&A: Can Kafka be a Replicate source?
  • 20:21 - Q&A: How to troubleshoot Acks timeout error?
  • 20:56 - Q&A: Which is best: Gzip or Snappy?
  • 21:37 - Q&A: Does upgrading affect Kafka tasks?
  • 22:10 - Q&A: Why use Kafka as a target?
  • 22:44 - Q&A: Troubleshooting SSL certificate setup help?





Q: Apart from the default 9092, what other ports does Kafka use? What are ports 9094 and 9096 used for?

A: 9092 is the default port, but the exact ports must come from your Kafka administrator. Kafka ports are generally configured sequentially, e.g., 9092, 9093, and so on.

For example, ports like 9094 and 9096 might be designated for specific types of communication within the Kafka cluster or for external communication, providing a way to separate and manage different types of traffic.

To determine the exact purpose of ports 9094 and 9096 in your Kafka deployment, you would need to refer to the Kafka configuration settings and consult with your Kafka administrator or the documentation specific to your Kafka version and setup.
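In a broker's configuration, the port-to-protocol mapping comes from the `listeners` setting. Below is a minimal Python sketch that parses a hypothetical `listeners` value; the assignment of SSL and SASL_SSL to 9094 and 9096 is an illustrative assumption, not a standard, so check your own cluster's configuration.

```python
def parse_listeners(listeners: str) -> dict:
    """Map each Kafka listener protocol to its port.

    `listeners` follows the broker config format, e.g.
    "PLAINTEXT://:9092,SSL://:9094,SASL_SSL://:9096".
    """
    ports = {}
    for entry in listeners.split(","):
        protocol, address = entry.split("://", 1)
        ports[protocol.strip()] = int(address.rsplit(":", 1)[1])
    return ports

# Hypothetical broker configuration: here 9094 and 9096 carry SSL and
# SASL_SSL traffic, but your deployment may assign them differently.
config = "PLAINTEXT://:9092,SSL://:9094,SASL_SSL://:9096"
print(parse_listeners(config))
# {'PLAINTEXT': 9092, 'SSL': 9094, 'SASL_SSL': 9096}
```

A helper like this is only for inspecting configuration you already have; the authoritative answer always comes from the broker's `server.properties` or your Kafka administrator.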


Q: If an error occurs with Kafka as a target, what measures can we take on the Kafka side to resolve it?

A: The appropriate measures depend on the nature of the error. It might be due to incorrect credentials, network bandwidth issues, or problems related to response time. If you are uncertain about the error, we advise creating a support case for further assistance.


Q: What is the process for ingesting data into two distinct topics using a global rule?

A: It is not possible to write identical data to two different topics using the same task. However, you can generate dynamic topic names by using $topic in a Global transformation. To split the records of one table across two topics, set a calculated expression on $topic.
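The routing logic of such a $topic expression can be sketched in Python. The column name `region` and both topic names below are illustrative assumptions; in Replicate you would express the equivalent logic in the SQLite-style calculated expression assigned to $topic.

```python
def route_topic(record: dict) -> str:
    """Mimic a $topic calculated expression that splits one table's
    records across two topics based on a column value.

    Hypothetical equivalent Replicate expression:
    CASE WHEN $region = 'EU' THEN 'orders_eu' ELSE 'orders_rest' END
    """
    return "orders_eu" if record.get("region") == "EU" else "orders_rest"

print(route_topic({"id": 1, "region": "EU"}))    # orders_eu
print(route_topic({"id": 2, "region": "APAC"}))  # orders_rest
```

Each record still goes to exactly one topic; the expression only decides which one.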


Q: Could you elaborate on recommendations for optimizing Kafka for optimal performance, including considerations for broker counts, partition counts, replicate counts, etc.?

A: This inquiry pertains more to Kafka-level tuning than to Replicate-level adjustments. Your team of Kafka experts can provide guidance tailored to your environment.


Q: What is the process for conducting throughput testing for validation?

A: Throughput testing is tailored to each environment. Common throughput testing involves identifying complex data flow systems, loading data using Qlik Replicate, and tuning librdkafka parameters if necessary, with assistance from our professional team. While librdkafka offers a throughput testing utility for external testing, it is strongly recommended to engage our professional team for such assessments.
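For a rough in-house measurement, a timing harness around any produce function will do. The sketch below is an assumption-laden illustration: `produce` stands in for your real producer's send call, and a no-op stub is substituted so the harness runs without a broker, so the printed rate does not reflect real Kafka throughput.

```python
import time

def measure_throughput(produce, num_messages: int, payload: bytes) -> float:
    """Return messages/second achieved by calling `produce` repeatedly.

    `produce` is any callable taking one payload argument; with a real
    producer client you would pass its send function instead of the
    no-op stub used below.
    """
    start = time.perf_counter()
    for _ in range(num_messages):
        produce(payload)
    elapsed = time.perf_counter() - start
    return num_messages / elapsed if elapsed > 0 else float("inf")

sent = []  # stub sink standing in for a broker
rate = measure_throughput(sent.append, 10_000, b"x" * 512)
print(f"{rate:.0f} msgs/sec (no-op stub)")
```

Against a real cluster, run long enough to amortize connection setup, and vary payload size and compression to match your workload.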


Q: Does transactional apply introduce latency due to processing messages one by one, which is the only option for Kafka?

A: Transactional apply does not process messages strictly one by one. With transactional apply, Replicate maintains a message tracker array with a default size of 100,000 messages; messages are produced asynchronously and tracked through this array, mitigating concerns about overall latency. This approach is thoroughly addressed at the architecture level.
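The idea behind such a tracker can be sketched as a sliding window of in-flight messages: sends happen asynchronously, and the stream position only advances past a message once it and all earlier messages are acknowledged. This is a simplified illustration of the concept, not Replicate's actual implementation, and the capacity default is taken from the answer above.

```python
from collections import deque

class MessageTracker:
    """Sliding window of asynchronously produced, unacknowledged messages."""

    def __init__(self, capacity: int = 100_000):
        self.capacity = capacity
        self.window = deque()   # [seq, acked] entries in send order
        self.committed = 0      # highest contiguously acknowledged seq

    def send(self, seq: int):
        """Record an async produce; many can be in flight at once."""
        if len(self.window) >= self.capacity:
            raise RuntimeError("window full; wait for acknowledgements")
        self.window.append([seq, False])

    def ack(self, seq: int):
        """Mark a delivery report, then advance the commit point across
        the contiguous acknowledged prefix."""
        for entry in self.window:
            if entry[0] == seq:
                entry[1] = True
        while self.window and self.window[0][1]:
            self.committed = self.window.popleft()[0]

tracker = MessageTracker(capacity=3)
for s in (1, 2, 3):
    tracker.send(s)          # three messages in flight concurrently
tracker.ack(2)               # out-of-order ack: commit point stays at 0
tracker.ack(1)               # 1 and 2 now contiguous
print(tracker.committed)     # 2
```

Because many messages are in flight at once, per-message round-trip latency is overlapped rather than paid serially.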


Q: Are Kafka and Confluent Kafka identical?

A: Kafka is an open-source platform that you host on your own on-premises servers, without official support. Confluent Kafka, on the other hand, offers two options: deploying it in your data centers or using the SaaS version. In both scenarios, you receive support from the Confluent support team.


Q: Could you discuss the factors contributing to the lack of support for batch mode in Kafka endpoints?

A: Batch mode doesn't guarantee the order of transactions, and it is crucial to maintain transaction order in a messaging system. Using batch mode could lead to mis-ordered messages, potentially causing downstream consumers to observe incorrect data.
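A tiny sketch makes the ordering hazard concrete. Assume change events are (operation, key, value) tuples applied to a key-value store; if a batch delivery reorders an INSERT and its later UPDATE, consumers end up with the stale value. The event names and values here are purely illustrative.

```python
def apply(changes, state=None):
    """Apply a sequence of (op, key, value) change events to a dict."""
    state = dict(state or {})
    for op, key, value in changes:
        if op == "DELETE":
            state.pop(key, None)
        else:  # INSERT / UPDATE
            state[key] = value
    return state

# Source commit order: insert, then update.
ordered = [("INSERT", 1, "draft"), ("UPDATE", 1, "final")]
# Hypothetical batched delivery that reorders the two events.
reordered = [("UPDATE", 1, "final"), ("INSERT", 1, "draft")]

print(apply(ordered))    # {1: 'final'}  - what consumers should see
print(apply(reordered))  # {1: 'draft'}  - the stale value wins
```

This is why the Kafka endpoint keeps transaction order instead of using batch mode.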



Good Session 


It is noticed that the before-image data in the messages does not match the actual before-image values. It is also noticed that all records from the PostgreSQL source are flowing into Kafka as Delete and Insert operations.
