
How to: Getting started with the Apache Kafka connector using Rest Proxy Server in Qlik Application Automation


Last Update: Nov 9, 2022 8:14:01 AM

Created date: Nov 8, 2022 7:26:07 PM

This article provides an overview of the available blocks in the Apache Kafka connector in Qlik Application Automation.

Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.



Authentication to Apache Kafka happens through Basic Authentication. To use this connector, provide the username and password for your REST Proxy server. When you connect to Apache Kafka in Qlik Application Automation, you are presented with a connection screen requesting these credentials.
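Under the hood, Basic Authentication is simply an `Authorization` header carrying a base64-encoded `username:password` pair. A minimal Python sketch of how such a header is built (the credentials shown are placeholders, not real values):

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    """Build the HTTP Basic Authentication header that a REST Proxy
    secured with basic auth expects on every request."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

# Example with placeholder credentials:
headers = basic_auth_header("user", "pass")
```

Qlik Application Automation builds this header for you from the connection details; the sketch only shows what is sent on the wire.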



Available blocks

The blocks available in the Apache Kafka connector focus on producing events to topics in the Apache Kafka cluster.

(Screenshot: list of available blocks in the Apache Kafka connector)

  • Get Broker
  • List Brokers
  • Delete Broker
  • Get Cluster
  • List Clusters
  • Create Topic
  • Get Topic
  • List Topics
  • Delete Topic
  • Produce Event
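For reference, these blocks map onto Confluent REST Proxy v3 endpoints. A minimal Python sketch of the URL shapes involved (the host name is a placeholder, and the exact paths should be verified against the REST Proxy version you run):

```python
def clusters_url(base: str) -> str:
    """List Clusters / Get Cluster: GET /v3/clusters[/{cluster_id}]."""
    return f"{base}/v3/clusters"

def brokers_url(base: str, cluster_id: str) -> str:
    """List Brokers / Get Broker: GET /v3/clusters/{cluster_id}/brokers."""
    return f"{clusters_url(base)}/{cluster_id}/brokers"

def topics_url(base: str, cluster_id: str) -> str:
    """List/Create/Get/Delete Topic: /v3/clusters/{cluster_id}/topics."""
    return f"{clusters_url(base)}/{cluster_id}/topics"

# Placeholder host for illustration only:
base = "https://restproxy.example.com"
```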

Working with the Apache Kafka blocks

Apache Kafka blocks make use of the Confluent REST Proxy API to produce and consume messages from an Apache Kafka cluster.

  1. Follow this step-by-step guide to set up your Confluent REST Proxy API. Make sure you have an active cluster before proceeding.
  2. Use the Get Cluster and List Clusters blocks to retrieve basic information about your cluster.
  3. Use the Create Topic block to create the topic to which you will produce messages and from which you will consume them.
  4. Use the Produce Event block to produce messages or data to the topic.
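The produce step above boils down to POSTing a JSON record to the topic's records endpoint. A sketch in Python of building that request body (the payload is a placeholder; in the REST Proxy v3 produce format the value is wrapped in a type descriptor):

```python
import json

def produce_record_body(data: dict) -> str:
    """JSON body for POST /v3/clusters/{cluster_id}/topics/{topic}/records,
    producing a single JSON-typed value to the topic."""
    return json.dumps({"value": {"type": "JSON", "data": data}})

# Example with placeholder data:
body = produce_record_body({"order_id": 42, "status": "shipped"})
```

The Produce Event block assembles and sends an equivalent request for you; the sketch only illustrates the wire format.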

Apache Kafka Use Case: Enabling Users To Perform Reverse ETL

Some companies already use Apache Kafka to solve application integration challenges. Applications push and pull data from specific topics to communicate with one another. This avoids point-to-point integration, where each application has to talk to every other application, and is similar to what a service bus does.


  • We can publish messages to a Kafka topic that can be picked up by a customer's operational systems.
  • It can be multiple systems at once, for example, their CRMs and ERP applications.
  • It can be a single application, for example an on-premises system for which no connector is available.
  • Operational systems would be consumers on topics we publish to.



The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support for the solution below may not be provided by Qlik Support.
