
How to: Getting Started with the Qlik Talend Data Integration connector in Qlik Automate

MarkGeurtsen


Last Update:

May 23, 2025 9:23:12 AM

Updated By:

Sonja_Bauernfeind

Created date:

May 16, 2024 9:19:52 AM


This article is intended to guide users on how to work with the Qlik Talend Data Integration connector in Qlik Automate.

Authentication

The connector does not require separate authentication; it is automatically connected for every user. Users have the same access to data projects as they do in the Qlik UI.

Working with the connector

The connector has the following blocks available:

  • List Projects
  • List Data Tasks
  • Get Project
  • Get Data Task
  • Get Data Task Runtime State
  • Start Data Task
  • Stop Data Task
  • Start Tasks and wait for completion

The purpose of this connector is to allow Qlik Talend Data Integration users to create automations that orchestrate the tasks inside a data project. This gives users more control over when particular tasks are run, and lets them insert other steps, such as integrations with third-party systems, based on particular conditions during the automation.
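Conceptually, the Start Tasks and wait for completion block behaves like a start-then-poll loop over the runtime state of each task. Below is a minimal Python sketch of that pattern; the helpers `start_data_task()` and `get_data_task_runtime_state()` are hypothetical stand-ins for the corresponding connector blocks, and the state names are assumptions:

```python
import time

def start_and_wait(task_ids, start_data_task, get_data_task_runtime_state,
                   poll_seconds=5):
    """Start every task, then poll until none is still running.

    Returns True only if every task finished successfully. The two helpers
    passed in are hypothetical stand-ins for the connector blocks.
    """
    for task_id in task_ids:
        start_data_task(task_id)

    pending = set(task_ids)
    success = True
    while pending:
        for task_id in list(pending):
            # Assumed state values: "RUNNING", "COMPLETED", "FAILED".
            state = get_data_task_runtime_state(task_id)
            if state == "RUNNING":
                continue
            pending.discard(task_id)
            if state != "COMPLETED":
                success = False
        if pending:
            time.sleep(poll_seconds)
    return success
```

In a real automation this logic runs inside the block itself; the sketch only illustrates why the block can start several tasks at once and expose a single success flag for the whole batch.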

Scenario

There's a pipeline in Qlik Talend Data Integration making use of a table of orders and a table of customers.

We can run the registered data tasks independently, and the same goes for the storage tasks. However, the transform task that combines the customer and sales data into a single table can only run once both storage tasks are complete.

Additionally, this pipeline requires that a ticket be logged in ServiceNow if one of the storage tasks fails.

See below a sketch of the Qlik Talend Data Integration pipeline:

pipeline sketch.png

The Automation

An automation to run this pipeline can be built in the following way:

  1. Start from a new automation
  2. Open the Qlik Talend Data Integration connector in the block library and drag the List Data Tasks block onto the canvas.
  3. Configure the List Data Tasks block by clicking the Project ID input and using the lookup to find the correct data project.

    list data tasks and project ID.png

     

  4. Perform a test run by hovering over the List Data Tasks block and clicking the green arrow. This populates the example output.

    click TEST RUN.png

  5. In the block library, navigate to the List blocks and drag the Filter List block below the List Data Tasks block. The List input is automatically configured to the output of the List Data Tasks block.
  6. In the condition, select the property "type" and set the comparison function to "equals". Set the value to "STORAGE" to obtain all the storage tasks.

    type equals storage.png
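In list terms, the Filter List block keeps only the items whose "type" property equals "STORAGE". A Python equivalent of this step, where the example task objects and their field names ("id", "name", "type") are assumptions about the List Data Tasks output:

```python
# Hypothetical example of what List Data Tasks might return.
data_tasks = [
    {"id": "task-1", "name": "Register orders", "type": "REGISTERED"},
    {"id": "task-2", "name": "Store orders", "type": "STORAGE"},
    {"id": "task-3", "name": "Store customers", "type": "STORAGE"},
    {"id": "task-4", "name": "Transform sales", "type": "TRANSFORM"},
]

# The Filter List block with condition: type equals "STORAGE".
storage_tasks = [task for task in data_tasks if task["type"] == "STORAGE"]
```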

  7. Drag the Start Data Tasks and wait for completion block below the Filter List block. Configure the same project ID as before. In the Data Task IDs input, select "Output from Filter List" and then choose the key "type". Select the radio button for "Select all type(s) from list filterList".
  8. Click the green box containing "Filter list(*) > Type" and click Add formula. Choose the Explode formula and provide a single comma as the delimiter. The Start Data Tasks and wait for completion block should now look like the following:

    add data task formula.png
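The Explode formula turns a delimiter-separated string into a list. With a comma as the delimiter, it is equivalent to a string split; the input value below is hypothetical:

```python
# Hypothetical comma-joined value coming out of the Filter List output.
joined_ids = "task-2,task-3"

# The Explode formula with "," as the delimiter is equivalent to:
task_ids = joined_ids.split(",")
```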

  9. From the block library, go to the Basic blocks and drag a Condition block onto the canvas. The Start Data Tasks and wait for completion block has a property called "success" that contains a boolean value. Configure the condition to check whether it is set to false.

    start data tasks and wait for completion.png

  10. From the block library, go to the ServiceNow connector and drag the Create Incident block inside the "Yes" condition. Here we can provide the data for the ServiceNow incident to be created; any of the other available connectors could be used instead.
  11. In the "No" condition, we can add the subsequent parts of the data pipeline. Drag another Start Data Tasks and wait for completion block into the "No" condition and configure it to run the transform task. Repeat the process for the data mart task.
  12. The final automation should look like the following:

    final automation.png

Upon running this automation, we get the desired result: the two storage tasks run in parallel, and the automation continues only when both are complete. If either one fails, a ticket is logged in the external system. When there are no issues, the subsequent tasks run as well and the pipeline completes successfully.
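The control flow of the finished automation can be summarized in a short sketch. The helper names `run_tasks_and_wait()` and `create_servicenow_incident()` are hypothetical stand-ins for the connector blocks, and the "TRANSFORM" and "DATAMART" type labels are assumptions about the remaining task types:

```python
def run_pipeline(data_tasks, run_tasks_and_wait, create_servicenow_incident):
    """Mirror the automation: storage tasks first, a ticket on failure,
    then the transform and data mart tasks."""
    # Filter List: keep only the storage tasks.
    storage_ids = [t["id"] for t in data_tasks if t["type"] == "STORAGE"]

    # Start Data Tasks and wait for completion (storage tasks run in parallel).
    if not run_tasks_and_wait(storage_ids):
        # "Yes" branch of the condition: a storage task failed.
        create_servicenow_incident("Storage task failed in data pipeline")
        return "incident created"

    # "No" branch: run the rest of the pipeline in order.
    transform_ids = [t["id"] for t in data_tasks if t["type"] == "TRANSFORM"]
    datamart_ids = [t["id"] for t in data_tasks if t["type"] == "DATAMART"]
    if run_tasks_and_wait(transform_ids) and run_tasks_and_wait(datamart_ids):
        return "completed"
    return "failed"
```

In the automation itself each branch is a chain of blocks rather than code, but the ordering and the failure handling are the same.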
