How to: Getting Started with the Qlik Cloud Data Integration connector in Qlik Application Automation

Last Update: May 16, 2024 9:20:46 AM
Updated By: Sonja_Bauernfeind
Created date: May 16, 2024 9:19:52 AM

This article guides you through working with the Qlik Cloud Data Integration connector in Automations.

Authentication

The connector is always connected for every user; no separate connection needs to be configured. Each user has the same access to data projects as in the Qlik UI.

Working with the connector

The connector has the following blocks available:

  • List Projects
  • List Data Tasks
  • Get Project
  • Get Data Task
  • Get Data Task Runtime State
  • Start Data Task
  • Stop Data Task
  • Start Data Tasks and Wait for Completion

The purpose of this connector is to let Qlik Cloud Data Integration users create automations that orchestrate the tasks inside a data project. This gives users more control over when particular tasks run, and lets them insert other steps, such as integration with third-party systems, based on conditions evaluated during the automation.
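Conceptually, these blocks map onto start-and-poll calls against the data project. The sketch below shows that pattern in Python as a rough illustration only: the tenant URL, endpoint paths, response fields, and the QLIK_API_KEY environment variable are assumptions made for the sketch, not the documented Qlik Cloud API, and inside an automation the connector blocks handle all of this for you.

import os
import time

import requests

# Illustrative sketch only: the tenant URL, endpoint paths, and response
# fields below are assumptions, not the documented Qlik Cloud API.
TENANT = "https://your-tenant.us.qlikcloud.com"  # hypothetical tenant
HEADERS = {"Authorization": f"Bearer {os.environ['QLIK_API_KEY']}"}

def list_data_tasks(project_id):
    # Conceptual equivalent of the List Data Tasks block.
    resp = requests.get(
        f"{TENANT}/api/v1/di-projects/{project_id}/data-tasks",
        headers=HEADERS,
    )
    resp.raise_for_status()
    return resp.json()

def start_task_and_wait(project_id, task_id, poll_seconds=15):
    # Conceptual equivalent of the Start Data Tasks and Wait for Completion
    # block: start the task, then poll its runtime state until it finishes.
    resp = requests.post(
        f"{TENANT}/api/v1/di-projects/{project_id}/data-tasks/{task_id}/start",
        headers=HEADERS,
    )
    resp.raise_for_status()
    while True:
        state = requests.get(
            f"{TENANT}/api/v1/di-projects/{project_id}/data-tasks/{task_id}/state",
            headers=HEADERS,
        ).json()
        if state.get("status") in ("COMPLETED", "FAILED"):
            return state
        time.sleep(poll_seconds)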

Scenario

Consider a pipeline in Qlik Cloud Data Integration that uses a table of orders and a table of customers.

The register tasks and the storage tasks can each run separately, but the transform task that combines the customer and sales data into a single table can only run once both storage tasks are complete.

An additional requirement is that if one of the storage tasks fails, a ticket must be logged in ServiceNow.

See below a sketch of the Qlik Cloud Data Integration pipeline:

[Image: pipeline sketch.png]
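In plain control-flow terms, the scenario boils down to the sketch below. This is a rough Python outline of the orchestration logic, not part of the automation itself; the function names are hypothetical stand-ins for the blocks configured in the next section.

from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the automation blocks configured below.
def run_storage_task(name):
    print(f"running storage task for {name}")
    return True  # placeholder: report whether the task succeeded

def run_transform_task():
    print("running transform task")

def run_data_mart_task():
    print("running data mart task")

def create_servicenow_incident(description):
    print(f"creating ServiceNow incident: {description}")

def run_pipeline():
    # Run both storage tasks in parallel and wait for both to finish.
    with ThreadPoolExecutor(max_workers=2) as pool:
        results = list(pool.map(run_storage_task, ["orders", "customers"]))

    if not all(results):
        # If either storage task failed, log a ticket instead of continuing.
        create_servicenow_incident("Storage task failed in the QCDI pipeline")
        return

    # Both storage tasks succeeded: run the downstream tasks in order.
    run_transform_task()
    run_data_mart_task()

run_pipeline()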

The Automation

An automation to run this pipeline can be built in the following way:

  1. Start from a new automation.
  2. Open the Qlik Cloud Data Integration connector in the block library and drag the List Data Tasks block onto the canvas.
  3. Configure the List Data Tasks block by clicking the Project ID input and using the lookup to find the correct data project.

    [Image: ProjectId input.png]

  4. Perform a test run by hovering over the List Data Tasks block and clicking the green arrow. This populates the example output.

    [Image: test run.png]

  5. In the block editor, navigate to the List blocks and drag the Filter List block below the List Data Tasks block. The List input is automatically configured to use the output of the List Data Tasks block.
  6. In the condition, select the property "type" and set the comparison function to "equals". Set the value to "STORAGE" to obtain all the storage tasks. (A plain-code sketch of this filter, together with the explode step from step 8, appears after this list.)

    [Image: set condition to storage.png]

  7. Drag the Start Data Tasks and Wait for Completion block below the Filter List block. Configure the same Project ID as before; in the Data Task IDs input, select "Output from Filter List" and then choose the key "type". Select the radio button for Select all type(s) from list filterList.
  8. Click the green box containing Filter list(*) > Type and click Add formula. Choose the explode formula and provide a single comma as the delimiter. The Start Data Tasks and Wait for Completion block should now look like the following:

    [Image: add data task formula.png]

  9. From the block library, go to the Basic blocks and drag a Condition block onto the canvas. The Start Data Tasks and Wait for Completion block has a property called success that contains a boolean value. Configure the condition to check whether this value is false.

    [Image: set condition to false.png]

  10. From the block library, go to the ServiceNow connector and drag the Create Incident block inside the "Yes" branch of the condition. Here, we can provide the data for the ServiceNow ticket to be created; alternatively, any of the other available connectors can be used.
  11. In the "No" condition, we can drag in the subsequent parts of the data pipeline. Drag another Start Data Tasks and Wait for completion block in the no condition and configure this to run the transform task. Repeat the process for the Data Mart task.
  12. The final automation should look like the following:

    [Image: full automation example.png]
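For reference, the Filter List and explode steps from steps 6 and 8 amount to the data shaping below. This is a rough Python sketch; the task list is an assumed, simplified shape (using an id key for clarity), not the exact output of the List Data Tasks block.

# Assumed, simplified shape of the List Data Tasks output.
tasks = [
    {"id": "orders-storage",    "type": "STORAGE"},
    {"id": "customers-storage", "type": "STORAGE"},
    {"id": "transform",         "type": "TRANSFORM"},
    {"id": "data-mart",         "type": "DATA_MART"},
]

# Step 6: the Filter List block keeps only items whose type equals "STORAGE".
storage_tasks = [t for t in tasks if t["type"] == "STORAGE"]

# Step 8: the selected key is flattened to a comma-separated string, and the
# explode formula with a comma delimiter splits it back into a list for the
# Start Data Tasks and Wait for Completion block to act on.
joined = ",".join(t["id"] for t in storage_tasks)
data_task_ids = joined.split(",")
print(data_task_ids)  # ['orders-storage', 'customers-storage']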

Running this automation produces the desired result: the two storage tasks run in parallel, and the automation only continues when both are completed. If either one fails, a ticket is logged to the external system. When there are no issues, the subsequent tasks run as well and the pipeline completes successfully!
