Qlik Automate is a no-code automation and integration platform that lets you visually create automated workflows. It allows you to connect Qlik capabilities with other systems without writing code. Powered by Qlik Talend Cloud APIs, Qlik Automate enables users to create powerful automation workflows for their data pipelines.
Learn more about Qlik Automate.
In this article, you will learn how to set up Qlik Automate to deploy a Qlik Talend Cloud pipeline project across spaces or tenants.
To ease your implementation, there is a template on Qlik Automate that you can customize to fit your needs.
You will find it in the template picker: navigate to Add new → New automation → Search templates, search for 'Deploying a Data Integration pipeline project from development to production' in the search bar, and click Use template.
ℹ️ This template will be generally available on October 1, 2025.
In this deployment use case, the development team made changes to an existing Qlik Talend Cloud (QTC) pipeline.
As the deployment owner, you will redeploy the updated pipeline project from a development space to a production space where an existing pipeline is already running.
To reproduce this workflow, you'll first need to create:
Using separate spaces and databases ensures a clear separation of concerns and responsibilities in an organization and reduces the risk to production pipelines while the development team is working on feature changes.
Workflow steps:
ℹ️ Note: This is a re-deployment workflow. For initial deployments, create a new project prior to proceeding with the import.
Use the 'Export Project' block to call the corresponding API, using the ProjectID.
This will download your DEV project as a ZIP file. In Qlik Automate, you can use various cloud storage options, e.g. OneDrive. Configure the OneDrive 'Copy File on Microsoft OneDrive' block to store it at the desired location.
To avoid duplicate file names (which may cause the automation to fail) and to easily differentiate your project exports, use the 'Variable' block to define a unique prefix (such as dateTime).
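For reference, the sketch below shows roughly what the 'Export Project' block does behind the scenes: it calls the project export API and saves the resulting ZIP with a timestamp prefix. The tenant URL, API key, endpoint path, and HTTP method are assumptions for illustration only; check the Qlik Talend Cloud API reference for the exact call.

# Minimal sketch (not the Automate block itself): export a DEV project as a ZIP.
# TENANT, API_KEY, PROJECT_ID and the endpoint path/method are assumptions.
import datetime
import requests

TENANT = "https://your-tenant.us.qlikcloud.com"    # hypothetical tenant URL
API_KEY = "<api-key>"                               # hypothetical API key
PROJECT_ID = "<dev-project-id>"

resp = requests.post(
    f"{TENANT}/api/v1/di-projects/{PROJECT_ID}/export",   # assumed endpoint path
    headers={"Authorization": f"Bearer {API_KEY}"},
)
resp.raise_for_status()

# Prefix the file name with a timestamp so repeated exports never collide.
prefix = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
with open(f"{prefix}-dev-project-export.zip", "wb") as f:
    f.write(resp.content)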
From the 'Qlik Talend Data Integration' connector, use the 'Get Project Binding' block to call the API endpoint.
The 'bindings' are project variables that are tied to the project and can be customized for reuse in another project. Once you test-run it, store the text response for later use from the 'History' tab in the block configuration pane on the right side of the automation canvas:
We will now use the 'bindings' from the previous step as a template to adjust the values for your PROD pipeline project, before proceeding with the import.
From the automation, use the 'Update Project Bindings' block. Copy the response from the 'Get Project Binding' block into the text editor and update the DEV values with the appropriate PROD variables (such as the source and target databases). Then, paste the updated text into the Variables input parameter of the 'Update Project Bindings' block.
ℹ️ Note: these project variables are not applied dynamically when you run the 'Update Project Bindings' block in Qlik Automate. They are appended and only take effect when you import the project.
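To illustrate what the two binding blocks do, here is a rough sketch that fetches the bindings, swaps hypothetical DEV values for PROD ones, and writes them back. The endpoint paths, the response shape, and the binding names (SOURCE_DB, TARGET_DB) are assumptions; your project's bindings will differ.

# Rough sketch of the Get/Update Project Bindings calls; paths and field names are assumed.
import requests

TENANT = "https://your-tenant.us.qlikcloud.com"    # hypothetical
HEADERS = {"Authorization": "Bearer <api-key>"}
PROJECT_ID = "<prod-project-id>"

# 1. Read the current bindings (project variables) to use as a template.
bindings = requests.get(
    f"{TENANT}/api/v1/di-projects/{PROJECT_ID}/bindings",   # assumed path
    headers=HEADERS,
).json()

# 2. Replace DEV values with PROD values. SOURCE_DB / TARGET_DB are hypothetical names.
overrides = {"SOURCE_DB": "prod_source_db", "TARGET_DB": "prod_target_db"}
for binding in bindings.get("bindings", []):                 # assumed response shape
    if binding["name"] in overrides:
        binding["value"] = overrides[binding["name"]]

# 3. Write the adjusted bindings back; they only take effect on the next import.
requests.put(
    f"{TENANT}/api/v1/di-projects/{PROJECT_ID}/bindings",    # assumed path
    headers=HEADERS,
    json=bindings,
).raise_for_status()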
For a Change Data Capture (CDC) project, you must stop the project before proceeding with the import.
Use the 'Stop Data Task' block from the 'Qlik Talend Data Integration' connector. You will find the connectors in the Block Library pane on the left side of the automation canvas.
Fill in the ProjectID and TaskID:
ℹ️ We recommend using a logic with variables to handle task stopping in the automation. Please refer to the template configuration and customize it to your needs.
You’re now ready to import the DEV project contents into the existing PROD project.
⚠️ Warning: Importing the new project will overwrite any existing content in the PROD project.
Using the OneDrive and 'Import Project' blocks, we will import the previously saved ZIP file.
ℹ️ In this template, the project ID is handled dynamically using the variable block. Review and customize this built-in logic to match your environment and requirements.
After this step is completed, your project is now deployed to production.
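Taken together, the stop and import steps above map roughly to two API calls. The sketch below is illustrative only: the endpoint paths, HTTP methods, and the exported file name are assumptions, and in the template these calls are made by the 'Stop Data Task', OneDrive, and 'Import Project' blocks.

# Illustrative stop + import sequence; endpoint paths, methods, and IDs are assumptions.
import requests

TENANT = "https://your-tenant.us.qlikcloud.com"    # hypothetical
HEADERS = {"Authorization": "Bearer <api-key>"}
PROJECT_ID = "<prod-project-id>"
CDC_TASK_ID = "<cdc-task-id>"

# 1. Stop the running CDC task before overwriting the project.
requests.post(
    f"{TENANT}/api/v1/di-projects/{PROJECT_ID}/di-tasks/{CDC_TASK_ID}/stop",  # assumed path
    headers=HEADERS,
).raise_for_status()

# 2. Import the previously exported DEV ZIP into the existing PROD project (overwrites it).
with open("20250101-120000-dev-project-export.zip", "rb") as f:    # hypothetical file name
    requests.post(
        f"{TENANT}/api/v1/di-projects/{PROJECT_ID}/import",        # assumed path
        headers=HEADERS,
        files={"file": ("project.zip", f, "application/zip")},
    ).raise_for_status()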
It is necessary to prepare your project before restarting it in production. Preparing ensures it is ready to run by creating or recreating the required artifacts (such as tables).
The 'Prepare Project' block uses the ProjectID to prepare the project tasks by using the built-in project logic. You can also specify one or more specific tasks to prepare using the 'Data Task ID' field. In our example, we are reusing the previously set variable to prepare the same PROD project we just imported.
If your pipeline is damaged and you need to recreate artifacts from scratch, enable the 'Allow recreate' option. Caution: this may result in data loss.
Triggering a 'Prepare' results in a new 'actionID'. This ID is used to query the action status via the 'Get Action Status' API block in Qlik Automate. We use an API polling strategy to check the status at a preset frequency.
Depending on the number of tables, the preparation can take up to several minutes.
Once we get confirmation that the preparation action was 'COMPLETED', we can move on to restarting the project tasks.
If the preparation fails, you can define an adequate course of action, such as creating a ServiceNow ticket or sending a message on a Teams channel.
ℹ️ Tip: Review the template's conditional blocks configuration to handle different preparation statuses and customize the logic to fit your needs.
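For readers who want to see the shape of this polling logic outside of Automate, here is a minimal sketch. It assumes a prepare endpoint that returns an actionId and a status endpoint that returns COMPLETED or FAILED; the paths, field names, and status values are assumptions for illustration. In the template, the equivalent logic is built with the 'Prepare Project', 'Get Action Status', and condition blocks.

# Minimal polling sketch; endpoint paths, field names, and status values are assumptions.
import time
import requests

TENANT = "https://your-tenant.us.qlikcloud.com"    # hypothetical
HEADERS = {"Authorization": "Bearer <api-key>"}
PROJECT_ID = "<prod-project-id>"

# Trigger the prepare and capture the action ID it returns.
prepare = requests.post(
    f"{TENANT}/api/v1/di-projects/{PROJECT_ID}/prepare",    # assumed path
    headers=HEADERS,
).json()
action_id = prepare["actionId"]                              # assumed field name

# Poll the action status at a fixed interval until it finishes.
while True:
    status = requests.get(
        f"{TENANT}/api/v1/di-projects/{PROJECT_ID}/actions/{action_id}",   # assumed path
        headers=HEADERS,
    ).json()["status"]
    if status == "COMPLETED":
        break                       # safe to restart the project tasks
    if status in ("FAILED", "CANCELLED"):
        raise RuntimeError(f"Prepare ended with status {status}")   # e.g. open a ticket here
    time.sleep(30)                  # preparation can take several minutes for many tables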
Now that your project is successfully prepared, you can restart it in production.
In this workflow, we use the 'List Data Tasks' block to filter on the 'landing' and 'storage' tasks for the production project and restart these tasks automatically.
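As an illustration of that last step, the sketch below lists the project's data tasks, keeps the landing and storage ones, and starts them. The endpoint paths and the 'type' values are assumptions; in the template this is done with the 'List Data Tasks' and start blocks.

# Sketch of listing PROD data tasks and restarting the landing/storage tasks.
# Endpoint paths and the 'type' field values are assumptions for illustration.
import requests

TENANT = "https://your-tenant.us.qlikcloud.com"    # hypothetical
HEADERS = {"Authorization": "Bearer <api-key>"}
PROJECT_ID = "<prod-project-id>"

tasks = requests.get(
    f"{TENANT}/api/v1/di-projects/{PROJECT_ID}/di-tasks",    # assumed path
    headers=HEADERS,
).json().get("tasks", [])                                     # assumed response shape

for task in tasks:
    if task.get("type") in ("landing", "storage"):            # assumed type values
        requests.post(
            f"{TENANT}/api/v1/di-projects/{PROJECT_ID}/di-tasks/{task['id']}/start",  # assumed
            headers=HEADERS,
        ).raise_for_status()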
All done: your production pipeline has been updated, prepared, and restarted automatically!
Now it’s your turn: fetch the Qlik Automate template from the template library and start automating your pipeline deployments.
Start a Qlik Talend Cloud® trial
How to get started with the Qlik Talend Data Integration blocks in Qlik Automate
Stitch only offers account consolidation at the billing level.
If you have multiple Stitch accounts, you can reach out to the Sales team to help you consolidate your accounts so that you only get billed once for all accounts.
However, the accounts themselves cannot be merged, i.e., integrations and destinations set up in one account cannot be migrated to another. If you prefer to avoid managing multiple accounts, you will need to set up your workflows in your preferred account, and this will include historical loads for newly set up integrations.
Please note that Stitch offers a 7-day exemption for newly named integrations. This means that if you set up an integration with a schema name that was not previously used in the account, it will be regarded as a new integration. Schema names are verified against individual Stitch accounts only.
If you intend to manage multiple Stitch accounts and need to add the same team members in each account, please use the naming convention outlined here to add new members.
The Stitch Free Trial offers the full Premium functionality of Stitch for 14 days. This time is provided for users to set up and test their use case for data replication.
However, there are instances where 14 days is not enough for a thorough evaluation and the Qlik team is able to accommodate such scenarios.
If you require a trial extension, please contact Qlik Support for assistance. Your free trial can be extended by up to 7 days via the Support team. Please note this is a one-time courtesy for our users.
Once an account is extended, you may need to refresh the browser for the change to take effect. We hope that you are then able to select a plan that meets your needs.
Advanced Connectivity is available upon request for paying customers only.
NPrinting has a library of APIs that can be used to customize many native NPrinting functions outside the NPrinting Web Console.
Two of the more common capabilities available via the NPrinting APIs are as follows:
These and many other public NPrinting APIs can be found here: Qlik NPrinting API
In the data load editor of your Qlik Sense app, two REST connections are required. (These two REST connections must also be configured in the QlikView Desktop load script where the APIs are used. See NPrinting REST API Connection through QlikView Desktop.)
Requirements of REST user account:
Creating REST "GET" connections
Note: Replace QlikServer3.domain.local with the name and port of your NPrinting Server
NOTE: replace domain\administrator with the domain and user name of your NPrinting service user account
Creating REST "POST" connections
Note: Replace QlikServer3.domain.local with the name and port of your NPrinting Server
NOTE: replace domain\administrator with the domain and user name of your NPrinting service user account
In your POST REST connection only, enter the Name 'Origin' and, as its Value, the Qlik Sense (or QlikView) server address.
Replace https://qlikserver1.domain.local with your Qlik Sense (or QlikView) server address.
Ensure that the 'Origin' Qlik Sense or QlikView server is added as a 'Trusted Origin' on the NPrinting Server computer.
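Once the REST connections work, you can also exercise the same NPrinting endpoints from outside Qlik, for example to script a quick smoke test. The sketch below is a rough illustration only: it assumes the /api/v1/tasks endpoints and response shape described in the Qlik NPrinting API help, NTLM authentication with the service account, and the default API port 4993; verify all of these against your own environment.

# Rough NPrinting REST API sketch; endpoints, response shape, and port are assumptions.
import requests
from requests_ntlm import HttpNtlmAuth     # pip install requests_ntlm

NPRINTING = "https://QlikServer3.domain.local:4993"           # your NPrinting Server and port
auth = HttpNtlmAuth("domain\\administrator", "<password>")    # NPrinting service/API user

# List publish tasks (GET), then trigger the first one (POST).
# verify=False only if the server uses a self-signed certificate.
tasks = requests.get(f"{NPRINTING}/api/v1/tasks", auth=auth, verify=False).json()
task_id = tasks["data"]["items"][0]["id"]                     # assumed response shape
requests.post(f"{NPRINTING}/api/v1/tasks/{task_id}/executions", auth=auth, verify=False)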
NOTE: The information in this article is provided as-is and is to be used at your own discretion. Using the NPrinting APIs requires developer expertise and amounts to significant customization outside the turnkey NPrinting Web Console functionality. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
This article explains how the Qlik Sense app button component can be used to send custom parameters directly to the automation without requiring a temporary bookmark. This can be useful when creating a writeback solution on a large app, where creating and applying bookmarks can take longer and adds delays to the solution. More information on the native writeback solution can be found here: How to build a native write back solution.
Contents
If you want to limit this to a specific group of users, you can leave the automation in Manual run mode and place it in a shared space that this group of users can access. More information about this is available here: Introducing Automation Sharing and Collaboration. Make sure to disable the Run mode: triggered option in the button configuration.
Environment
The information in this article is provided as-is and will be used at your discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
This article provides an overview of how to send straight table data to Microsoft Teams as a table using Qlik Automate.
The template is available on the template picker. You can find it by navigating to Add new -> New automation -> Search templates, searching for 'Send straight table data to Microsoft Teams as a table' in the search bar, and clicking the Use template option.
You will find a version of this automation attached to this article: "Send-straight-table-data-to-Microsoft-Teams-as-a-table.json".
Content:
The following steps describe how to build the demo automation:
An example output of the table sent to the Teams channel:
The information in this article is provided as-is and will be used at your discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
This article details the process of removing Cloud Engine from your environment and deallocating the tokens consumed by Cloud Engine. Please follow these steps:
Note: Executing tasks on a shared (unassigned) Cloud Engine will consume engine tokens.
The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
When larger tables participate in the Full Load replication phase in Qlik Replicate, you can accelerate the replication of these tables by splitting them into logical segments and loading these segments in parallel. Tables can be segmented by data ranges, by partitions, or by sub-partitions.
This article provides guidance on finding adequate data segment boundaries based on the actual data stored in those tables on the source.
As a first step, it is essential to understand the size of the data tables.
Not every long table is big, nor is every short table small. A table's average uncompressed data size can be estimated with the formula below:
table_size_MB = (number of records in the table × avg. record length in bytes) / 1,048,576
The table size may directly affect the unload duration. The longer the unload, the bigger the impact it will have on the following:
It is a good practice to unload the source table in “digestible” chunks that can be loaded optimally in less than 30 minutes. The amount of data in each segment can vary between tens of MB and a few GB, and it will depend heavily on the network quality and bandwidth, the source database load at any given time, the source database processing capability, and other environmental aspects. The best way to gauge the overall Full Load performance is to test how long a Full Load of a medium-sized table takes in your environment and extrapolate from there.
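As a rough illustration of this sizing exercise (with made-up numbers), the sketch below estimates the table size and the number of segments needed so that each segment stays within a target load duration. The throughput figure is an assumption; replace it with the value you measured in your own environment.

# Back-of-the-envelope segment sizing with made-up numbers; measure your own
# environment's throughput before relying on any of these figures.
records = 500_000_000           # number of records in the table
avg_record_len = 250            # average record length in bytes
throughput_mb_per_min = 100     # measured Full Load throughput (assumption)
target_minutes = 30             # upper bound per segment suggested in this article

table_size_mb = records * avg_record_len / 1024 / 1024
segment_size_mb = throughput_mb_per_min * target_minutes
segments = max(1, round(table_size_mb / segment_size_mb))
records_per_segment = records // segments

print(f"~{table_size_mb:,.0f} MB total -> {segments} segments, "
      f"~{records_per_segment:,} records per segment")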
Each segment should contain a reasonable number of records. Depending on the table width, the “reasonable” number of records in each batch may vary. As a rule of thumb, we would want each partition/segment load duration to be in the ballpark of 15-30 minutes.
This is important for the following reasons:
At this point we assume that we already know the approx. number of logical “partitions” (or segments) we would want to use for parallel unload. Perform the following to calculate the partition size:
When identifying the data ranges for SAP tables, the MANDT field should appear in the subqueries' WHERE clause.
ORACLE:
SELECT T1.GEN1, T1.THIS_ROW
DB2i / DB2LUW / DB2z:
SELECT T1.GEN1, T1.THIS_ROW
SQL SERVER:
SELECT T1.THIS_ROW, T1.GEN1
Once the query results are returned, the tasks will need to be set up consistently in all environments. Note that in non-Prod environments the data segments will be identical to the PROD ones; if any testing needs to be done, the values should be adjusted specifically to the environment the task is running in.
Qlik Replicate - Table Parallel Load Settings
6. Click “Select Segment Columns”, pick the columns you are going to segment the table by, and click OK. By default, the PK columns are selected automatically.
Qlik Replicate - Table Parallel Load - Select Segmentation Columns
For SAP Application (DB) endpoints, the MANDT field is implicitly taken from the endpoint's CLIENT field and will not appear in the column selection criteria.
7. Populate the segments with the ranges returned by the queries in ascending order and click “Validate”:
Qlik Replicate - Validating Data Segments settings
The “Validate” button only checks that all the input fields have been populated and that the data in those fields matches the field data type.
8. Correct any error that may be discovered and click “OK”.
Segment boundaries should always be in ascending order. Failure to comply may result in an inconsistent data unload, generating duplicate records and/or missing data.
This article references two options for filtering the last 90 days' worth of data on a date column in Qlik Replicate.
Additionally, the goal is to show a working example of the full load passthru filter versus the record selection condition, so users can adapt the filters to different data types or conditions.
The full load pass thru filter is more efficient since it filters directly on the source. For this to work, you need to use the exact syntax of the source database. It can only be applied to full load tasks, not change data capture mode.
Example with Oracle source where START_TRAN_DATE is of type DATE:
START_TRAN_DATE > sysdate - 90
To open with Full load Passthru available:
If you see the error:
Table 'x' cannot be reloaded because a passthrough filter is defined for it. Passthrough filters allow task designers to control SQL statements executed on source database tables during replication. To continue using passthrough filters, you must explicitly set "enable_passthrough_filter" to true in the "C:\Program Files\Attunity\Replicate\bin\repctl.cfg" file. Otherwise, remove the passthrough filter from this table and any other tables defined with passthrough filters. [1020439] (endpointshell.c:3716)
Then:
This option can be used for both full load and change data capture tasks, but it is not as efficient for large tables as the full load passthru filter. It does not have to use the exact syntax of the source, i.e., the example below can work on many sources, because the filter condition is evaluated as an SQLite expression within Replicate.
An example where START_TRAN_DATE is of type DATE:
$START_TRAN_DATE >= DateTime('Now', 'LocalTime', '-90 Day')
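If you want to preview the cutoff such an expression resolves to before adding it as a record selection condition, you can evaluate the equivalent expression with Python's built-in sqlite3 module (note that standard SQLite spells the modifier '-90 days'). This is only a local sanity check; Replicate itself evaluates the expression shown above.

# Local sanity check of the 90-day cutoff using standard SQLite syntax.
import sqlite3

conn = sqlite3.connect(":memory:")
cutoff = conn.execute("SELECT DATETIME('now', 'localtime', '-90 days')").fetchone()[0]
print(cutoff)    # prints the timestamp 90 days before the current local time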
Filter for last 90 days part two with postgres example
The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
This capability has been rolled out across regions over time:
With the introduction of shared automations, it is now possible to create, run, and manage automations in shared spaces.
Limit the execution of an automation to specific users.
Every automation has an owner. When an automation runs, it will always run using the automation connections configured by the owner. Any Qlik connectors that are used will use the owner's Qlik account. This guarantees that the execution happens as the owner intended it to happen.
Both the user who created the run and the automation's owner at run time are logged in the automation run history.
There are five options for running an automation:
Collaborate on an automation through duplication.
Automations are used to orchestrate various tasks, from Qlik use cases like reload task chaining, app versioning, or tenant management, to action-oriented use cases like updating opportunities in your CRM, managing supply chain operations, or managing warehouse inventories.
To prevent users from editing these live automations, we're putting forward a collaborate-through-duplication approach. This makes it impossible for non-owners to change an automation in a way that could negatively impact operations.
When a user duplicates an existing automation, they will become the owner of the duplicate. This means the new owner's Qlik account will be used for any Qlik connectors, so they must have sufficient permissions to access the resources used by the automation. They will also need permissions to use the automation connections required in any third-party blocks.
Automations can be duplicated through the context menu:
As it is not possible to display a preview of the automation blocks before duplication, please use the automation's description to provide a clear summary of the purpose of the automation:
The Automations Activity Centers have been expanded with information about the space in which an automation lives. The Run page now also tracks which user created a run.
Note: Triggered automation runs will be displayed as if the owner created them.
The Automations view in Administration Center now includes the Space field and filter.
The Runs view in Administration Center now includes the Executed by and Space at runtime fields and filters.
The Automations view in the Automations Activity Center now includes the Space field and filter.
Note: Users can configure which columns are displayed here.
The Runs view in the Automations Activity Center now includes the Space at runtime, Executed by, and Owner fields and filters.
In this view, you can see all runs from automations you own as well as runs executed by other users. You can also see runs of other users' automations where you are the executor.
To see the full details of an automation run, go to Run History through the automation's context menu. This is also accessible to non-owners with sufficient permissions in the space.
The run history view will show the automation's runs across users, and the user who created the run is indicated by the Executed by field.
The metrics tab in the Automations Activity Center has been deprecated in favor of the automations usage app, which gives a more detailed view of automation consumption.
Question
I need to read data from a DB2 database, and the field type is defined as CHAR () FOR BIT DATA. When I create the connection in Talend metadata and try to view the data, it appears as hex. Using something like DBeaver, I can see the data. How can I get Talend to read the data correctly?
Tools like DBeaver automatically cast data types. To get the same result before processing in a component, add this to your SQL statement:
SELECT CAST(your_column AS VARCHAR(100) CCSID 37) AS utf8_col FROM your_table
CCSID 37 is US EBCDIC, used by IBM AS/400.
See a table here:
https://www.cs.umd.edu/~meesh/cmsc311/clin-cmsc311/Lectures/lecture6/ebcdic.html
To resolve login issues with your Qlik Stitch account:
Should this not resolve the issue, please do not hesitate to contact our Support team.
To prevent account lockout, refrain from submitting multiple password reset requests within a single day.
If you are on the Premium plan, you can use the connection options available as part of that plan, which include the options below:
In this article, you will find details on AWS PrivateLink and how it can be used within Stitch:
AWS PrivateLink exposes a network interface from one AWS account into another, enabling cross-account networking that stays within AWS's private network. Stitch can only establish PrivateLink connections to databases and data warehouses hosted in AWS within the same AWS regions in which Stitch operates (us-east-1 and eu-central-1).
To set up AWS Private Link, the user must provide the following:
CIDR block(s) on their network (e.g., 10.1.2.0/28, 10.2.2.0/28)
Service name for the VPC endpoint (e.g., com.amazonaws.vpce.us-east-1.vpce-svc-0626d1982ea6ca5a7)
Below is the process for establishing a PrivateLink connection:
The user follows this AWS PrivateLink guide (https://docs.aws.amazon.com/vpc/latest/privatelink/privatelink-share-your-services.html) to create an endpoint service (in this use case the customer is the “service provider” and Stitch is the “service consumer”).
Note: endpoint services must be backed by AWS Network Load Balancers and cannot be pointed directly at Amazon RDS instances.
The user grants permissions to Stitch’s AWS account to consume the endpoint service.
Stitch creates an endpoint interface to the service, and the user approves the connection.
If you have any questions, please contact Support.
This article describes how to resolve the NPrinting connection verification error:
x Qlik NPrinting webrenderer can reach Qlik Sense hub error
This article is intended to get started with the Microsoft Outlook 365 connector in Qlik Application Automation.
To authenticate with Microsoft Outlook 365, you create a new connection. The connector uses OAuth2 for authentication and authorization. You will be prompted with a popup screen to consent to a list of permissions for Qlik Application Automation to use. The OAuth scopes that are requested are:
The scope of this connector has been limited to sending emails only. Currently, we do not enable sending email attachments and are looking to provide this functionality in the future. The suggested approach is to upload files to a different platform, e.g., OneDrive or Dropbox, and create a sharing link that can be included in the email body.
The following parameters are available on the Send Email block:
As we do not currently support email attachments, we first need to generate a sharing link in OneDrive or an alternative file-sharing service. The following automation shows how to generate a report from a Qlik Sense app, upload the report to Microsoft OneDrive, create a sharing link, and send out an email with the sharing link in the body. This automation is also attached as JSON in the attachment to this post.
Feature requests are submitted to Qlik through our Ideation program, which is accessible via the Qlik Ideation Portal and is available for registered Qlik customers.
What would be applicable as a feature request for Qlik Stitch?
Certain fields you need are available through specific integrations, but are not currently supported by Stitch.
You're looking for additional flexibility or functionality in Stitch that isn’t yet available.
If either of these applies to you, we’d love to hear from you!
For instructions on submitting an idea or proposing an improvement, see How To Submit an Idea or Propose an Improvement For Qlik Products.
If you notice certain fields not receiving data from a specific date onward, or if data replication unexpectedly stops for some fields, it may indicate a potential data discrepancy issue. However, before diving into investigation or troubleshooting, it's crucial to first confirm whether a discrepancy actually exists.
To confirm whether a discrepancy exists, you will need to check the following things.
Stitch will create the _sdc_primary_keys even if none of the tables in the integration have a Primary Key. Primary Key data will be added to the table when and if a table is replicated that has a defined Primary Key. This means it’s possible to have an empty _sdc_primary_keys table.
If none of the above situations apply to you, please feel free to contact Support for further investigation or troubleshooting.
To follow the standard procedures for data discrepancy issues, please kindly provide the relevant information according to our document here:
data-discrepancy-troubleshooting-guide
You can also click Copy link. This will show a warning that the bookmark needs to be published; when you confirm the warning, the bookmark will be published automatically.
If the share option is not available, or if users who should see the bookmarks cannot find them, verify that Sense has not been set up with Security Rules that disallow sharing or access to specific objects. See the attached document for details.