This article provides details about the new permissions available for Private automations, which will replace the previous Automation Creator role.
You can assign these permissions to a custom role or the User Default. More information about creating and managing custom roles is available here.
The existing Automation Creator role will be deprecated later in 2026.
These permissions allow users to run and create automations in their personal space.
Earlier in 2025, we introduced the shared automation permission that allows users to run, create, and manage automations in shared spaces where they have the required permissions. More information about which space roles are required for the various automation actions is available here:
Understanding and managing your Qlik Cloud subscription consumption is essential for maintaining predictable costs, ensuring uninterrupted service, and optimizing resource allocation across your organization. This guide provides you with the tools, strategies, and best practices to gain complete visibility into how your subscription is being consumed and implement proactive controls to stay within your capacity limits.
While Qlik Cloud measures consumption at the tenant level, you can achieve effective governance through strategic monitoring, automated alerting, and space-based management practices. This guide will walk you through the monitoring tools available, how to automate their deployment and refresh, and practical approaches to tracking consumption patterns and implementing controls that align with your organizational needs.
Content:
The Administration activity center Home page provides your first line of visibility into capacity consumption. Understanding what this view offers and how it complements the detailed monitoring apps will help you build an effective monitoring strategy.
Navigate to the Administration activity center → Home to see a real-time dashboard summarizing capacity consumption. This view displays visual bar charts for consumption metrics relevant to your subscription:
Common metrics displayed:
Additional metrics may include Data Moved, Large App consumption, Qlik Predict deployed models, and others, depending on your subscription.
Metrics appear dynamically as features are adopted. If no one has asked an assistant question yet, that metric won't display until first use, keeping the dashboard focused on what you're consuming.
The Data for Analysis chart shows a current snapshot with the last update timestamp. Most metrics update multiple times per hour, providing near real-time visibility into your consumption position.
The Administration activity center provides high-level consumption visibility designed for rapid assessment. For detailed analysis, investigation, and proactive monitoring, you'll complement this view with a set of monitoring apps.
Daily quick check (2 minutes):
When you need more detail, the Home page tells you what is being consumed. The monitoring apps tell you who, where, when, and why. Capacity subscriptions should use the Data Capacity Reporting App as the source of truth, while the Qlik Cloud Monitoring apps can be treated as estimated consumption reports, for example:
Use the Home page for daily checks and status awareness. When consumption requires attention or you need to understand trends, drill into the appropriate monitoring app for detailed analysis.
For more information, see Monitoring resource consumption.
The Data Capacity Reporting App is your official, billable record of consumption for capacity-based subscriptions. This Qlik-supported application is generated once per day (morning Central European Time) and provides the definitive view of your consumption against your entitlement.
The app tracks eight key value meters across the current and previous two months:
This app represents your billable consumption record. The data in this app is what Qlik uses for official capacity reporting and billing purposes. When there's any discrepancy between this app and other monitoring sources, the Data Capacity Reporting App is the authoritative source. This app refreshes only once daily, meaning you see yesterday's official position, not real-time consumption. For more frequent monitoring and estimated usage, you'll complement this with the Qlik Cloud Monitoring Apps.
For detailed information, see Monitoring detailed consumption for capacity-based subscriptions.
Rather than manually distributing the consumption app from the Administration activity center each day, automate this process using the Capacity consumption app deployer template in Qlik Automate.
Setup steps:
This automation creates or uses designated spaces, imports the latest version, publishes it to a managed space, and maintains version history according to your configuration. You now have a single source of truth that updates automatically each day. Create automations or alerts on the published app for automated insights.
For complete details, see the Qlik Community article: Automate deployment of the Capacity consumption app with Qlik Automate.
While the official consumption report updates once daily, the Qlik Cloud Monitoring Apps (community-supported) can be reloaded multiple times per day up to your contractual reload limits, giving you more timely estimated usage insights.
The Qlik Cloud Monitoring Apps provide estimated consumption data that may differ slightly from the official Data Capacity Reporting App. Use these apps for trend monitoring, troubleshooting, and proactive management, but always refer to the Data Capacity Reporting App for official billable consumption figures.
Particularly valuable monitoring apps include:
App Analyzer: Provides comprehensive application usage and operational analytics, including:
Automation Analyzer: Provides detailed analysis of automation runs, including:
Reload Analyzer: Tracks data refresh activity, including:
Access Evaluator: Analyzes user roles, access, and permissions across your tenant
Report Analyzer: Tracks report generation, including:
Entitlement Analyzer: For user-based subscriptions, provides insights into:
For a complete list of available monitoring apps, see the Qlik Community article: The Qlik Sense Monitoring Applications for Cloud and On-Premise.
The Qlik Cloud Monitoring Apps deployer template simplifies installation and maintenance of these community apps.
What it handles:
Reload frequency considerations: You can reload these monitoring apps multiple times per day to get more current estimated usage data. However, each reload counts against your tenant's reload capacity limits. Consider your contractual limits when scheduling. For most organizations, reloading 2-4 times per day provides a good balance between timely insights and consumption.
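As a hedged illustration of how you might keep an eye on reload counts programmatically, the sketch below queries the Qlik Cloud Reloads REST API for a single app. The endpoint, query parameter, and response shape are based on the public API documented on qlik.dev; verify the exact names against the current documentation, and treat the tenant URL, API key, and app ID as placeholders.

```python
import requests

# Minimal sketch: count recent reloads recorded for one monitoring app,
# so scheduled refreshes can be weighed against tenant reload limits.
TENANT = "https://your-tenant.us.qlikcloud.com"  # placeholder
API_KEY = "<api-key>"                            # placeholder
APP_ID = "<monitoring-app-id>"                   # placeholder

resp = requests.get(
    f"{TENANT}/api/v1/reloads",                  # assumption: verify on qlik.dev
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"appId": APP_ID},                    # assumption: filter name
)
resp.raise_for_status()
reloads = resp.json().get("data", [])
print(f"Reloads returned for app {APP_ID}: {len(reloads)}")
```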
For complete implementation details, see the Qlik Community guide: Qlik Cloud Monitoring Apps Workflow Guide.
The monitoring apps are also available on GitHub: qlik-oss/qlik-cloud-monitoring-apps.
Effective governance comes from monitoring consumption at multiple levels and implementing proactive interventions. Here's how to approach monitoring for key consumption metrics.
Automation runs are counted across all automations in your tenant, regardless of owner or run mode (manual, scheduled, triggered, webhook, API). Test runs within the automation editor also count toward your limit.
What to monitor:
Tenant level:
Space level:
Automation level:
User level:
Example alert scenario: Using the Automation Analyzer, create alerts when:
Data for Analysis is measured by monthly peak usage. A single day's spike can impact your entire month's consumption.
This data is only available via the Data Consumption report; it is a lagging metric and currently lacks customer data such as app names, user names, and space names. As such, using an automation template to provide notifications may be preferable to standard alerts, and some app size metrics may be better analyzed in the Reload Analyzer.
What to monitor:
Tenant level:
App level:
Space level:
Example alert scenario: Using the Data Capacity Reporting App and Reload Analyzer:
Each subscription tier has limits on maximum concurrent reloads, and capacity subscriptions have daily reload counts. Exceeding concurrent limits causes queuing; exceeding daily limits can block operations.
What to monitor:
Tenant level:
Space level:
App level:
Example alert scenario: Using the Reload Analyzer:
Report generation counts vary by subscription tier, with add-on packs available for purchase. Across all reporting capabilities, tenants have a maximum of 30,000 reporting-related requests per day.
What to monitor:
Tenant level:
Report task level:
Example alert scenario: Using consumption reporting and monitoring apps:
For detailed information on report limits, see Qlik Reporting Service specifications and limitations.
While Qlik Cloud measures consumption at the tenant level, you can implement effective governance practices that provide meaningful control over resource usage.
Make users aware of the impacts of their consumption and empower them to monitor their own usage.
Implementation:
Create early warning systems that trigger well before official capacity notifications.
Implementation:
Alert tier 1 (60-70% of capacity):
Alert tier 2 (75-85% of capacity):
Alert tier 3 (90%+ of capacity):
Use strict space controls to prevent development consumption from impacting production limits, or procure a development subscription from Qlik to fully isolate capacity.
Implementation:
For information on subscription types and capacity planning, see Qlik Cloud capacity-based subscriptions.
Now that you have the monitoring apps deployed and refreshed regularly, you can leverage Qlik Cloud's built-in alerting and distribution capabilities to create a proactive monitoring system. These tools transform static consumption data into actionable intelligence that reaches the right people at the right time.
Data Alerts: Create threshold-based alerts that evaluate conditions on a schedule and notify recipients when conditions are met. Alerts can be created on any chart or measure in your monitoring apps and can be shared with users or groups. Included in all plans.
Subscriptions: Schedule automatic distribution of charts, sheets, or entire apps to users via email or Microsoft Teams. Subscriptions ensure stakeholders receive regular consumption reports without needing to log into Qlik Cloud. Included in all plans.
In-app monitoring: Create bookmarks and sheets in the monitoring apps that focus on specific consumption areas. Share these bookmarks with space owners or functional teams so they can self-service their consumption monitoring. Included in all plans.
Automations: Build custom workflows that trigger actions based on consumption thresholds, such as sending notifications through Slack, creating tickets in ServiceNow, or disabling specific automations when limits are approached. Value-add feature if third-party connectors are used.
Creating Data Alerts:
Sum(AutomationRuns) > 4000
Creating Subscriptions:
Creating In-App Bookmarks:
Creating Automations:
All of these tools support distribution to groups, making it easy to ensure the right teams have visibility into the consumption metrics relevant to them. Space administrators can receive alerts about their space consumption, development teams can get daily subscription reports, and executive stakeholders can receive monthly summary reports.
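To make the automation pattern concrete, here is the same threshold-check-and-notify logic expressed as a minimal Python sketch rather than an automation. It is illustrative only: the Slack webhook URL is a placeholder, and the consumption figure would in practice come from one of the monitoring apps or a consumption export.

```python
import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder
MONTHLY_LIMIT = 50000   # example automation-run entitlement (placeholder)
current_runs = 41500    # placeholder: read from your monitoring data

usage = current_runs / MONTHLY_LIMIT
if usage >= 0.80:
    # Post a simple text alert to a Slack channel via an incoming webhook
    requests.post(
        SLACK_WEBHOOK,
        json={"text": f"Qlik automation runs at {usage:.0%} of the monthly "
                      f"limit ({current_runs:,}/{MONTHLY_LIMIT:,})."},
    )
```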
The following examples demonstrate how to set up comprehensive monitoring for different consumption metrics. These examples assume you have deployed the Capacity consumption app deployer (running daily around midday UTC) and the Qlik Cloud Monitoring Apps deployer (running overnight) with default settings.
Explore the apps to discover a wide range of operational metrics you can monitor, alert, automate, and subscribe to.
Scenario: Your organization uses third-party automation blocks (such as Slack, ServiceNow, or Salesforce connectors), which incur additional costs based on consumption. You need to monitor third-party automation runs to prevent unexpected charges and identify which automations are driving costs.
Navigate to the Automation Analyzer and create the following alerts:
Alert 1: Third-party runs approaching limit
Alert 2: Individual user excessive third-party runs
Automation - Automation usage notifier: Automation or user email notifications
This approach allows you to send email notifications or take action directly on the executing users or owners, while sending a fully customized template to notify them that they are approaching limits.
See Automation Usage Notifier | GitHub for details.
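If you prefer to pull run history directly rather than working in the Automation Analyzer, a hedged sketch against the Qlik Cloud Automations REST API follows. The endpoint path and response shape are assumptions based on the API published on qlik.dev; confirm them before use, and treat the tenant URL, API key, and automation ID as placeholders.

```python
import requests

TENANT = "https://your-tenant.us.qlikcloud.com"  # placeholder
API_KEY = "<api-key>"                            # placeholder
AUTOMATION_ID = "<automation-id>"                # placeholder

resp = requests.get(
    f"{TENANT}/api/v1/automations/{AUTOMATION_ID}/runs",  # verify on qlik.dev
    headers={"Authorization": f"Bearer {API_KEY}"},
)
resp.raise_for_status()
runs = resp.json().get("data", [])
print(f"Runs returned for automation {AUTOMATION_ID}: {len(runs)}")
```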
Scenario: Your Data for Analysis consumption is measured by monthly peak usage. You need early warning when daily peaks are trending upward and visibility into which apps are driving consumption.
Step 1: Create peak usage alerts in the Data Capacity Reporting App
Alert 1: Warning capacity threshold
Alert 2: Critical capacity threshold
Step 2: Create a weekly trend subscription
In the Data Capacity Reporting App:
Scenario: You want to create a comprehensive monthly review package that combines official billable data with estimated usage trends to facilitate informed capacity planning discussions.
Create a Qlik Automate automation that runs on the first business day of each month:
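Automation schedules are typically expressed as simple intervals, so one common way to approximate "first business day" is to run the automation daily and exit early unless the date qualifies. A minimal sketch of that check (assuming weekends only, no holiday calendar):

```python
from datetime import date, timedelta

def first_business_day(year: int, month: int) -> date:
    """Return the first weekday of the month (holidays not considered)."""
    day = date(year, month, 1)
    while day.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        day += timedelta(days=1)
    return day

today = date.today()
if today == first_business_day(today.year, today.month):
    print("Run the monthly review package")  # continue with the workflow
```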
The key to managing Qlik Cloud consumption effectively is shifting from reactive (waiting for 80%/90%/100% notifications) to proactive (continuous monitoring with early intervention).
This week:
This month:
Ongoing:
By combining automated monitoring through the official Data Capacity Reporting App and community monitoring apps, tiered alerts, clear governance policies, and proactive intervention workflows, you can effectively manage your subscription costs and maintain predictable, controlled consumption across your organization.
Qlik Help documentation:
Qlik Community Official Support Articles:
Developer resources:
The Qlik Cloud Monitoring Apps are community-supported and provided as-is. They are not officially supported by Qlik, though they are maintained through Qlik's Open-Source Software GitHub. The Capacity consumption app deployer and Qlik Cloud Monitoring Apps deployer are supported automation templates found in the template picker catalog.
When setting up a Microsoft SQL Server Always On Availability Group (AG) along with a Windows Failover Cluster, are there any additional SQL Server–side configurations or Talend-specific database settings required to run Talend Job against a MSSQL Always On database?
Talend Jobs need to be adapted at the JDBC connection level to ensure proper failover handling and connection resiliency, by setting the relevant parameters in the Additional JDBC Parameters field.
Talend should connect to SQL Server using either the Availability Group Listener (AG Listener) DNS name or the Failover Cluster Instance (FCI) virtual network name, and include specific JDBC connection parameters.
Sample JDBC Connection URL:
jdbc:sqlserver://<AG_Listener_DNS_Name>:1433;
databaseName=<Database_Name>;
multiSubnetFailover=true;
loginTimeout=60
Replace <AG_Listener_DNS_Name> and <Database_Name> with your actual values. Port 1433 is the default SQL Server port unless otherwise configured.
multiSubnetFailover=true
Enables fast reconnection after AG failover and is mandatory for multi-subnet or DR-enabled AG environments.
applicationIntent=ReadWrite (optional, usage-dependent)
Ensures write operations are always routed to the primary replica.
Valid values:
ReadWrite
ReadOnly
loginTimeout=60
Prevents premature Talend Job failures during transient failover or brief network interruptions.
Before promoting any changes to the Production environment, it is essential to perform failover and reconnection stress tests in the DEV/QA environment. This will help validate the behavior of Talend Jobs during:
Talend JDBC connection parameters | Qlik Talend Help Center
Microsoft JDBC driver support for Always On / HA-DR | learn.microsoft.com
SQL Server JDBC connection properties | learn.microsoft.com
This article explains how to extract changes from a Change Store by using the Qlik Cloud Services connector in Qlik Automate and how to sync them to an Excel file.
While the example uses a Microsoft Excel file, it can easily be modified to create a CSV as well.
The article also includes:
Content
You will need the following:
Week start is included in the primary key because the purchasing process (making the changes) happens on a weekly basis.
Product Name is included in the primary key to make sure it is always returned when retrieving changes through the Get Current Changes From Change Store block in Qlik Automate.
Below is an example of the table in an app:
Optionally, you can use the app that is included in this article. Follow these steps to install the app and configure the Write Table:
Set the third one (Value) to the destinationFileName (E) variable.
Operator: equals
Search for the Right trim formula.
Configure the Character to trim parameter to a single comma.
Type a single square bracket after the field mapping in the Rows input field:
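To see what the trim-and-bracket steps accomplish, here is the same string assembly expressed as a small Python sketch (field names are illustrative, not from the article's app): the loop emits one JSON object per change followed by a comma, the right-trim removes the final trailing comma, and the square brackets turn the result into a valid JSON array of rows.

```python
import json

# Illustrative changes; your Write Table columns will differ.
changes = [
    {"Product Name": "Widget", "Order Quantity": 10},
    {"Product Name": "Gadget", "Order Quantity": 5},
]

rows = "".join(json.dumps(change) + "," for change in changes)
rows = rows.rstrip(",")       # the 'Right trim' formula step
payload = "[" + rows + "]"    # the square brackets around the field mapping
print(payload)                # a valid JSON array of row objects
```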
The automation is now configured and can be run manually. But ideally, a user can run it from within the Qlik Sense app whenever they are finished with creating orders through the Write Table.
This article will only cover the button’s configuration in a sheet. A step-by-step guide on configuring the button object to run automations is available in How to run an automation with custom parameters through the Qlik Sense button.
The Copy File block will fail if an Excel file with the same name already exists. Depending on the use case, that might be the desired behavior, or you might want to overwrite the file.
The overwrite process explained below will delete the existing file and then create a new file.
Add a Condition block to the automation and configure it to evaluate the output from the Check If File Exists block.
This block will return a Boolean (true or false) result. If it is true, the file exists.
Configure the Condition block to evaluate that output using the Boolean 'is true' operator:
Qlik Automate can also be used to share the purchase order with your purchasing team. This can be built in the same automation or in a separate automation. Below are the steps to add this to the same automation.
Tip! Update the button label to make it clear to users of your app that clicking it will also send the purchase order.
As an alternative, it is also possible to add these blocks to a new automation that is triggered from a second button.
Is it possible to use different IdPs in a Qlik Cloud multi-tenant deployment?
Qlik Cloud is designed to support a single interactive Identity Provider (IdP) per tenant. For details, see Why Qlik doesn't support multiple interactive identity providers on a Qlik Cloud tenant. Identity Federation can be used to link user identities across IdPs. See Using Multiple concurrent Identity Providers with Qlik Cloud.
A multi-tenant Qlik Cloud deployment allows for additional flexibility. Specific IdPs can be assigned to different tenants. For example, two tenants could make use of two separate IdPs.
The following limitations and risks apply:
This article explains how to extract changes from a Change Store and store them in a QVD by using a load script in Qlik Analytics.
The article also includes:
This example will create an analytics app for Vendor Reviews. The idea is that you, as a company, are working with multiple vendors. Once a quarter, you want to review these vendors.
The example is simplified, but it can be extended with additional data for real-world examples or for other “review” use cases like employee reviews, budget reviews, and so on.
The app’s data model is a single table “Vendors” that contains a Vendor ID, Vendor Name, and City:
Vendors:
Load * inline [
"Vendor ID","Vendor Name","City"
1,Dunder Mifflin,Ghent
2,Nuka Cola,Leuven
3,Octan,Brussels
4,Kitchen Table International,Antwerp
];
The Write Table contains two data model fields: Vendor ID and Vendor Name. They are both configured as primary keys to demonstrate how this can work for composite keys.
The Write Table is then extended with three editable columns:
This article explains how to extract changes from a Change Store by using the Qlik Cloud Services connector in Qlik Automate and how to sync them to a database.
The example will use a MySQL database, but it can easily be modified to use other database connectors supported in Qlik Automate, such as MSSQL, Postgres, AWS DynamoDB, AWS Redshift, Google BigQuery, and Snowflake.
The article also includes:
Content
Here is an example of an empty database table for a change store with:
Run the automation manually by clicking the Run button in the automation editor and verify that records appear in the MySQL table:
There is currently no incremental version of the Get Change Store History block. While this is on our roadmap, the automation from this article can be extended to perform incremental loads by first retrieving the highest updatedAt value from the MySQL table. The steps below explain how the automation can be extended:
SELECT MAX(updatedAt) FROM <your database table>
The solution documented in the previous section will execute the Upsert Record block once for each cell with changes in the change store. This may create too much traffic for some use cases. To address this, the automation can be extended to support bulk operations and insert multiple records in a single database operation.
The approach is to transform the output of the List Change Store History block from a nested list of changes into a list of records that contains the changes grouped by primary key, userId, and updatedAt timestamp.
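As a conceptual illustration of that grouping-and-bulk-write pattern outside of Qlik Automate, the Python sketch below groups cell-level changes by primary key, userId, and updatedAt, then writes all records in a single statement with PyMySQL. Table name, column names, and change fields are illustrative placeholders, not the article's exact schema.

```python
import pymysql  # pip install pymysql

# Illustrative cell-level changes as returned from a change store.
changes = [
    {"key": "1", "field": "score", "value": "8",
     "userId": "u1", "updatedAt": "2024-05-01T10:00:00Z"},
    {"key": "1", "field": "comment", "value": "Good",
     "userId": "u1", "updatedAt": "2024-05-01T10:00:00Z"},
]

# Group changes so each (key, userId, updatedAt) becomes one record.
grouped = {}
for c in changes:
    rec = grouped.setdefault(
        (c["key"], c["userId"], c["updatedAt"]),
        {"key": c["key"], "userId": c["userId"], "updatedAt": c["updatedAt"]},
    )
    rec[c["field"]] = c["value"]  # one column per changed field

rows = [(r["key"], r["userId"], r["updatedAt"],
         r.get("score"), r.get("comment")) for r in grouped.values()]

conn = pymysql.connect(host="<host>", user="<user>",
                       password="<password>", database="<db>")
with conn.cursor() as cur:
    # One round trip for all records instead of one upsert per cell.
    cur.executemany(
        "INSERT INTO change_records (pk, userId, updatedAt, score, comment) "
        "VALUES (%s, %s, %s, %s, %s) "
        "ON DUPLICATE KEY UPDATE score=VALUES(score), comment=VALUES(comment)",
        rows,
    )
conn.commit()
conn.close()
```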
See the attached automation example: Automation Example to Bulk Extract Change Store History to MySQL Incremental.json.
The provided automations will require additional configuration after being imported, such as changing the store, database, and primary key setup.
Automation Example to Extract Change Store History to MySQL Incremental.json
Automation Example to Bulk Extract Change Store History to MySQL Incremental.json
If field names in the change store don't match the database (or another destination), the Replace Field Names In List block can be used to translate the field names from one system to another.
To add a more readable parameter to track the user who made changes, the Get User block from the Qlik Cloud Services connector can be used to map User IDs into email addresses or names.
A user's name might not be sufficient as a unique identifier. Instead, combine it with a user ID or user email.
Add a button chart object to the sheet that contains the Write Table, allowing users to start the automation from within the Qlik app. See How to run an automation with custom parameters through the Qlik Sense button for more information.
Environment
A Job design is shown below, using a tSetKeystore component in the preJob to set the keystore file, followed by a tMysqlConnection component to establish a MySQL connection. However, the MySQL connection fails.
By changing the order of the components as demonstrated below, the MySQL connection succeeds.
To address this issue, you can choose from the following solutions without altering the order of the tSetKeyStore and tMysqlConnection components.
tSetKeystore sets values for javax.net.ssl system properties, thereby affecting subsequent components. Most recent MySQL versions use SSL connections by default. Since the Java SSL environment has been modified, the MySQL JDBC driver inherits these changes from tSetKeystore, which can potentially impact the connection.
A Job design is presented below:
tSetKeystore: set the Kafka truststore file.
tKafkaConnection, tKafkaInput: connect to the Kafka cluster as a consumer and receive messages.
However, while running the Job, an exception occurs in the tKafkaInput component:
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
Make sure to execute the tSetKeystore component prior to the Kafka components so the Job can locate the certificates required for the Kafka connection. To achieve this, connect the tSetKeystore component to tKafkaConnection using an OnSubjobOK link, as demonstrated below:
For more detailed information on trigger connectors, specifically OnSubjobOK and OnComponentOK, please refer to this KB article: What is the difference between OnSubjobOK and OnComponentOK?.
This article explains whether changing integration credentials or the host address for a database integration requires an integration reset in Stitch. It will also address key differences between key-based incremental replication and log-based incremental replication.
Updating credentials (e.g., username or password) does not require an integration reset. Stitch will continue replicating data from the last saved bookmark values for your tables according to the configured replication method.
Changing the host address is more nuanced and depends on the replication method:
Important:
If the database name changes, Stitch treats it as a new database:
| Change Type | Key-Based Replication | Log-Based Replication |
| --- | --- | --- |
| Credentials | No reset required | No reset required |
| Host Address | No reset (if search path unchanged) | Reset required |
| Database Name | Reset required | Reset required |
This article aims to help you collaborate with Qlik Support as effectively as possible. It defines your responsibilities as a partner, including reproducing issues, performing basic troubleshooting, and consulting the knowledge base or official documentation.
Before contacting Qlik Talend Technical Support, partners must complete the steps outlined in Qlik Responsible Partner Duties and should review the OEM/MSP Support Policy to understand the scope of support and the expectations on Partners.
Content
Identify which Qlik product, source endpoint, target endpoint, environment, or system layer is experiencing the issue.
For example, if a task fails in Qlik Replicate on Windows, check whether the issue occurs in a single task or across all tasks, whether it happens during full load or CDC, and whether it is related to a specific table or the data itself.
Similarly, if the issue occurs in only one environment (e.g., Production), ask the customer to confirm whether it can be reproduced in a test environment, or test in your own environment to determine if the issue is environment-related.
Always include the exact product name, version, source endpoint, and target endpoint the customer is using.
Many issues are version- or endpoint-related, and Support cannot accurately investigate the issue without this information.
If the product the customer is using has reached End of Life (EOL), please plan an upgrade. If the issue is reproducible on the latest version, please reach out to us so that we can investigate and determine whether it is a defect or working as designed.
For End of Life or End of Support information, see Product Lifecycle.
Partners are expected to recreate the customer’s environment (matching versions, configurations, and other relevant details) and attempt to reproduce the issue.
If you do not already have a test environment, please ensure one is set up. Having your own environment is essential for reproducing issues and confirming whether the same behavior occurs outside of the customer’s setup.
In some cases, it may also be helpful to test in a clean environment to rule out local configuration issues.
If the issue does not occur in the newer version or a clean setup, it may have already been resolved, and you can propose an upgrade as a solution.
See the Release Notes for the list of resolved issues.
Regardless of whether the issue could be reproduced, please include:
While pasting a portion of the log into the case comment can help highlight the main error, it is still required to attach the Diagnostic Package with the entire original log file (using, for example, FileCloud).
Support requires the full verbose logs and task settings to understand the overall context and verify that the partial information provided is accurate and complete.
It is difficult to verify the root cause or provide reliable guidance without full verbose logs.
Additionally:
Please do not simply forward or copy and paste the customer’s inquiry.
As a responsible partner, you are expected to perform an initial investigation. In your case submission, clearly describe:
Sharing this thought process:
Even if the issue is still unresolved, outlining what you have already tried helps Support address it more quickly and effectively.
Attach all relevant files you have received from the customer and personally reviewed during your investigation, as well as all relevant files you have used when reproducing the steps.
Providing both the customer’s files and your reproduction files enables Support to verify whether the same issue occurs under the same conditions and to determine if the problem is reproducible, environment-specific, or specific to the customer’s configuration.
This includes (but is not limited to):
All support cases must be submitted using your official partner account, not the customer's account.
If you do not yet have a partner account, contact Qlik Customer Support to request access and to receive the appropriate onboarding.
Review the support policy and set the case severity properly. See Qlik Support Policy and SLAs.
This template provides guidance on what to include and how to structure your case.
What happened? When did it happen? Where did it occur?
Clearly describe the issue, including:
Specify if applicable:
Find your Qlik Cloud Subscription ID and Tenant Hostname and ID
Only include what is needed based on the case type.
List the files you’ve included in the case and provide a brief description of each.
Summary of your Investigation
Explain what steps you took to investigate the issue before contacting Support.
Examples:
Thank you! We appreciate your cooperation in following these guidelines.
This ensures that your cases can be handled efficiently and escalated quickly when necessary.
This error occurs with the Google Cloud SQL PostgreSQL database integration and it displays as below in the extraction logs:
Fatal Error Occured - ERROR: temporary file size exceeds temp_file_limit
To resolve this issue, you need to increase the temp_file_limit parameter in your PostgreSQL configuration.
Here are the steps to fix it:
Access your Google Cloud SQL instance settings.
Locate the database flags or parameters section.
Find the temp_file_limit flag and increase its value.
The value is specified in kilobytes (kB).
The default in PostgreSQL is -1, which means no limit. However, Cloud SQL may enforce a smaller custom value depending on your instance configuration.
If you’re unsure about the appropriate value, start by doubling the current limit and adjust as needed based on your workload. Increasing this limit allows larger queries to complete but may also increase storage usage, so monitor performance and disk space after making the change.
Save the changes.
Updating database flags in Cloud SQL typically requires a restart of the instance for the new settings to take effect.
After modifying the temp_file_limit, restart your PostgreSQL instance (if required) and run an extraction in Stitch.
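To confirm the new flag value is active after the restart, you can query it directly from the database. A minimal sketch using psycopg2 follows; the connection details are placeholders.

```python
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect(
    host="<cloud-sql-host>", dbname="<database>",
    user="<user>", password="<password>",
)
cur = conn.cursor()
cur.execute("SHOW temp_file_limit;")   # reports the active setting
print("temp_file_limit =", cur.fetchone()[0])
cur.close()
conn.close()
```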
The error message indicates that the temporary file size has exceeded the temp_file_limit in your Google Cloud SQL PostgreSQL database. This limit is set to control the maximum size of temporary files used during query execution.
See Google’s documentation on configuring database flags here:
Configure database flags | cloud.google.com
To investigate task failures, it is necessary to collect the Diagnostics Package from Qlik Cloud Data Integration.
Option Two: Monitor view within the task
Often, Support will request that specific logging components be increased to Verbose or Trace in order to effectively troubleshoot. To modify, click on the "Logging options" located in the right-hand corner of the logs view. The options presented in the UI do not use the same terminology as what you see in the logs themselves. For better understanding, please refer to this mapping:
| UI | Logs |
| --- | --- |
| Source - full load | SOURCE_UNLOAD |
| Source - CDC | SOURCE_CAPTURE |
| Source - data | SOURCE_UNLOAD SOURCE_CAPTURE SOURCE_LOG_DUMP DATA_RECORD |
| Target - full load | TARGET_LOAD |
| Target - CDC | TARGET_APPLY |
| Target - Upload | FILE_FACTORY |
| Extended CDC | SORTER SORTER_STORAGE |
| Performance | PERFORMANCE |
| Metadata | SERVER TABLES_MANAGER METADATA_MANAGER METADATA_CHANGES |
| Infrastructure | IO INFRASTRUCTURE STREAM STREAM_COMPONENT TASK_MANAGER |
| Transformation | TRANSFORMATION |
Please note that if the View task logs option is not present in the dropdown menu, it indicates that the type of task you are working with does not have available task logs. In the current design, only Replication and Landing tasks have task logs.
Qlik Automate is a no-code automation and integration platform that lets you visually create automated workflows. It allows you to connect Qlik capabilities with other systems without writing code. Powered by Qlik Talend Cloud APIs, Qlik Automate enables users to create powerful automation workflows for their data pipelines.
Learn more about Qlik Automate.
In this article, you will learn how to set up Qlik Automate to deploy a Qlik Talend Cloud pipeline project across spaces or tenants.
To ease your implementation, there is a template on Qlik Automate that you can customize to fit your needs.
You will find it in the template picker: navigate to Add new → New automation → Search templates and search for ‘Deploying a Data Integration pipeline project from development to production' in the search bar, and click Use template.
ℹ️ This template will be generally available on October 1, 2025.
In this deployment use case, the development team made changes to an existing Qlik Talend Cloud (QTC) pipeline.
As the deployment owner, you will redeploy the updated pipeline project from a development space to a production space where an existing pipeline is already running.
To reproduce this workflow, you'll first need to create:
Using separate spaces and databases ensures a clear separation of concerns and responsibilities in an organization and reduces the risk to production pipelines while the development team is working on feature changes.
Workflow steps:
ℹ️ Note: This is a re-deployment workflow. For initial deployments, create a new project prior to proceeding with the import.
Use the 'Export Project' block to call the corresponding API, using the ProjectID.
This will download your DEV project as a ZIP file. In Qlik Automate, you can use various cloud storage options, e.g. OneDrive. Configure the OneDrive 'Copy File on Microsoft OneDrive' block to store it at the desired location.
To avoid duplicate file names (which may cause the automation to fail) and to easily differentiate your project exports, use the 'Variable' block to define a unique prefix (such as dateTime).
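For illustration, here is the same dateTime-prefix idea in Python: a sortable, collision-resistant file name for each export. The base file name is a placeholder.

```python
from datetime import datetime, timezone

# UTC timestamp prefix keeps exports unique and chronologically sortable.
prefix = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
file_name = f"{prefix}_dev-project-export.zip"
print(file_name)  # e.g. 20250101T120000Z_dev-project-export.zip
```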
From the 'Qlik Talend Data Integration' connector, use the 'Get Project Binding' block to call the API endpoint.
The 'bindings' are project variables that are tied to the project and can be customized for reuse in another project. Once you test-run it, store the text response for later use from the 'History' tab in the block configuration pane on the right side of the automation canvas:
We will now use the 'bindings' from the previous step as a template to adjust the values for your PROD pipeline project, before proceeding with the import.
From the automation, use the 'Update Project Bindings' block. Copy the response from the 'Get Project Binding' block into the text editor and update the DEV values with the appropriate PROD variables (such as the source and target databases). Then, paste the updated text into the Variables input parameter of the 'Update Project Binding' block.
ℹ️ Note: these project variables are not applied dynamically when you run the 'Update Bindings' using the Qlik Automate block. They are appended and only take effect when you import the project.
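To clarify the DEV-to-PROD substitution, the sketch below loads a bindings response, swaps DEV values for their PROD counterparts, and prints the text to paste into the 'Update Project Binding' block. The bindings structure shown here is an assumption for illustration; use the actual response captured from your own tenant.

```python
import json

# Mapping of DEV binding values to their PROD replacements (placeholders).
dev_to_prod = {
    "DEV_SOURCE_DB": "PROD_SOURCE_DB",
    "DEV_TARGET_DB": "PROD_TARGET_DB",
}

# Illustrative response text from the 'Get Project Binding' block.
bindings_text = ('{"variables": ['
                 '{"name": "sourceDb", "value": "DEV_SOURCE_DB"}, '
                 '{"name": "targetDb", "value": "DEV_TARGET_DB"}]}')
bindings = json.loads(bindings_text)

for var in bindings.get("variables", []):
    var["value"] = dev_to_prod.get(var["value"], var["value"])

print(json.dumps(bindings, indent=2))  # paste into the Variables input
```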
For a Change Data Capture (CDC) project, you must stop the project before proceeding with the import.
Use the 'Stop Data Task' block from the 'Qlik Talend Data Integration' connector. You will find the connectors in the Block Library pane on the left side of the automation canvas.
Fill in the ProjectID and TaskID:
ℹ️ We recommend using variable-based logic to handle task stopping in the automation. Please refer to the template configuration and customize it to your needs.
You’re now ready to import the DEV project contents into the existing PROD project.
⚠️ Warning: Importing the new project will overwrite any existing content in the PROD project.
Using the OneDrive block and the 'Import Project' block, we will import the previously saved ZIP file.
ℹ️ In this template, the project ID is handled dynamically using the variable block. Review and customize this built-in logic to match your environment and requirements.
After this step is completed, your project is now deployed to production.
It is necessary to prepare your project before restarting it in production. Preparing ensures it is ready to run by creating or recreating the required artifacts (such as tables).
The 'Prepare Project' block uses the ProjectID to prepare the project tasks by using the built-in project logic. You can also specify one or more specific tasks to prepare using the 'Data Task ID' field. In our example, we are reusing the previously set variable to prepare the same PROD project we just imported.
If your pipeline is damaged, and you need to recreate artifacts from scratch, enable the 'Allow recreate' option. Caution: this may result in data loss.
Triggering a 'Prepare' results in a new 'actionID'. This ID is used to query the action status via the 'Get Action Status' API block in Qlik Automate. We use an API polling strategy to check the status at a preset frequency.
Depending on the number of tables, the preparation can take up to several minutes.
Once we get the confirmation that the preparation action was 'COMPLETED', we can move on with restarting the project tasks.
If the preparation fails, you can define an adequate course of action, such as creating a ServiceNow ticket or sending a message on a Teams channel.
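For reference, here is the polling strategy as a minimal Python sketch: check the status at a preset interval until it is terminal or a timeout is reached. The get_action_status function is a simulated stand-in for the 'Get Action Status' API block, not a real client.

```python
import itertools
import time

# Simulated stand-in for the 'Get Action Status' API block; a real script
# would call the tenant's API with the actionID instead.
_fake = itertools.chain(["QUEUED", "RUNNING"], itertools.repeat("COMPLETED"))
def get_action_status(action_id: str) -> str:
    return next(_fake)

def wait_for_preparation(action_id: str,
                         interval_s: int = 30,
                         timeout_s: int = 1800) -> str:
    """Poll at a fixed interval until a terminal status or timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_action_status(action_id)
        if status in ("COMPLETED", "FAILED"):
            return status
        time.sleep(interval_s)
    return "TIMED_OUT"

# Escalate (ServiceNow ticket, Teams message) on anything but "COMPLETED".
print(wait_for_preparation("<actionID>", interval_s=1, timeout_s=10))
```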
ℹ️ Tip: Review the template's conditional blocks configuration to handle different preparation statuses and customize the logic to fit your needs.
Now that your project is successfully prepared, you can restart it in production.
In this workflow, we use the 'List Data Tasks' block to filter on 'landing' and 'storage' for the production project, and restart these tasks automatically.
All done: your production pipeline has been updated, prepared, and restarted automatically!
Now it’s your turn: fetch the Qlik Automate template from the template library and start automating your pipeline deployments.
Start a Qlik Talend Cloud® trial
How to get started with the Qlik Talend Data Integration blocks in Qlik Automate