Qlik offers a wide range of channels to assist you in troubleshooting, answering frequently asked questions, and getting in touch with our technical experts. In this article, we guide you through all available avenues to secure your best possible experience.
For details on our terms and conditions, review the Qlik Support Policy.
We're happy to help! Here's a breakdown of resources for each type of need.
Support | Professional Services (*)
Reactively fixes technical issues and answers narrowly defined questions. Handles administrative issues to keep the product up to date and functioning. | Proactively accelerates projects, reduces risk, and achieves optimal configurations. Delivers expert help for training, planning, implementation, and performance improvement.
(*) Reach out to your Account Manager or Customer Success Manager.
Your first line of support: https://community.qlik.com/
Looking for content? Type your question into our global search bar:
Leverage the enhanced and continuously updated Knowledge Base to find solutions to your questions and best practice guides. Bookmark this page for quick access!
Subscribe to maximize your Qlik experience!
The Support Updates Blog
The Support Updates blog delivers important and useful Qlik Support information about end-of-product support, new service releases, and general support topics.
The Qlik Design Blog
The Design blog is all about products and Qlik solutions, covering scripting, data modelling, visual design, extensions, best practices, and more!
The Product Innovation Blog
By reading the Product Innovation blog, you will learn about what's new across all of the products in our growing Qlik product portfolio.
Q&A with Qlik
Live sessions with Qlik Experts in which we focus on your questions.
Techspert Talks
Techspert Talks is a free monthly webinar series that facilitates knowledge sharing.
Technical Adoption Workshops
Our in-depth, hands-on workshops allow new Qlik Cloud Admins to build alongside Qlik Experts.
Qlik Fix
Qlik Fix is a series of short videos with helpful solutions for Qlik customers and partners.
Suggest an idea, and influence the next generation of Qlik features!
Search & Submit Ideas
Ideation Guidelines
Get the full value of the community.
Register a Qlik ID:
Incidents are supported through our Chat, by clicking Chat Now on any Support Page across Qlik Community.
To raise a new issue, all you need to do is chat with us. With this, we can:
Log in to manage and track your active cases in the Case Portal.
Please note: to create a new case, it is easiest to do so via our chat (see above). Our chat will log your case through a series of guided intake questions.
When creating a case, you will be prompted to enter problem type and issue level. Definitions shared below:
Select Account Related for issues with your account, licenses, downloads, or payment.
Select Product Related for technical issues with Qlik products and platforms.
If your issue is account related, you will be asked to select a Priority level:
Select Medium/Low if the system is accessible, but there are some functional limitations that are not critical to daily operations.
Select High if there are significant impacts on normal work or performance.
Select Urgent if there are major impacts on business-critical work or performance.
If your issue is product related, you will be asked to select a Severity level:
Severity 1: Qlik production software is down or not available, but not because of scheduled maintenance and/or upgrades.
Severity 2: Major functionality is not working in accordance with the technical specifications in documentation or significant performance degradation is experienced so that critical business operations cannot be performed.
Severity 3: Any error that is not a Severity 1 or Severity 2 issue. For more information, visit our Qlik Support Policy.
If you require a support case escalation, you have two options:
When other Support Channels are down for maintenance, please contact us via phone for high severity production-down concerns.
A collection of useful links.
Qlik Cloud Status Page
Keep up to date with Qlik Cloud's status.
Support Policy
Review our Service Level Agreements and License Agreements.
Live Chat and Case Portal
Your one stop to contact us.
For the Qlik Talend On-premises solution, when global context values are used in all jobs and you want to change the file path values (for example, replacing d:/ with t:/), you can propagate the change to all jobs in the Studio.
What about Qlik Talend Cloud and Hybrid solutions? This article provides a brief introduction to changing the value of context parameters and propagating the change to all Tasks in Talend Cloud.
You can use this API to update artifact parameters:
PUT https://api.eu.cloud.talend.com/orchestration/executables/tasks/{taskid}
In the body, set the parameters you want to update.
Example: change the "ContextFilePath" value from "C:/Franking/in.csv" to "h:/Franking/in.csv":
{
  "name": "contextpath",
  "description": "CC",
  "workspaceId": "61167bef18d7d656bfae071d",
  "artifact": {
    "id": "689b5d04febbe74489779c31",
    "version": "0.1.0.20251208032554"
  },
  "parameters": {
    "ContextFilePath": "h:/Franking/in.csv"
  }
}
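As a hedged illustration, this call can be scripted as shown below. This is a minimal sketch assuming a personal access token for the EU region endpoint above; the token and task ID are placeholders, not values from this article.

```python
import requests

# Placeholders: substitute your personal access token and task ID.
TOKEN = "<your-personal-access-token>"
TASK_ID = "<taskid>"

url = f"https://api.eu.cloud.talend.com/orchestration/executables/tasks/{TASK_ID}"

# The same body as shown above: only "parameters" carries the new context value.
body = {
    "name": "contextpath",
    "description": "CC",
    "workspaceId": "61167bef18d7d656bfae071d",
    "artifact": {
        "id": "689b5d04febbe74489779c31",
        "version": "0.1.0.20251208032554",
    },
    "parameters": {
        "ContextFilePath": "h:/Franking/in.csv",
    },
}

# PUT the updated task definition; the Authorization header carries the token.
response = requests.put(
    url,
    json=body,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```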
To maintain context values better in the long run for multiple jobs, create a Connection in Talend Cloud. This way, all contexts in the jobs can be updated by changing the file name in the Qlik Talend Management Console Connection, as opposed to running the API multiple times. For information on how to set up a connection, see Managing connections.
Microsoft will deprecate Change Data Capture (CDC) components by Attunity. See SQL Server Integration Services (SSIS) Change Data Capture Attunity feature deprecations | microsoft.com for details.
Will this affect Qlik Replicate?
This announcement does not affect Qlik Replicate. It is only relevant to the product "Change Data Capture (CDC) components by Attunity".
Microsoft distributes and provides primary support for this product. Qlik Replicate's functionality will remain the same.
If using the tap-shopify connector in Stitch for the orders stream, you may encounter the following error:
"An error occurred with the GraphQL API"
To resolve the error:
Why This Happens
This is a Shopify platform constraint, not a Stitch limitation.
The Shopify Bulk API has an important restriction: only one bulk operation per type (e.g., orders) can run at a time per shop.
This means that if a bulk job is already running, any new request of the same type will fail. Overlapping jobs from multiple Stitch connections or other platforms can trigger this error.
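If it helps, you can check whether a bulk operation is already running before starting a new one. The sketch below is illustrative only (the shop domain, API version, and access token are placeholders) and uses Shopify's documented currentBulkOperation query:

```python
import requests

# Placeholders: substitute your shop domain, a supported API version, and token.
SHOP = "<your-shop>.myshopify.com"
API_VERSION = "2024-01"
TOKEN = "<admin-api-access-token>"

query = """
{
  currentBulkOperation {
    id
    status
    createdAt
  }
}
"""

# Ask the GraphQL Admin API whether a bulk operation is already in flight.
response = requests.post(
    f"https://{SHOP}/admin/api/{API_VERSION}/graphql.json",
    json={"query": query},
    headers={"X-Shopify-Access-Token": TOKEN},
    timeout=30,
)
response.raise_for_status()
op = response.json()["data"]["currentBulkOperation"]
if op and op["status"] == "RUNNING":
    print(f"Bulk operation {op['id']} is still running; wait before starting another.")
```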
When You Might See This Error
You may encounter this message if:
Shopify Bulk API documentation:
https://shopify.dev/docs/api/usage/bulk-operations/queries#limitations
Bulk operations with the GraphQL Admin API
When Talend is installed on a Linux deployment server, the following two folders can allocate a lot of space in the /opt partition:
- /opt/TalendRemoteEngine
- /opt/TalendRuntime-8.0.1-R2025-02-RT
How can you purge these folders and keep the space they allocate in the /opt partition under control?
Cleaning Talend Remote Engine (RE) Cache and Logs
Cleaning Up KAR Files from Talend Runtime Server
Please follow the steps in this article to remove or clean up KAR files:
How to Remove / Clean Up KAR Files from Talend Runtime Server
For automated clean-up, refer to the following documentation: Understanding the Talend Remote Engine Clean-up Cycle
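To keep an eye on the space these folders consume before and after a clean-up, a small script can report their sizes. This is a purely illustrative sketch (using the folder paths listed above); it only reports sizes and does not delete anything:

```python
import os

# The two folders from this article; adjust the runtime folder name to your release.
FOLDERS = ["/opt/TalendRemoteEngine", "/opt/TalendRuntime-8.0.1-R2025-02-RT"]

def folder_size_bytes(path: str) -> int:
    """Walk the tree and sum file sizes, skipping anything that is not a regular file."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            if os.path.isfile(full):
                total += os.path.getsize(full)
    return total

for folder in FOLDERS:
    if os.path.isdir(folder):
        print(f"{folder}: {folder_size_bytes(folder) / 1024**2:.1f} MiB")
    else:
        print(f"{folder}: not found")
```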
After activating Stitch with SSO login using Azure, existing users can log in without issue. However, new users added to Azure AD cannot see Stitch.
A new user must log in through the My Apps Portal. This will trigger provisioning in Stitch and automatically create the user in Stitch (if configured correctly).
Other Identity Providers (IdPs) function similarly. These are their respective URLs:
Stitch uses Just-in-Time provisioning.
This means that a user is not created in Stitch automatically when added to Azure AD. Instead, a user is only provisioned in Stitch when they log in for the first time via SSO.
In Talend Studio 8.0 (version R2025-05), the tDataQualityRules component fails to refresh data quality rules defined in Talend Cloud Data Stewardship. When attempting to refresh, the UI remains unresponsive and does not load updated rules.
The Studio logs record an Unhandled event loop exception and a 400 Bad Request HTTP error when retrieving a rule artifact .jar file from the rule repository.
Caused by: org.springframework.web.client.HttpClientErrorException$BadRequest: 400 Bad Request
<Resource>/eu-central-1-minio-production/repositories/<UUID>/rules/1753780XXXXX/rules-1753780XXXXX.jar</Resource>
<Code>InvalidRequest</Code>
Please update the rule to the latest version in all sub-jobs and use TDC to download the .jar files.
In the URL field of the tDataQualityRules component's Basic settings, update the value to https://tdc.eu.cloud.talend.com.
The error is caused by a malformed or stale artifact path in the repository from which Talend Studio attempts to fetch the rule artifact .jar file.
It is a known issue that Studio cannot connect to the TDS Gateway (https://tds.us.cloud.talend.com/) to download the .jar files.
troubleshooting-tdataqualityrules-with-tds
To resolve login issues with your Qlik Stitch account:
Should this not resolve the issue, please do not hesitate to contact our Support team.
To prevent account lockout, refrain from submitting multiple password reset requests within a single day.
Sometimes, for database integrations such as PostgreSQL, MongoDB, etc., you may encounter the following error:
2021-03-03 15:55:20,221Z main - INFO Exit status is: Discovery succeeded. Tap failed with code -9. Target succeeded.
To help with the above scenario, we have two non-customer-facing settings we can set for you: the Incremental limit and the Itersize values.
Incremental Limit – Chunks full table and incremental queries into multiple explicit queries by adding a LIMIT X clause to each, reducing the load on the database when preparing result sets. By default, our full table and incremental queries do not issue a limit; adding an incremental limit forces the integration to request batches of X rows, in the hope that this alleviates the burden on the source and allows the query to complete.
Itersize – Adjusts the number of rows fetched per round trip of a streaming result set (defaults to 20k rows at a time). It controls the maximum number of rows fetched in a batch.
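Both settings are internal to Stitch and enabled by Support rather than by customers, but the underlying ideas can be illustrated. The sketch below is a conceptual illustration only, assuming a PostgreSQL source accessed with psycopg2; table and column names are placeholders, and this is not Stitch's actual implementation:

```python
import psycopg2

conn = psycopg2.connect("dbname=<db> user=<user> host=<host>")  # placeholders

# Itersize concept: a named (server-side) cursor streams rows in batches
# instead of materializing the whole result set in memory at once.
with conn.cursor(name="stream") as cur:
    cur.itersize = 20000          # rows fetched per round trip (default cited above)
    cur.execute("SELECT * FROM orders ORDER BY id")
    for row in cur:
        pass                      # process each row without loading everything at once

# Incremental-limit concept: chunk one large query into repeated LIMIT queries
# keyed on an ordered column, so the server never prepares a huge result set.
last_id, limit = 0, 100000
while True:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT * FROM orders WHERE id > %s ORDER BY id LIMIT %s",
            (last_id, limit),
        )
        rows = cur.fetchall()
    if not rows:
        break
    last_id = rows[-1][0]         # assumes the first column is the ordered key
```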
If you face this issue, please reach out to us, and we can set up alleviation for you.
How-to-contact-Qlik-Support
Generally, these -9 errors indicate that the tap is running out of memory when trying to perform an extraction. By default, our full table and incremental queries do not issue a limit. As a result, we wait for the query to complete on your server and in some instances, if the resulting data is quite large, it can terminate, resulting in a connection close error or something similar, like a memory error.
This article describes how to resolve the NPrinting connection verification error where the check "Qlik NPrinting webrenderer can reach Qlik Sense hub" fails.
This article aims to help you facilitate the most effective way to collaborate with Qlik Support. It defines your responsibilities as a partner, including reproducing issues, performing basic troubleshooting, and consulting the knowledge base or official documentation.
For the Qlik Talend guide, see Partner Guide: How to Prepare and Collaborate with Qlik Talend Technical Support.
Before contacting Qlik Data Analytics Technical Support, partners must complete the steps outlined in Qlik Responsible Partner Duties and should review the OEM/MSP Support Policy to understand the scope of support and the expectations on Partners.
Content
Identify which Qlik product, environment, product configuration, or system layer is experiencing the issue.
For example, if a task fails in Qlik Sense Enterprise on Windows, try running it directly in Qlik Sense Management Console to determine whether the problem is transient. Also, try running a different task to determine whether the problem is task-specific.
Similarly, if a reload fails in one environment (e.g., Production), inform the customer to try running it in another (e.g., Test) to confirm whether the issue is environment-specific.
Always include the exact product name, version, and patch (or SR) the customer is using.
Many issues are version-specific, and Support cannot accurately investigate the issue without this information.
If the product the customer is using has reached End of Life (EOL), please plan the upgrade. If the issue is reproducible on the latest version, please reach out to us so that we can investigate and determine whether it's a defect or working as designed.
For End of Life or End of Support information, see Product Lifecycle.
Partners are expected to recreate the customer’s environment (matching versions, configurations, and other relevant details) and attempt to reproduce the issue.
If you do not already have a test environment, please ensure one is set up. Having your own environment is essential for reproducing issues and confirming whether the same behavior occurs outside of the customer’s setup.
In addition, please test whether the issue occurs in the latest supported product version. In some cases, it may also be helpful to test in a clean environment to rule out local configuration issues. If the issue does not occur in the newer version or a clean setup, it may have already been resolved, and you can propose an upgrade as a solution.
See the Release Notes for the resolved issues.
Regardless of whether the issue could be reproduced, please include:
While pasting a portion of the log into the case comment can help highlight the main error, you are still required to attach the entire original log file (using, for example, FileCloud).
Support needs the full logs to understand the broader context and to confirm that the partial information is accurate and complete.
It is difficult to verify the root cause or provide reliable guidance without full logs.
Additionally:
Please do not simply forward or copy and paste the customer’s inquiry.
As a responsible partner, you are expected to perform an initial investigation. In your case submission, clearly describe:
Sharing this thought process matters: even if the issue remains unresolved, outlining what you already tried helps Support move forward faster and more effectively.
Attach all relevant files you received from the customer and personally reviewed during your investigation, as well as all relevant files you have used when reproducing the steps.
Providing both the customer’s files and your reproduction files allows Support to verify whether the same issue occurred under the same conditions, and to determine if the problem is reproducible, environment-specific, or isolated to the customer's configuration.
This includes (but is not limited to):
All support cases must be submitted using your official partner account, not the customer's account.
If you do not yet have a partner account, contact Qlik Customer Support to request access and receive the appropriate onboarding.
Review the support policy and set the case severity properly. See
Qlik Support Policy and SLAs
This template helps guide you on what to include and how to structure your case.
What happened? When did it happen? Where did it happen?
Clearly describe the event, including:
Only include what is needed based on the case type.
Examples:
List the files you have included in the case and what each one is.
Explain what investigation was done before contacting support.
Examples:
Thank you! We appreciate your cooperation in following these guidelines.
This ensures that your cases can be handled efficiently and escalated quickly when necessary.
Question I
Which Apache Tomcat versions are affected by Apache Tomcat vulnerability CVE-2025-24813, and what is the impact?
Apache Tomcat vulnerability CVE-2025-24813 allows remote code execution and/or information disclosure and/or malicious content to be added to uploaded files via a write-enabled Default Servlet.
Affected Apache Tomcat versions
Impact
The original implementation of partial PUT used a temporary file based on the user-provided file name and path, with the path separator replaced by ".".
If all of the following were true, a malicious user was able to view security sensitive files and/or inject content into those files:
If all of the following were true, a malicious user was able to perform remote code execution:
Question II
In which Apache Tomcat versions is this vulnerability fixed, and which Apache Tomcat versions does Talend currently support?
The vulnerability is fixed in the following Apache Tomcat versions:
Talend Supported Apache Tomcat Versions
See the Talend Help documentation: compatible-web-application-servers
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2025-24813
Apache Tomcat Vulnerabilities
Question
After upgrading Talend Administration Center (TAC) to TPS-5612 (R2024-12) or later, why do the four MetaServlet commands createBranch, branchExist, createTag, and deleteBranch no longer work, throwing an error like the one below?
{"error":"Unknown command 'branchExist'","returnCode":2}
With the release of Talend Administration Center patch TPS-5612, project references and Git access have been removed from Talend Administration Center to improve performance.
The following elements have been removed:
This is documented in the change notes here: https://help.qlik.com/talend/en-US/release-notes/8.0/r2024-12-administration-center
In Job Conductor, the job log file cleaner no longer runs, resulting in files piling up and filling the filesystem. After installing Talend V8 R2025-01, executionLogs and generatedJobs are not cleaned as expected.
Apply the TAC R2025-04 or later patch to solve this issue.
It is a known Jira issue that the file cleaner doesn't work as expected, caused by the regexp matcher failing to escape file names.
Jira Issue: QTAC-966
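The general failure mode (building a regular expression from a literal file name without escaping it) can be sketched in a few lines. This is an illustration of the bug class only, not TAC's actual code:

```python
import re

log_file = "job_1.0.log"   # "." and other regex metacharacters appear in file names

# Buggy: in the unescaped pattern, "." matches any character, so unrelated
# files such as "job_140.log" also match (the "." matches the "4").
buggy = re.compile(log_file)
print(bool(buggy.fullmatch("job_140.log")))      # True -- wrong

# Fixed: re.escape() makes the file name match literally.
fixed = re.compile(re.escape(log_file))
print(bool(fixed.fullmatch("job_140.log")))      # False -- correct
```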
Sometimes we need to store a column's before-image data in the target table. This is useful when we want to store both the before-image and the after-image of a column's values in the target table for downstream applications.
Under Apply Changes mode (with Store Changes mode turned off), add a new column in the table settings transformation (named "prevenient_salary" in this sample). The variable expression takes the form $BI__<columnName>, where $BI__ is a mandatory prefix (which instructs Replicate to capture the before-image data) and <columnName> is the original table column name. For example, if the original column name is SALARY, then $BI__SALARY holds the column's before-image data:
If the column SALARY value is updated from 22 to 33 on the source side, then before the UPDATE the target table row looks like:
After the UPDATE is applied to the target table, the row looks like:
In this sample the before-image value is 22, the after-image value is 33.
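As an illustrative view of the target row (assuming prevenient_salary was NULL before this first update):

Row state | SALARY | prevenient_salary
Before the UPDATE | 22 | NULL
After the UPDATE | 33 | 22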
Before-image data can also be used in filters; see the sample here.
You may have noticed that you are unable to identify deleted records using the "IsDeleted" field from Stitch's HubSpot integration in your data warehouse. How can you track deleted records with Stitch's HubSpot integration in the future?
HubSpot does offer a webhook (outlined on this page: https://developers.hubspot.com/docs/guides/api/app-management/webhooks) that can be configured to persist data about deletions of source records.
One method that has proven successful is to configure this webhook to direct data to our Incoming Webhook integration (https://www.stitchdata.com/docs/integrations/webhooks/stitch-incoming-webhooks) so that the deletion data can then be used to account for deletes in your destination-side queries.
This wouldn't necessarily help for records that have already been deleted, but could be helpful as a long-term solution moving forward.
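As a rough sketch of that pattern, a small relay could forward HubSpot deletion events to the Stitch Incoming Webhook URL. Everything below is illustrative: the webhook URL is a placeholder and the payload field names are assumptions to be mapped from your actual HubSpot webhook configuration, not a documented schema:

```python
import requests

# Placeholder: the unique URL Stitch generates for your Incoming Webhook integration.
STITCH_WEBHOOK_URL = "<your-stitch-incoming-webhook-url>"

def forward_deletion(event: dict) -> None:
    """Relay one HubSpot deletion event to Stitch as a flat JSON record.

    The field names here are illustrative; map them from the actual
    HubSpot webhook payload you configured.
    """
    record = {
        "object_id": event.get("objectId"),
        "object_type": event.get("subscriptionType"),
        "deleted_at": event.get("occurredAt"),
    }
    response = requests.post(STITCH_WEBHOOK_URL, json=record, timeout=30)
    response.raise_for_status()
```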
The field "IsDeleted" is offered by the integration, but is not functional in identifying deleted records as stated by Hubspot's API documentation.
"HubSpot's API does include an isdeleted field, though these fields are non-functional in practice. For example, HubSpot notes on this page of their documentation (https://developers.hubspot.com/docs/guides/api/crm/objects/deals) that the isDeleted field of the Deals object, "In practice...will always be false as deleted records will not appear in the API."
Since HubSpot's API will not return deleted records, Qlik Stitch integration will not be able to detect those records for replication in order to track the state of the "IsDeleted" field. We've since removed these fields from the most recent version of our HubSpot integration to reduce confusion.
Qlik Stitch will error out with the following message after changing Snowflake authentication from single-factor password to key-pair in the destination:
Cannot perform CREATE SCHEMA. This session does not have a current database. Call 'USE DATABASE', or use a qualified name.
Please try the two steps below:
-----BEGIN RSA PRIVATE KEY----- ... -----END RSA PRIVATE KEY-----
ALTER USER <username> SET DEFAULT_ROLE = STITCH;
This issue is probably caused by a lack of permissions.
https://docs.snowflake.com/en/sql-reference/sql/alter-user
Question
Is it possible to use deprecated versions of integrations in Qlik Stitch?
You can use deprecated and sunset versions of integrations in Stitch; however, they are not eligible for Qlik Stitch Support.
If there is a breaking change from the integration source, Qlik Stitch Engineers will not be able to check the code base to investigate, and you will need to upgrade to the latest version of the integration.
Please note that there are many differences between the version you are currently on and the latest version. It is highly recommended to review Qlik Stitch's changelog to see what has been changed and enhanced.
https://www.stitchdata.com/docs/changelog/
Question
How can you identify deleted records in Stitch's Shopify integration?
This is unfortunately not possible at this time, since Shopify does not offer a deletion identifier in its API to denote deleted records.
You may be encountering an error when attempting to authorize a Pipedrive integration:
"HTTP-error-code: 401, Error: unauthorized access".
You should verify the authorization of the credentials and possibly check with the Pipedrive team as well to confirm access.
This error suggests that the credentials provided were not authorized to access the object as expected.