Qlik offers a wide range of channels to assist you in troubleshooting, answering frequently asked questions, and getting in touch with our technical experts. In this article, we guide you through all available avenues to secure your best possible experience.
For details on our terms and conditions, review the Qlik Support Policy.
We're happy to help! Here's a breakdown of resources for each type of need.
| Support | Professional Services (*) |
| --- | --- |
| Reactively fixes technical issues and answers narrowly defined, specific questions. Handles administrative issues to keep the product up to date and functioning. | Proactively accelerates projects, reduces risk, and achieves optimal configurations. Delivers expert help for training, planning, implementation, and performance improvement. |
(*) Reach out to your Account Manager or Customer Success Manager.
Your first line of support: https://community.qlik.com/
Looking for content? Type your question into our global search bar:
Leverage the enhanced and continuously updated Knowledge Base to find solutions to your questions and best practice guides. Bookmark this page for quick access!
Subscribe to maximize your Qlik experience!
The Support Updates Blog
The Support Updates blog delivers important and useful Qlik Support information about end-of-product support, new service releases, and general support topics.
The Qlik Design Blog
The Design blog is all about products and Qlik solutions, such as scripting, data modelling, visual design, extensions, best practices, and more!
The Product Innovation Blog
By reading the Product Innovation blog, you will learn about what's new across all of the products in our growing Qlik product portfolio.
Q&A with Qlik
Live sessions with Qlik Experts in which we focus on your questions.
Techspert Talks
Techspert Talks is a free monthly webinar series that facilitates knowledge sharing.
Technical Adoption Workshops
Our in-depth, hands-on workshops allow new Qlik Cloud Admins to build alongside Qlik Experts.
Qlik Fix
Qlik Fix is a series of short videos with helpful solutions for Qlik customers and partners.
Suggest an idea, and influence the next generation of Qlik features!
Search & Submit Ideas
Ideation Guidelines
Get the full value of the community.
Register a Qlik ID:
Incidents are supported through our Chat, by clicking Chat Now on any Support Page across Qlik Community.
To raise a new issue, all you need to do is chat with us. With this, we can:
Log in to manage and track your active cases in the Case Portal.
Please note: to create a new case, it is easiest to do so via our chat (see above). Our chat will log your case through a series of guided intake questions.
When creating a case, you will be prompted to enter problem type and issue level. Definitions shared below:
Select Account Related for issues with your account, licenses, downloads, or payment.
Select Product Related for technical issues with Qlik products and platforms.
If your issue is account related, you will be asked to select a Priority level:
Select Medium/Low if the system is accessible but there are some functional limitations that are not critical to daily operations.
Select High if there are significant impacts on normal work or performance.
Select Urgent if there are major impacts on business-critical work or performance.
If your issue is product related, you will be asked to select a Severity level:
Severity 1: Qlik production software is down or not available, but not because of scheduled maintenance and/or upgrades.
Severity 2: Major functionality is not working in accordance with the technical specifications in documentation or significant performance degradation is experienced so that critical business operations cannot be performed.
Severity 3: Any error that is not a Severity 1 or Severity 2 issue. For more information, visit our Qlik Support Policy.
If you require a support case escalation, you have two options:
When other Support Channels are down for maintenance, please contact us via phone for high severity production-down concerns.
A collection of useful links.
Qlik Cloud Status Page
Keep up to date with Qlik Cloud's status.
Support Policy
Review our Service Level Agreements and License Agreements.
Live Chat and Case Portal
Your one stop to contact us.
Recent versions of Qlik connectors have an out-of-the-box value of 255 for their DefaultStringColumnLength setting.
This means that, by default, any string longer than 255 characters is truncated when imported from the database.
To import longer strings, specify a higher value for DefaultStringColumnLength.
This can be done in the connection definition, under Advanced Properties, as shown in the example below.
The maximum value that can be set is 2,147,483,647.
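For example, setting the property as follows would allow strings of up to 4,096 characters (4096 is only an illustrative value; choose a length that fits your data, up to the maximum above):
DefaultStringColumnLength=4096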
This article outlines how to handle DDL changes on a SQL Server table as part of the publication.
The steps in this article assume you use the task's default settings: full load and apply changes are enabled, full load is set to drop and recreate target tables, and DDL Handling Policy is set to apply alter statements to the target.
To achieve something simple, such as increasing the length of a column (without changing the data type), run an ALTER TABLE command on the source while the task is running, and it will be pushed to the target.
For example: ALTER TABLE dbo.address ALTER COLUMN city varchar(70);
To make more complicated changes to the table, such as:
Follow this procedure:
When connecting to Microsoft OneDrive using either Qlik Cloud Analytics or Qlik Sense Enterprise on Windows, shared files and folders are no longer visible.
The endpoint may intermittently work as expected, but it is in a degraded state until November 2026; see drive: sharedWithMe (deprecated) | learn.microsoft.com. In most cases, the API endpoint is no longer accessible due to this publicly documented degraded state.
Qlik is actively reviewing the situation internally (SUPPORT-7182).
However, given that the MS API endpoint has been deprecated by Microsoft, a Qlik workaround or solution is not certain or guaranteed.
Use a different type of shared storage, such as mapped network drives, Dropbox, or SharePoint.
Microsoft deprecated the /me/drive/sharedWithMe API endpoint.
SUPPORT-7182
To check where your tasks are running, open your task and refer to the right-hand side, where artifact details can be found under Configuration.
In this instance, the Binary type is displayed as "Talend Runtime" because it is a REST type artifact, indicating that it will be deployed and executed on Talend Runtime.
If you are using Remote Engine to execute your task, it will be displayed in the "Processor" section under Configuration. In this instance, it shows that Remote Engine version 2.13.13 will be used to run the task.
Upgrading from Talend 7.3 to Talend 8 (R2025-10) brings in the Camel 4 update, which changes the behavior of the cSetHeader component. In Talend 7.x, JSONPath evaluation generated a correctly formatted JSON header; in Talend 8, the same configuration produces a non-JSON representation in which double quotes are stripped from the resulting object.
For example:
Source Input
{
  "META": { "Correlation_ID": "B", "BGM_ACTION_CODE": "A" },
  "STATUS": { "ADD_R_UPDATE_SUCCEEDED": "Y" },
  "ERR": [ { "field": "ADD", "keyword": "custOrgMustHaveAdd", "entity": "ORGANI" } ]
}
cSetHeader Component Setting
| Name | "ERR" |
| Language | jsonPath |
| Value | "$.ERR[*]" |
Output in Talend 7 (Correct)
[{"field":"ADD","keyword":"custOrgMustHaveAdd","entity":"ORGANI"}]
Output in Talend 8 (Invalid JSON, double quotes stripped)
[{field=ADD, keyword=custOrgMustHaveAdd, entity=ORGANI}]
Install the latest R2025-11 (or newer) update for both Talend Studio and Talend Runtime, where the corrected behavior has been implemented.
This behavior stems from changes in the underlying JSON processor, specifically the shift from json-smart to JacksonJsonProvider, which modifies how JSONPath results are serialized.
The issue corresponds to the upstream Camel defect reported here:
CAMEL-16389 | issues.apache.org
Extracting data from an SAP BW InfoProvider / ADSO with two values in a WHERE clause returns 0 lines.
Example:
The following is a simple standard ADSO with two InfoObjects ('0BPARTNER', '/BIC/TPGW8001') and one field ('PARTNUM'). All are of the CHAR data type.
In the script, we used [PARTNUM] = A, [PARTNUM] = B in the WHERE clause, which returned 0 lines.
However, when only one value is used in the WHERE clause, it works as expected:
From GTDIPCTS2
Where ([PARTNUM] = A);
When the InfoObject [TPGW8001] is used instead of the field [PARTNUM] in the WHERE clause, it also works as expected:
From GTDIPCTS2
Where ([TPGW8001] = "1000", [TPGW8001] = "1000");
Upgrade to Direct Data Gateway version 1.7.8.
Defect SUPPORT-5101.
SUPPORT-5101
When using Google Cloud Pub/Sub as a target and configuring Data Message Publishing to Separate topic for each table, the Pub/Sub topic may be unexpectedly dropped if a DROP TABLE DDL is executed on the source. This occurs even if the Qlik Replicate task’s DDL Handling Policy When source table is dropped is set to Ignore DROP.
This issue has been fixed in the following builds:
To apply the fix, upgrade Qlik Replicate to one of the listed versions or any later release.
A product defect in versions earlier than 2025.5 SP3 causes the Pub/Sub topic to be dropped despite the DDL policy configuration.
Qlik Talend Administration Center shows a Process Message Port WARNING, and the JobServer is not listening on port 8555 (the default port).
To resolve this, set the following parameter to true in the JobServer configuration file TalendJobServer.properties:
org.talend.remote.jobserver.server.TalendJobServer.ENABLED_PROCESS_MESSAGE=true
This issue is caused by Talend Administration Center Patch 8.0.2.20250129_1317, which sets the parameter org.talend.remote.jobserver.server.TalendJobServer.ENABLED_PROCESS_MESSAGE to false in the JobServer configuration file TalendJobServer.properties.
Qlik Talend Administration Center may report a JobServer with TPS-6012 installed as having an unavailable or misconfigured status.
In the error log, the following exception appears:
Exception in thread "t_ForceUpdateCacheForServer_11" java.lang.NullPointerException: Cannot invoke "org.talend.utils.ProductVersion.compareTo(org.talend.utils.ProductVersion)" because "productVersion" is null
at org.talend.administrator.scheduler.business.wrapper.jobserver.JobServerWrapper.isSudo(JobServerWrapper.java:316)
In the branding.properties file, change
patchName=Patch_20251028_TPS-6012_v1-8.0.1
to
patchName=8.0.1.Patch_20251028_TPS-6012_v1-8.0.1
The format of the version information provided by JobServer has changed.
SUPPORT-7132
This article explains how to extract changes from a Change Store by using the Qlik Cloud Services connector in Qlik Automate and how to sync them to a database.
The example will use a MySQL database, but can easily be modified to use other database connectors supported in Qlik Automate, such as MSSQL, Postgres, AWS DynamoDB, AWS Redshift, Google BigQuery, Snowflake.
The article also includes:
Content
Here is an example of an empty database table for a change store with:
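For illustration, assuming a change store keyed on a single OrderID field and tracking Status and Comment fields, such a MySQL table could look like the sketch below; all names except userId and updatedAt are assumptions to adapt to your own change store:
CREATE TABLE changestore_sync (
  OrderID   VARCHAR(64) NOT NULL,  -- primary key field of the change store (assumed name)
  Status    VARCHAR(255),          -- example tracked field (assumed name)
  Comment   TEXT,                  -- example tracked field (assumed name)
  userId    VARCHAR(64),           -- ID of the user who made the change
  updatedAt DATETIME,              -- timestamp of the change
  PRIMARY KEY (OrderID)
);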
Run the automation manually by clicking the Run button in the automation editor and review that you have records showing in the MySQL table:
Currently, there is no incremental version of the Get Change Store History block. While this is on our roadmap, the automation from this article can be extended to do incremental loads by first retrieving the highest updatedAt value from the MySQL table. The steps below explain how the automation can be extended:
SELECT MAX(updatedAt) FROM <your database table>
The solution documented in the previous section will execute the Upsert Record block once for each cell with changes in the change store. This may create too much traffic for some use cases. To address this, the automation can be extended to support bulk operations and insert multiple records in a single database operation.
The approach is to transform the output of the List Change Store History block from a nested list of changes into a list of records that contains the changes grouped by primary key, userId, and updatedAt timestamp.
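For illustration, assuming the hypothetical changestore_sync table sketched earlier, the grouped records could then be written with a single multi-row upsert along these lines:
INSERT INTO changestore_sync (OrderID, Status, Comment, userId, updatedAt)
VALUES
  ('1001', 'Shipped', 'OK', 'user-abc', '2024-05-01 10:15:00'),
  ('1002', 'Pending', NULL, 'user-abc', '2024-05-01 10:15:00'),
  ('1003', 'Shipped', 'Recheck', 'user-def', '2024-05-01 10:20:00')
ON DUPLICATE KEY UPDATE
  Status = VALUES(Status),
  Comment = VALUES(Comment),
  userId = VALUES(userId),
  updatedAt = VALUES(updatedAt);
A single statement like this replaces many individual Upsert Record calls and reduces database round trips.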
See the attached automation example: Automation Example to Bulk Extract Change Store History to MySQL Incremental.json.
The provided automations will require additional configuration after being imported, such as changing the store, database, and primary key setup.
Automation Example to Extract Change Store History to MySQL Incremental.json
Automation Example to Bulk Extract Change Store History to MySQL Incremental.json
If field names in the change store don't match the database (or another destination), the Replace Field Names In List block can be used to translate the field names from one system to another.
To add a more readable parameter to track the user who made changes, the Get User block from the Qlik Cloud Services connector can be used to map User IDs into email addresses or names.
A user's name might not be sufficient as a unique identifier. Instead, combine it with a user ID or user email.
Add a button chart object to the sheet that contains the Write Table, allowing users to start the automation from within the Qlik app. See How to run an automation with custom parameters through the Qlik Sense button for more information.
Environment
To use key-pair authentication (and key-pair rotation) with Snowflake, you need to generate a private key file.
For more information, refer to the Snowflake documentation: key-pair-auth | docs.snowflake.com
How do you create a key file for Qlik Talend Data Catalog to use for key-pair authentication to Snowflake?
Since Qlik Talend Data Catalog currently only supports PKCS#8 version 1 encryption with PBE-SHA1-3DES (the -v1 option), use the sample command below to generate the key file with OpenSSL:
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -v1 PBE-SHA1-3DES -out rsa_key.p8
TALMM-6182
#Talend Data Catalog
The following loading error occurs with a Snowflake destination:
Insufficient privileges to operate on database '<DATABASE_NAME>'
Follow the documentation connecting-a-snowflake-data-warehouse-to-stitch#create-database-and-use (Qlik Stitch Documentation) to create the database and user for Snowflake.
Further Troubleshooting
If you still see the error after following the setup guide, try the steps below.
Verify that the database exists:
SHOW DATABASES LIKE '<DATABASE_NAME>';
Review the privileges granted to the Stitch role:
SHOW GRANTS TO ROLE STITCH_ROLE;
Make sure the Stitch user's default role is set to the Stitch role:
ALTER USER STITCH_USER SET DEFAULT_ROLE = STITCH_ROLE;
If any required privileges are missing, grant them using an ACCOUNTADMIN role:
GRANT USAGE ON DATABASE <DATABASE_NAME> TO ROLE STITCH_ROLE;
GRANT CREATE SCHEMA ON DATABASE <DATABASE_NAME> TO ROLE STITCH_ROLE;
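If Stitch must load into an existing schema that the role does not own, the role may also need table-creation rights on that schema, for example (the schema name is a placeholder):
GRANT CREATE TABLE ON SCHEMA <DATABASE_NAME>.<SCHEMA_NAME> TO ROLE STITCH_ROLE;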
This error occurs because the Stitch Snowflake user does not have the required privileges on the target database.
Snowflake access control is role-based. Even if a database exists and is accessible under another role (e.g., ACCOUNTADMIN), the Stitch role must explicitly be granted privileges to use and create objects within that database.
When the Stitch role does not own the database and lacks privileges such as USAGE, CREATE SCHEMA, or CREATE TABLE, the Stitch integration fails to load data and raises an insufficient privileges error.
A Job design is shown below: a tSetKeystore component sets the keystore file in the PreJob, followed by a tMysqlConnection component that establishes a MySQL connection. However, the MySQL connection fails.
Nevertheless, when the order of the components is changed as demonstrated below, the MySQL connection succeeds.
To address this issue, you can choose from the following solutions without altering the order of the tSetKeyStore and tMysqlConnection components.
tSetKeyStore sets values for javax.net.ssl properties, thereby affecting the subsequent components. Most recent MySQL versions use SSL connections by default. Since the Java SSL environment has been modified, the MySQL JDBC driver inherits these changes from tSetKeyStore, which can potentially impact the connection.
This article briefly introduces how to configure audit.properties to generate the audit log locally.
==audit.properties==
log.appender=file,console
appender.file.path=c:/tmp/audit/audit.json
Users assigned the Operator and Integration Developer roles may encounter the following error when attempting to connect to Talend Studio and fetch the license from Talend Management Console:
401 Authentication credentials were missing or incorrect
This issue occurs even though the user has the necessary roles to access Talend Studio. The root cause is typically missing Studio-specific permissions within the Operator role configuration.
To resolve the issue, update the Operator role permissions in Talend Management Console:
The Operator role does not have the required Studio and Studio - Develop permissions enabled. Without these permissions, authentication fails when Talend Studio attempts to validate credentials against Talend Management Console.
Symptoms
In a Talend ESB Runtime environment, deploying new routes fails after a Nexus/JFrog password update, although deploying and undeploying existing routes still works.
Ensure the correct login/password credentials are configured in ${M2_HOME}/conf/settings.xml
If you have updated the Nexus/JFrog credentials and modified the configuration in the known locations but the issue still persists, additional locations are still managing the authentication for JAR downloads.
Since org.ops4j.pax.url.mvn.cfg relies on ${M2_HOME}/conf/settings.xml for Maven authentication, that file also needs to be reviewed.
Check locations:
runtime\data\cache\org.eclipse.osgi\19\data\state.json
runtime\etc\org.ops4j.pax.url.mvn.cfg (org.ops4j.pax.url.mvn.repositories)
Location initially missed:
${M2_HOME}/conf/settings.xml, where ${M2_HOME} is the path to the Maven installation directory.
A Job design is presented below:
tSetKeystore: sets the Kafka truststore file.
tKafkaConnection, tKafkaInput: connect to the Kafka cluster as a consumer and consume messages.
However, when the Job runs, an exception occurs in the tKafkaInput component:
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
Make sure to execute the tSetKeyStore component prior to the Kafka components so that the Job can locate the certificates required for the Kafka connection. To achieve this, connect the tSetKeystore component to tKafkaConnection using an OnSubjobOK link, as demonstrated below:
For more detailed information on trigger connectors, specifically OnSubjobOK and OnComponentOK, please refer to this KB article: What is the difference between OnSubjobOK and OnComponentOK?.
We often see new users implement Qlik Stitch in under five minutes. While the user experience for adding a data source and a destination is simple, there is a lot of complexity behind the scenes.
Let's pull back the curtain and see how each record gets from its source to its destination through Qlik Stitch's data pipeline.
The journey typically begins in a SaaS application or database. From there, data is either pushed to Stitch via our API or a webhook, or pulled on a schedule by the Singer-based replication engine that Qlik Stitch runs against data sources like APIs, databases, and flat files.
The image below outlines the internal architecture.
The data’s next stop is the Import API, a Clojure web service that accepts JSON and Transit, either point-at-a-time or in large batches. The Import API does a validation check and an authentication check on the request’s API token before writing the data to a central Apache Kafka queue.
At this point, Qlik Stitch has accepted the data. Qlik's system is architected to meet our most important service-level target: don’t lose data. To meet this goal, we replicate our Kafka cluster across three different data centers and require each data point to be written to two of them before it is accepted. Should the write fail, the requester will try again until it’s successful.
Under normal conditions, data is read off the queue seconds later by the streamery, a multithreaded Clojure application that writes the data to files on S3 in batches, separated according to the database tables the data is destined for. We have the capacity to retain data in Kafka for multiple days to ensure nothing is lost in the event that downstream processing is delayed. The streamery cuts batches after reaching either a memory limit or an amount of time elapsed since the last batch. Its low-latency design aims to maximize throughput while guarding against data loss or data leaking between data sets.
Batches that have been written to S3 enter the spool, which is a queue of work waiting to be processed by one of our loaders. These Clojure applications read data from S3 and do any processing necessary (such as converting data into the appropriate data types and structure for the destination) before loading the data into the customer’s data warehouse. We currently have loaders for Redshift, Postgres, BigQuery, Snowflake, and S3. Each is a separate codebase and runtime because of the variation in preprocessing steps required for each destination. Operating them separately also allows each to scale and fail independently, which is important when one of the cloud-based destinations has downtime or undergoes maintenance.
All of this infrastructure allows us to process more than a billion records per day, and allows our customers to scale their data volumes up or down by more than 100X at any time. Qlik Stitch customers don’t need to worry about any of this, however. They just connect a source, connect a destination, and then let Qlik Stitch worry about making sure the data ends up where it needs to be.
For additional information, reference the official Stitch documentation on Getting Started.