Qlik offers a wide range of channels to assist you in troubleshooting, answering frequently asked questions, and getting in touch with our technical experts. In this article, we guide you through all available avenues to secure your best possible experience.
For details on our terms and conditions, review the Qlik Support Policy.
We're happy to help! Here's a breakdown of resources for each type of need.
| Support | Professional Services (*) |
| --- | --- |
| Reactively fixes technical issues as well as answers narrowly defined, specific questions. Handles administrative issues to keep the product up to date and functioning. | Proactively accelerates projects, reduces risk, and achieves optimal configurations. Delivers expert help for training, planning, implementation, and performance improvement. |
(*) reach out to your Account Manager or Customer Success Manager
Your first line of support: https://community.qlik.com/
Looking for content? Type your question into our global search bar:
Leverage the enhanced and continuously updated Knowledge Base to find solutions to your questions and best practice guides. Bookmark this page for quick access!
Subscribe to maximize your Qlik experience!
The Support Updates Blog
The Support Updates blog delivers important and useful Qlik Support information about end-of-product support, new service releases, and general support topics.
The Qlik Design Blog
The Design blog is all about product and Qlik solutions, such as scripting, data modelling, visual design, extensions, best practices, and more!
The Product Innovation Blog
By reading the Product Innovation blog, you will learn about what's new across all of the products in our growing Qlik product portfolio.
Q&A with Qlik
Live sessions with Qlik Experts in which we focus on your questions.
Techspert Talks
Techspert Talks is a free monthly webinar series that facilitates knowledge sharing.
Technical Adoption Workshops
Our in-depth, hands-on workshops allow new Qlik Cloud Admins to build alongside Qlik Experts.
Qlik Fix
Qlik Fix is a series of short videos with helpful solutions for Qlik customers and partners.
Suggest an idea, and influence the next generation of Qlik features!
Search & Submit Ideas
Ideation Guidelines
Get the full value of the community.
Register a Qlik ID:
Incidents are supported through our Chat, by clicking Chat Now on any Support Page across Qlik Community.
To raise a new issue, all you need to do is chat with us. With this, we can:
Log in to manage and track your active cases in the Case Portal.
Please note: to create a new case, it is easiest to do so via our chat (see above). Our chat will log your case through a series of guided intake questions.
When creating a case, you will be prompted to enter problem type and issue level. Definitions shared below:
Select Account Related for issues with your account, licenses, downloads, or payment.
Select Product Related for technical issues with Qlik products and platforms.
If your issue is account related, you will be asked to select a Priority level:
Select Medium/Low if the system is accessible, but there are some functional limitations that are not critical in the daily operation.
Select High if there are significant impacts on normal work or performance.
Select Urgent if there are major impacts on business-critical work or performance.
If your issue is product related, you will be asked to select a Severity level:
Severity 1: Qlik production software is down or not available, but not because of scheduled maintenance and/or upgrades.
Severity 2: Major functionality is not working in accordance with the technical specifications in documentation or significant performance degradation is experienced so that critical business operations cannot be performed.
Severity 3: Any error that is not a Severity 1 or Severity 2 issue. For more information, visit our Qlik Support Policy.
If you require a support case escalation, you have two options:
When other Support Channels are down for maintenance, please contact us via phone for high severity production-down concerns.
A collection of useful links.
Qlik Cloud Status Page
Keep up to date with Qlik Cloud's status.
Support Policy
Review our Service Level Agreements and License Agreements.
Live Chat and Case Portal
Your one stop to contact us.
To use key-pair authentication to Snowflake, you generally need to generate a key file; Snowflake supports both key-pair authentication and key-pair rotation.
For more information, please refer to documentation about: key-pair-auth | docs.snowflake.com
How do you create a key file for Qlik Talend Data Catalog to use for key-pair authentication to Snowflake?
Since Qlik Talend Data Catalog currently only supports PKCS#8 version 1 encryption with PBE-SHA1-3DES (the -v1 option), use the sample command below to generate the keystore file via OpenSSL:
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -v1 PBE-SHA1-3DES -out rsa_key.p8
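If you have not yet registered the key with your Snowflake user, the usual follow-up steps (per the Snowflake documentation linked above; the user name and key contents are placeholders) are to derive the public key and assign it to the user:
openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
ALTER USER <USER_NAME> SET RSA_PUBLIC_KEY='<contents of rsa_key.pub without the PEM header and footer lines>';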
TALMM-6182
#Talend Data Catalog
The following loading error is encountered with a Snowflake destination:
Insufficient privileges to operate on database '<DATABASE_NAME>'
Follow the documentation connecting-a-snowflake-data-warehouse-to-stitch#create-database-and-use (Qlik Stitch Documentation) to create the database and user for Snowflake.
Further Troubleshooting
If you still see the error after following the setup guide, work through the following checks:
Confirm that the database exists and is visible:
SHOW DATABASES LIKE '<DATABASE_NAME>';
Review the privileges granted to the Stitch role:
SHOW GRANTS TO ROLE STITCH_ROLE;
Ensure the Stitch user's default role is set:
ALTER USER STITCH_USER SET DEFAULT_ROLE = STITCH_ROLE;
If any required privileges are missing, grant them using an ACCOUNTADMIN role:
GRANT USAGE ON DATABASE <DATABASE_NAME> TO ROLE STITCH_ROLE;
GRANT CREATE SCHEMA ON DATABASE <DATABASE_NAME> TO ROLE STITCH_ROLE;
This error occurs because the Stitch Snowflake user does not have the required privileges on the target database.
Snowflake access control is role-based. Even if a database exists and is accessible under another role (e.g., ACCOUNTADMIN), the Stitch role must explicitly be granted privileges to use and create objects within that database.
When the Stitch role does not own the database and lacks privileges such as USAGE, CREATE SCHEMA, or CREATE TABLE, the Stitch integration fails to load data and raises an insufficient privileges error.
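As an illustration, a typical set of grants for the Stitch role might look like the following (the warehouse, database, role, and user names are placeholders; adjust them to your destination setup):
GRANT USAGE ON WAREHOUSE <WAREHOUSE_NAME> TO ROLE STITCH_ROLE;
GRANT USAGE ON DATABASE <DATABASE_NAME> TO ROLE STITCH_ROLE;
GRANT CREATE SCHEMA ON DATABASE <DATABASE_NAME> TO ROLE STITCH_ROLE;
GRANT ROLE STITCH_ROLE TO USER STITCH_USER;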
A Job design is shown below, using a tSetKeystore component in the preJob to set the keystore file, followed by a tMysqlConnection component to establish a MySQL connection. However, the MySQL connection fails.
By contrast, when the order of the components is changed as demonstrated below, the MySQL connection succeeds.
To address this issue, you can choose from the following solutions without altering the order of the tSetKeyStore and tMysqlConnection components.
tSetKeyStore sets values for javax.net.ssl properties, thereby affecting the subsequent components. Most recent MySQL versions use SSL connections by default. Since the Java SSL environment has been modified, the MySQL JDBC driver inherits these changes from tSetKeyStore, which can potentially impact the connection.
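For illustration, the properties involved are the standard javax.net.ssl.* settings (trustStore, trustStorePassword, keyStore, keyStorePassword). If SSL is not required for the MySQL connection, one possible workaround, offered here as an assumption rather than as one of the article's listed solutions, is to disable SSL explicitly in the connection's additional JDBC parameters:
useSSL=false (or sslMode=DISABLED with MySQL Connector/J 8.x)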
This article briefly describes how to configure audit.properties to generate the audit log locally.
audit.properties:
log.appender=file,console
appender.file.path=c:/tmp/audit/audit.json
Users assigned the Operator and Integration Developer roles may encounter the following error when attempting to connect to Talend Studio and fetch the license from Talend Management Console:
401 Authentication credentials were missing or incorrect
This issue occurs even though the user has the necessary roles to access Talend Studio. The root cause is typically missing Studio-specific permissions within the Operator role configuration.
To resolve the issue, update the Operator role permissions in Talend Management Console:
The Operator role does not have the required Studio and Studio - Develop permissions enabled. Without these permissions, authentication fails when Talend Studio attempts to validate credentials against Talend Management Console.
Symptoms
In a Talend ESB Runtime environment, deploying new routes fails after a Nexus/JFrog password update, although deploying and undeploying existing routes still works.
Ensure the correct login/password credentials are configured in ${M2_HOME}/conf/settings.xml
If the issue still persists after you have updated the Nexus/JFrog credentials and modified the configuration at the known locations, this indicates that additional locations are still managing the authentication for JAR downloads. Since org.ops4j.pax.url.mvn.cfg relies on ${M2_HOME}/conf/settings.xml for Maven authentication, that file also needs to be reviewed.
Check locations:
runtime\data\cache\org.eclipse.osgi\19\data\state.json
runtime\etc\org.ops4j.pax.url.mvn.cfg (org.ops4j.pax.url.mvn.repositories)
Location initially missed:
${M2_HOME}/conf/settings.xml, where ${M2_HOME} is the path to the Maven installation directory.
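For reference, the repository credentials in ${M2_HOME}/conf/settings.xml live in a <servers> block whose <id> matches the repository id used by the runtime's Maven configuration (a sketch with placeholder values):
<settings>
  <servers>
    <server>
      <id>nexus</id>
      <username>deployment_user</username>
      <password>new_password</password>
    </server>
  </servers>
</settings>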
A Job design is presented below:
tSetKeystore: set the Kafka truststore file.
tKafkaConnection, tKafkaInput: connect to the Kafka cluster as a consumer and read messages.
However, while running the Job, the following exception occurs in the tKafkaInput component:
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
Make sure to execute the tSetKeystore component prior to the Kafka components so that the Job can locate the certificates required for the Kafka connection. To achieve this, connect the tSetKeystore component to tKafkaConnection using an OnSubjobOK link, as demonstrated below:
For more detailed information on trigger connectors, specifically OnSubjobOK and OnComponentOK, please refer to this KB article: What is the difference between OnSubjobOK and OnComponentOK?.
We often see new users implement Qlik Stitch in under five minutes. While the user experience for adding a data source and a destination is simple, there is a lot of complexity behind the scenes.
Let’s pull the curtains back and see how each record gets from its source to its destination through Qlik Stitch’s data pipeline.
The journey typically begins in a SaaS application or database, where it’s either pushed to Stitch via our API or through a webhook, or pulled on a schedule by the Singer-based replication engine that Qlik Stitch runs against data sources like APIs, databases, and flat files.
The image below outlines the internal architecture. For a full view, click on the image or download it.
The data’s next stop is the Import API, a Clojure web service that accepts JSON and Transit, either point-at-a-time or in large batches. The Import API does a validation check and an authentication check on the request’s API token before writing the data to a central Apache Kafka queue.
At this point, Qlik Stitch has accepted the data. Qlik's system is architected to meet our most important service-level target: don’t lose data. To meet this goal, we replicate our Kafka cluster across three different data centers and require each data point to be written to two of them before it is accepted. Should the write fail, the requester will try again until it’s successful.
Under normal conditions, data is read off the queue seconds later by the streamery, a multithreaded Clojure application that writes the data to files on S3 in batches, separated according to the database tables the data is destined for. We have the capacity to retain data in Kafka for multiple days to ensure nothing is lost in the event that downstream processing is delayed. The streamery cuts batches after reaching either a memory limit or an amount of time elapsed since the last batch. Its low-latency design aims to maximize throughput while guarding against data loss or data leaking between data sets.
Batches that have been written to S3 enter the spool, which is a queue of work waiting to be processed by one of our loaders. These Clojure applications read data from S3 and do any processing necessary (such as converting data into the appropriate data types and structure for the destination) before loading the data into the customer’s data warehouse. We currently have loaders for Redshift, Postgres, BigQuery, Snowflake, and S3. Each is a separate codebase and runtime because of the variation in preprocessing steps required for each destination. Operating them separately also allows each to scale and fail independently, which is important when one of the cloud-based destinations has downtime or undergoes maintenance.
All of this infrastructure allows us to process more than a billion records per day, and allows our customers to scale their data volumes up or down by more than 100X at any time. Qlik Stitch customers don’t need to worry about any of this, however. They just connect a source, connect a destination, and then let Qlik Stitch worry about making sure the data ends up where it needs to be.
For additional information, reference the official Stitch documentation on Getting Started.
This article addresses the error encountered during extraction when using log-based incremental replication for MySQL integration:
[main] tap-hp-mysql.sync-strategies.binlog - Fatal Error Occurred - <ColumnName> - decimal SQL type for value type class clojure.core$val is not implemented.
There are two recommended approaches:
Option 1: Enable Commit Order Preservation
Run the following command in your MySQL instance:
SET GLOBAL replica_preserve_commit_order = ON;
Then, reset the affected table(s) through the integration settings.
Option 2: Validate Replication Settings
Ensure that either replica_preserve_commit_order (MySQL 8.0+) or slave_preserve_commit_order (older versions) is enabled. These settings maintain commit order on multi-threaded replicas, preventing gaps and inconsistencies.
Run:
SHOW GLOBAL VARIABLES LIKE 'replica_preserve_commit_order';
Expected Output:
| Variable_name | Value |
| --- | --- |
| replica_preserve_commit_order | ON |
For older versions:
SHOW GLOBAL VARIABLES LIKE 'slave_preserve_commit_order';
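To enable it on these older versions, the equivalent command is shown below (note that on some releases the replication threads must be stopped before this variable can be changed):
SET GLOBAL slave_preserve_commit_order = ON;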
For more information, reference MySQL Documentation
replication-features-transaction-inconsistencies | dev.mysql.com
When using log-based incremental replication, Stitch reads changes from MySQL’s binary log (binlog). This error occurs because the source database provides events out of order, which leads to mismatched data types during extraction. In this case, the extraction encounters a decimal SQL type where the value type is unexpected.
Why does this happen?
This article explains whether changing integration credentials or the host address for a database integration requires an integration reset in Stitch. It will also address key differences between key-based incremental replication and log-based incremental replication.
Updating credentials (e.g., username or password) does not require an integration reset. Stitch will continue replicating data from the last saved bookmark values for your tables according to the configured replication method.
Changing the host address is more nuanced and depends on the replication method:
Important:
If the database name changes, Stitch treats it as a new database:
| Change Type | Key-Based Replication | Log-Based Replication |
| --- | --- | --- |
| Credentials | No reset required | No reset required |
| Host Address | No reset (if search path unchanged) | Reset required |
| Database Name | Reset required | Reset required |
MySQL extraction encounters the following error:
FATAL [main] tap-hp-mysql.main - Fatal Error Occurred - YEAR
Rows with invalid year values can be identified with a condition such as:
YEAR(date_column) < 1 OR YEAR(date_column) > 9999
If invalid dates are found (for example, zero dates such as 0000-00-00), adjust the SQL mode or replace them with valid dates.
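For example, a query along these lines locates the offending rows (the table and column names are placeholders):
SELECT * FROM <table_name> WHERE YEAR(date_column) < 1 OR YEAR(date_column) > 9999;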
This error occurs when the MySQL integration attempts to process a DATE, DATETIME, or TIMESTAMP field containing an invalid year value. Common examples include 0 or any year outside the supported range. The error message typically states "Fatal Error Occurred" followed by details about the invalid year or month value.
The underlying Python library used by the Stitch MySQL integration enforces strict date parsing rules. It only supports years in the range 0001–9999. If the source data contains values less than 0001 or greater than 9999, the extraction will error. This issue often arises from legacy data, zero dates (0000-00-00), or improperly validated application inserts.
Any column selected for replication that contains invalid date values will trigger this error.
Loading Error Across All Destinations
When Stitch tries to insert data into a destination table and encounters a NOT NULL constraint violation, the error message typically looks like:
ERROR: null value in column "xxxxx" of relation "xxxxx" violates not-null constraint
or
ERROR: null value in column "xxxxx" violates not-null constraint
Key Points
_sdc_level_id columns (for example, _sdc_level_1_id) help form composite keys for nested records and are used to associate child records with their parent. Stitch generates these values sequentially for each unique record. Combined with the _sdc_source_key_[name] columns, they create a unique identifier for each row. Depending on nesting depth, multiple _sdc_level_id columns may exist in subtables.
Recommended Approach
Pause the integration, drop the affected table(s) from the destination, and reset the table from the Stitch UI. If you plan to change the PK on the table, you must either:
If residual data in the destination is blocking the load, manual intervention may be required. Contact Qlik Support if you need assistance clearing this data.
Primary Key constraints enforce both uniqueness and non-nullability. If a null value exists in a PK field, the database rejects the insert because Primary Keys cannot contain nulls.
If you suspect your chosen PK field may contain nulls, you can:
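As one illustration, a quick check for NULL values in a candidate key column (hypothetical table and column names) is:
SELECT COUNT(*) FROM <table_name> WHERE <pk_column> IS NULL;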
The NetSuite integration encounters the following extraction error:
[main] tap-netsuite.core - Fatal Error Occured - Request failed syncing stream Transaction, more info: :data {:messages ({:code {:value UNEXPECTED_ERROR}, :message "An unexpected error occurred. Error ID: <ID>", :type {:value ERROR}})}
The extraction error message provides limited context beyond the NetSuite Error ID. It is recommended to reach out to NetSuite Support with the Error ID for further elaboration and context.
This error occurs when NetSuite’s API returns an UNEXPECTED_ERROR during pagination while syncing a stream. It typically affects certain records within the requested range and is triggered by problematic records or internal processing issues during large result set pagination.
Potential contributing factors include
Talend Studio opens very quickly, but when attempting to open a Job, the process becomes very slow, and the following error messages are displayed in the .log file:
: !STACK 0 java.lang.IllegalStateException: java.util.concurrent.TimeoutException: Timeout when waiting for component server initialization: -Dtalend.studio.sdk.startup.timeout=2
Prevent the infosec software from redirecting the port that the component server binds to.
The cause is a feature, enabled by the operations manager in the cybersecurity software, that redirects the request.
After one user was deleted and a new one created, the newly created user could not see campaigns in the Talend Data Stewardship UI.
How do you reassign Talend Data Stewardship campaigns to new owner accounts when the Talend Data Stewardship UI cannot list the campaign items?
The campaigns need to be remapped to the new owners by updating the owners field of the tds_campaigns collection:
/opt/TalendHybrid-8.0.1/mongodb/bin/mongo tds -u tds-user -p duser
use tds
db.tds_campaigns.updateMany({}, { $set:{owners: ["be0d6211-41e9-48ba-a711-3427c2c3b912","5948a32c-9f22-4bee-a0f0-4c4267927f81","2ebf7ddc-3df7-4cb8-a9fe-f3f4c3074e5d","d04792a2-8714-4681-841d-b49f51db8b4b"]}})
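Before running the update, you can list the campaigns and their current owners field to confirm which documents will change (a sketch against the same collection):
db.tds_campaigns.find({}, { owners: 1 })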
You can also retrieve the owner IDs through the Talend Data Stewardship API.
For more information, please refer to documentation about:
accessing-talend-data-stewardship-rest-api-documentation | Qlik Talend Help
Question
Is there a way to stop a task in Talend Management Console from using a Cloud Engine by default when it is published to Qlik Talend Cloud?
The answer is YES.
To stop the Cloud Engine from being used, change the environment that the task runs in so that it uses zero Cloud Engines: on the Environments page, set the Number of allocated Cloud Engines to zero. With no Cloud Engines allocated to the environment, Jobs in that environment will not use a Cloud Engine to run the task.
The following error message appears when logging in to Qlik Talend Studio.
Exception during Initialization
java.util.concurrent.TimeoutException: Timeout when waiting for component server initialization: -Dtalend.studio.sdk.startup.timeout=2
Increase the Timeout
In the Talend-Studio.ini file, raise the startup timeout, for example:
-Dtalend.studio.sdk.startup.timeout=60
You can set 30–60 seconds depending on your environment.
Increase Memory Allocation
In the same Talend-Studio.ini file, adjust the -Xmx value (default is often 1536m).
-Xms1024m
-Xmx4096m
This gives Studio more heap space, reducing startup delays.
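Putting both changes together, the relevant portion of Talend-Studio.ini might look like the following (a sketch; the exact values depend on your environment, and JVM options must appear after the -vmargs line):
-vmargs
-Xms1024m
-Xmx4096m
-Dtalend.studio.sdk.startup.timeout=60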
Potential Checklist
Question
In Qlik Talend Studio, connection components are generally used to re-use a connection in a Job design. You may be unsure when to use a Talend-specific DB connector, such as tSnowflakeConnection or tMysqlConnection, and when to use the generic tJDBCConnection component in a Job.
It depends on your job requirements and use cases.
DB Native Components
For the generic JDBC component, you need to select the database type and its corresponding JDBC driver. It serves as an entry point for the databases listed in tdbconnection | Qlik Talend Help. In general, it is recommended to use the native DB components and drivers to avoid unnecessary translation of JDBC calls into DB-specific calls.
tJDBCConnection
Some use cases depend on specific options. For example, if you need to check "Use or register a shared db connection": the tSnowflakeConnection component does not have a shared connection option, so you cannot pass a connection from a parent Job to a child Job with a shared connection.
For more information about this feature, please refer to Qlik Help Site below:
sharing-database-connection | Qlik Help
The tSnowflakeConnection component can use a shared connection as of Talend Studio R2025-04.
Jobs are much more portable if you combine this with context variables for the JDBC connection and configuration instead of relying on specific database components. The tJDBCConnection component also gains more options, such as the generic shared connection and bulk load processing, and it is effectively a dynamic database connector that uses a JDBC URL to create the database connection.
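For illustration, a tJDBCConnection driven entirely by context variables might use values like these in its Basic settings (the variable names and Snowflake example values are placeholders):
JDBC URL: context.jdbc_url (for example, "jdbc:snowflake://<account>.snowflakecomputing.com/?db=<DATABASE>")
Driver Class: context.jdbc_driver (for example, "net.snowflake.client.jdbc.SnowflakeDriver")
User Id / Password: context.db_user / context.db_password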
A Java error response is returned from the JSON results when using tWriteJSONField to POST data from Postman in Talend 8 with JDK 11.
superclass access check failed: class nu.xom.JDK15XML1_0Parser (in unnamed module @xxxxx) cannot access class com.sun.org.apache.xerces.internal.parsers.SAXParser (in module java.xml) because module java.xml does not export com.sun.org.apache.xerces.internal.parsers to unnamed module @xxxxx
Talend Studio
Go to Studio -> Project Settings -> Build -> Java version -> Module access Settings -> Custom
GLOBAL=java.xml/com.sun.org.apache.xerces.internal.parsers, java.xml/com.sun.org.apache.xerces.internal.util
Talend Remote Engine
When the Job was built with JDK 8/11, the Talend Remote Engine also needs the necessary configuration to support JDK 8/11.
In the <RE_installation>/etc/system.properties file, set the org.talend.execution.JAVA_*_PATH properties with the paths to your Java installations.
The following configuration is a feature introduced in R2025-03 for compatibility with older task executions.
org.talend.execution.JAVA_8_PATH=/path/to/java8/bin
org.talend.execution.JAVA_11_PATH=/path/to/java11/bin
org.talend.execution.JAVA_17_PATH=/path/to/java17/bin
In the meantime, please consider migrating Jobs to JDK 17, since JDK 17 will be the only supported JDK version in the next few years.
This is a compilation error and a task execution compatibility issue.
specify-another-jvm-to-launch-studio | Qlik Talend Help
configure-java-versions-for-job-execution-or-microservice-execution | Qlik Talend Help