Qlik offers a wide range of channels to assist you in troubleshooting, answering frequently asked questions, and getting in touch with our technical experts. In this article, we guide you through all available avenues to secure your best possible experience.
For details on our terms and conditions, review the Qlik Support Policy.
Index:
We're happy to help! Here's a breakdown of resources for each type of need.
Support | Professional Services (*)
Reactively fixes technical issues as well as answers narrowly defined specific questions. Handles administrative issues to keep the product up-to-date and functioning. | Proactively accelerates projects, reduces risk, and achieves optimal configurations. Delivers expert help for training, planning, implementation, and performance improvement.
(*) reach out to your Account Manager or Customer Success Manager
Your first line of support: https://community.qlik.com/
Looking for content? Type your question into our global search bar:
Leverage the enhanced and continuously updated Knowledge Base to find solutions to your questions and best practice guides. Bookmark this page for quick access!
Subscribe to maximize your Qlik experience!
The Support Updates Blog
The Support Updates blog delivers important and useful Qlik Support information about end-of-product support, new service releases, and general support topics.
The Qlik Design Blog
The Design blog is all about products and Qlik solutions, such as scripting, data modelling, visual design, extensions, best practices, and more!
The Product Innovation Blog
By reading the Product Innovation blog, you will learn about what's new across all of the products in our growing Qlik product portfolio.
Q&A with Qlik
Live sessions with Qlik Experts in which we focus on your questions.
Techspert Talks
Techspert Talks is a free monthly webinar series that facilitates knowledge sharing.
Technical Adoption Workshops
Our in-depth, hands-on workshops allow new Qlik Cloud Admins to build alongside Qlik Experts.
Qlik Fix
Qlik Fix is a series of short videos with helpful solutions for Qlik customers and partners.
Suggest an idea, and influence the next generation of Qlik features!
Search & Submit Ideas
Ideation Guidelines
Get the full value of the community.
Register a Qlik ID:
Incidents are supported through our Chat, by clicking Chat Now on any Support Page across Qlik Community.
To raise a new issue, all you need to do is chat with us. With this, we can:
Log in to manage and track your active cases in the Case Portal.
Please note: the easiest way to create a new case is via our chat (see above), which logs your case through a series of guided intake questions.
When creating a case, you will be prompted to enter problem type and issue level. Definitions shared below:
Select Account Related for issues with your account, licenses, downloads, or payment.
Select Product Related for technical issues with Qlik products and platforms.
If your issue is account related, you will be asked to select a Priority level:
Select Medium/Low if the system is accessible, but there are some functional limitations that are not critical to daily operations.
Select High if there are significant impacts on normal work or performance.
Select Urgent if there are major impacts on business-critical work or performance.
If your issue is product related, you will be asked to select a Severity level:
Severity 1: Qlik production software is down or not available, but not because of scheduled maintenance and/or upgrades.
Severity 2: Major functionality is not working in accordance with the technical specifications in documentation or significant performance degradation is experienced so that critical business operations cannot be performed.
Severity 3: Any error that is not a Severity 1 or Severity 2 issue. For more information, visit our Qlik Support Policy.
If you require a support case escalation, you have two options:
When other Support Channels are down for maintenance, please contact us via phone for high severity production-down concerns.
A collection of useful links.
Qlik Cloud Status Page
Keep up to date with Qlik Cloud's status.
Support Policy
Review our Service Level Agreements and License Agreements.
Live Chat and Case Portal
Your one stop to contact us.
You are experiencing a sudden failure of connections to data sources such as MySQL, Qlik Cloud, and Microsoft SQL Server in Talend Data Catalog. These connections were previously working correctly. When you test the connection, you receive the following error message:
"An error occurred in the remote service [-1,1] - MIMB execution thread ( ) was not found"
This issue can occur even if you can locally access the data sources with other tools, such as DBeaver.
The error message indicates that the Talend Data Catalog application is unable to connect to the bridge server. This is because the Remote Harvest Agent required to access your local data sources is either missing, has been deleted, or is not properly configured. The default server running in the cloud does not have access to data sources behind your firewall.
To resolve this issue, you need to install and configure a new Remote Harvest Agent. This agent can be installed on the same server as your data source (e.g., MySQL) or on another machine that has access to it.
Here are the steps to follow:
Install the Remote Harvest Agent:
Configure the New Agent in Talend Data Catalog:
Use the New Agent for Harvesting:
These instructions are the same for both the cloud and on-premise versions of Talend Data Catalog.
Additional Information
For more detailed instructions, you can refer to the following documentation:
Note: Although these links pertain to version 8.0, the process remains identical for version 8.1.
When using a Microsoft Azure ADLS as a target in a Qlik Replicate task, the Full Load data are written to CSV, TEXT, or JSON files (depending on the endpoint settings). The Full Load Files are named using incremental counters e.g. LOAD00000001.csv, LOAD00000002.csv. This is the default behavior.
In some scenarios, you may want to use the table name as the file name rather than LOAD########.
This article describes how to rename the output files from LOAD######## to the <schemaName>_<tableName>__######## format while Qlik Replicate is running on a Windows platform.
This article focuses on cloud target endpoint types (ADLS, S3, etc.). The example uses Microsoft Azure ADLS, where the output files are located in remote cloud storage.
This customization is provided as is. Qlik Support cannot provide continued support for the solution. For assistance, reach out to Professional Services.
@echo on
rem %1 = full path of the uploaded file, %2 = table owner (schema), %3 = table name
rem Extract the file name without its extension (e.g. LOAD00000001)
for %%a in (%1) do set "fn=%%~na"
echo %fn%
rem Keep the 8-digit counter that follows the LOAD prefix
set sn=%fn:~4,8%
echo %sn%
rem Move/rename the file in ADLS Gen2 to <schema>_<table>__<counter>.csv
az storage fs file move -p %1 -f johwg --new-path johwg/demo/%2.%3/%2_%3__%sn%.csv --account-name mydemoadlsgen2johwg --account-key Wbq5bFUohzfg2sPe7YW6azySm24xp4UdrTnuDSbacMi44fkn4UqawlwZCcn2vdlm/2u70al/vsWF+ASttoClUg==
where johwg is the Container Name. account-name and account-key are used to connect to ADLS storage. The values are obfuscated in the above sample.
General
Storage Type : Azure Data Lake Storage (ADLS) Gen2
Container : johwg
Target folder : /demo
Advanced
Post Upload Processing, choose "Run command after upload"
Command name : myrename3_adls.bat
Working directory: leave blank
Parameters : ${FILENAME} ${TABLE_OWNER} ${TABLE_NAME}
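With these settings, the batch file receives the uploaded file path, the table owner, and the table name as %1, %2, and %3 respectively. As a worked example (the schema and table names are hypothetical), a file LOAD00000001.csv produced for the table HR.EMPLOYEES would be moved to johwg/demo/HR.EMPLOYEES/HR_EMPLOYEES__00000001.csv.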
Qlik Replicate
Microsoft Azure ADLS target
Qlik Replicate and File target: How to rename output files LOAD######## to table name format on Windows
Qlik Replicate and File target: How to rename output files LOAD######## to table name format on Linux
The reload task fails with a message like this in the document log:
2017-11-10 10:16:48 0454 WHERE isnum(Sequence#)
2017-11-10 10:16:48 Error: Field 'Sequence#' not found
2017-11-10 10:16:48 Execution Failed
2017-11-10 10:16:48 Execution finished.
or
Sequence# field not found in 'lib://SHARE/Repository/Trace/SERVERNAME_Synchronization_Repository.txt'
The steps below apply whenever a field cannot be found; affected fields include, but are not limited to, Sequence#, CounterName, and ProxySessionID.
Environment:
QLIK-35804: Occasionally when Qlik Sense services stop, they do not fully write to the logs in the expected format.
Restart the Qlik Sense services
Modify the License and Operations Monitor apps so that they continue parsing logs even if a particular log fails to parse fully. Wrap the affected load statements between

//begin ignoring errors parsing logs
set errormode = 0;

and

//end ignoring errors parsing logs
set errormode = 1;

This will look something like this:
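A minimal sketch of a wrapped section (the table name, statement, and file path below are placeholders; the actual load statements in the monitor apps differ):

//begin ignoring errors parsing logs
set errormode = 0;
ProxyLog:
LOAD * FROM [lib://ServerLogFolder/SERVERNAME_AuditActivity_Proxy.txt]
(txt, utf8, embedded labels, delimiter is '\t');
//end ignoring errors parsing logs
set errormode = 1;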
This capability is being rolled out across regions over time; watch this space for when the feature has been successfully rolled out in your region.
With the introduction of shared automations, it will be possible to create, run, and manage automations in shared spaces.
Limit the execution of an automation to specific users.
Every automation has an owner. When an automation runs, it will always run using the automation connections configured by the owner. Any Qlik connectors that are used will use the owner's Qlik account. This guarantees that the execution happens as the owner intended it to happen.
The user who created the run, along with the automation's owner at run time, are both logged in the automation run history.
There are five options for running an automation:
Collaborate on an automation through duplication.
Automations are used to orchestrate various tasks, from Qlik use cases like reload task chaining, app versioning, or tenant management, to action-oriented use cases like updating opportunities in your CRM, managing supply chain operations, or managing warehouse inventories.
To prevent users from editing these live automations, we're putting forward a collaborate-through-duplication approach. This makes it impossible for non-owners to make changes to an automation that could negatively impact operations.
When a user duplicates an existing automation, they will become the owner of the duplicate. This means the new owner's Qlik account will be used for any Qlik connectors, so they must have sufficient permissions to access the resources used by the automation. They will also need permissions to use the automation connections required in any third-party blocks.
Automations can be duplicated through the context menu:
As it is not possible to display a preview of the automation blocks before duplication, please use the automation's description to provide a clear summary of the purpose of the automation:
The Automations Activity Centers have been expanded with information about the space in which an automation lives. The Run page now also tracks which user created a run.
Note: Triggered automation runs will be displayed as if the owner created them.
The Automations view in Administration Center now includes the Space field and filter.
The Runs view in Administration Center now includes the Executed by and Space at runtime fields and filters.
The Automations view in the Automations Activity Center now includes the Space field and filter.
Note: Users can configure which columns are displayed here.
The Runs view in the Automations Activity Center now includes the Space at runtime, Executed by, and Owner fields and filters.
In this view, you can see all runs from automations you own as well as runs executed by other users. You can also see runs of other users' automations where you are the executor.
To see the full details of an automation run, go to Run History through the automation's context menu. This is also accessible to non-owners with sufficient permissions in the space.
The run history view will show the automation's runs across users, and the user who created the run is indicated by the Executed by field.
The Metrics tab in the Automations Activity Center has been deprecated in favor of the Automations Usage app, which gives a more detailed view of automation consumption.
Question
When using Heroku PostgreSQL as a source integration, how do you enter the Client Key required for Mutual TLS (mTLS) authentication? Since the database requires mTLS to connect, are there any settings available for it in Stitch?
This feature is not supported right now and therefore there are no settings for it in Qlik Stitch. It is considered a New Feature Request.
Please find it here:
IdeaID:#492366_Stitch mTLS and Heroku integration
On the right side, click the thumbs-up icon under “Request Actions” to let our product team know you are interested in seeing this feature placed on the product roadmap for consideration.
After upgrading Qlik Sense Enterprise on Windows (example: February 2024 to November 2024), the Data Load editor fails to load.
The error:
Connection lost. Make sure that Qlik Sense is running properly. If your session has timed out due to inactivity, refresh to continue working.
Error Code: 16
The console log reads:
Error during WebSocket handshake: Unexpected response code:431
Adjust the MaxHttpHeaderSize as documented in Qlik Sense Client Managed: Adjust the MaxHttpHeaderSize for the Micro Services run by the ServiceDispatcher.
The console error 431 means Request Header Fields Too Large in HTTP.
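As a hedged sketch of what the documented change looks like (the service name and value below are examples only; follow the linked article for the exact services and recommended size): in C:\Program Files\Qlik\Sense\ServiceDispatcher\services.conf, add the parameter under the affected service's parameters section, then restart the Qlik Sense Service Dispatcher on each node:

[dataprepservice.parameters]
--max-http-header-size=65534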
Pressing a button that executes an Automation fails with:
Bad request
An additional error may be shown:
You are not authorized to run this automation
These errors are typically seen after changing the Automation's owner.
When the owner of an Automation is changed, the system automatically disables that Automation. In this context, this error indicates that the automation is disabled or that its run mode is incorrectly configured.
To resolve this:
This error can also occur after re-enabling an automation if the execution token has changed.
Follow these steps to re-establish the connection:
Question
Does Qlik Stitch support Databricks Unity Catalog as a destination option?
Databricks is requiring all customers to migrate from Databricks Delta Lake to Unity Catalog.
Qlik Stitch does not yet support Unity Catalog with Databricks as a destination option, and using it could result in loading errors or other problems.
It is considered a New Feature Request.
Please find it here:
IdeaID:#492357-Qlik Stitch Support for Databricks Unity Catalog as a destination option
On the right side, click the thumbs-up icon under “Request Actions” to let our product team know you are interested in seeing this feature placed on the product roadmap for consideration.
Databricks Delta Lake on AWS (v1) Data Loading Reference
Images included in an Excel worksheet may appear stretched or use the wrong aspect ratio when previewed in an Excel report or after being distributed in an on-demand reporting task.
This affects charts imported by Qlik Sense as well as static images and shapes.
To fix the affected Qlik Reporting template, change the body font currently used in the Excel template to a supported font family:
To prevent other related symptoms, set the main display zoom level to 100% when creating and editing an Excel template on the Windows platform. Note that this is a Windows OS setting and not the Zoom set within the Excel App.
An unsupported font is used in the current Qlik Reporting template.
SUPPORT-3842, QCB-32146
You may be experiencing the error "No import token found for connection" or the error "CRITICAL Error saving list of discovered streams: {'message': 'Not Authorized'}" when running extractions on Stitch integrations.
The integration appears to run fine and extraction jobs do not error out; however, no data is loaded, and the extraction logs show the above error.
To resolve this, you will need to create a new integration, as that establishes a new connection with a new import_token.
If the version of this integration is deprecated, creating a new integration will automatically use the latest version.
Upgrading the integration using the upgrade button (if one is available) or creating a new integration is the only path forward at this point to get your integrations running again.
Creating a new integration with a different schema name is recommended because you will benefit from a free historical re-sync of your data.
What is happening here is that the integration was created some years ago and an internal token has expired. Specifically, the import_token for connections created over 5 years ago expires at the 5-year mark, and Stitch does not automatically generate a new import token. As a result, you will see this message in extractions.
If you prefer to re-use the same destination schema name when creating a new integration, please refer to this article:
Qlik-Stitch-How-to-Upgrade-the-Integration-to-Latest-Version
Unique Index not found for SAP HANA version 2.0 SPS 7 in Qlik Replicate. This version of SAP HANA reports the index type as "INVERTED VALUE" instead of "INVERTED VALUE UNIQUE".
This change in SAP has no direct impact by itself, but related indirect changes have left Qlik Replicate unable to find the Unique Indexes of tables in the table metadata for the SAP HANA source endpoint.
The missing Unique Indexes cause disruptions to the CDC process, as updates cannot be applied without a Unique Index.
This defect (SUPPORT-372) has been resolved in the early Service Pack 3 for Qlik Replicate 2024.11.
Contact Qlik Support for access to the early Service Pack build.
Product Defect ID: SUPPORT-372
Opening QlikView Documents from the AccessPoint directly in the QlikView Desktop leads to the document being launched with the wrong (AJAX) client.
Previously, documents would automatically launch with qvp:// and use the Desktop client as the default.
Clicking the View details link lists only three clients:
Ajax client, Internet Explorer plugin, Mobile client
Previous versions would list the QlikView desktop client:
This behavior was changed between 12.70 and 12.80. To re-enable the previous default:
If the steps did not succeed:
QCB-32143
The Talend tDBInput component cannot handle CLOB data types properly when you attempt to ingest data from a DB2 source into a target database.
This article briefly introduces how to handle CLOB data types properly when ingesting from DB2 to a target using Talend.
The code below is a starting point that should be tested and adapted to your environment.
Use this as an example to study:
Example code:

package routines;

import java.io.IOException;
import java.io.InputStream;
import java.io.StringWriter;
import java.sql.Clob;
import java.sql.SQLException;
import java.util.logging.Level;
import java.util.logging.Logger;

import org.apache.commons.io.IOUtils;

public class ClobUtils {

    // Convenience overload: accepts the untyped value delivered by the input row
    public static String getClobAsString(Object obj) {
        String out = null;
        if (obj instanceof Clob) {
            out = getClobAsString((Clob) obj);
        } else {
            Logger.getLogger("ClobUtils").log(Level.FINE, "null or non-CLOB value");
        }
        return out;
    }

    // Reads the full CLOB content into a String
    public static String getClobAsString(Clob clobObject) {
        String clobAsString = null;
        if (clobObject != null) {
            try {
                long clobLength = clobObject.length();
                if (clobLength <= Integer.MAX_VALUE) {
                    // Small enough to fetch in a single call (JDBC substrings are 1-based)
                    clobAsString = clobObject.getSubString(1, (int) clobLength);
                } else {
                    // Stream very large CLOBs instead of materializing them in one call
                    InputStream in = clobObject.getAsciiStream();
                    StringWriter w = new StringWriter();
                    IOUtils.copy(in, w, "UTF-8");
                    clobAsString = w.toString();
                }
            } catch (SQLException | IOException e) {
                e.printStackTrace();
            }
        }
        return clobAsString;
    }
}
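As a usage sketch (the column name "description" is hypothetical), the routine can be called from a tJavaRow component placed between the DB2 input and the target output:

// tJavaRow: convert the incoming CLOB column to a String for the target schema
output_row.description = routines.ClobUtils.getClobAsString(input_row.description);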
When a user authenticates with SAML/JWT/Ticket, security rules based on the attributes from the SSO provider do not work and the attributes are not visible in the QMC under the User record.
Environments:
When a user authenticates with SAML, a list of attributes will be given to Qlik Sense based on what is set up in the virtual proxy. The attributes depend on the implementation.
However, these User attribute(s) returned from the SSO provider are only kept for the user session and are not stored/persisted in the Qlik Sense Repository Database. Therefore, they do not appear in the QMC like attributes synchronized via a UDC connection (data which is persisted to the database).
The Central Proxy's default virtual proxy cannot be deleted, and virtual proxy prefixes are unique per proxy.
To remove the prefix from a different virtual proxy so that another authentication method is used by default:
Note: If the virtual proxy made default is using SAML authentication, the SAML Assertion Consumer Service URL will also need to be updated in the Identity Provider configuration.
This article explains how the Qlik Reporting connector in Qlik Application Automation can be used to generate a bursted report that delivers recipient-specific data.
For more information on the Qlik Reporting connector, see this Reporting tutorial.
This article offers two examples where the recipient list and field for reduction are captured in an XLS file or a straight table in an app. Qlik Application Automation allows you to connect to a variety of data sources, including databases, cloud storage locations, and more. This allows you to store your recipient lists in the appropriate location and apply the concepts found in the examples below to create your reporting automation. By configuring the Start block's run mode, the reporting automations can be scheduled or driven from other business processes.
In this example, the email addresses of the recipients are stored in a straight table. Add a private sheet to your app and add a straight table to it. This table should contain the recipients' email address, name, and a value to reduce the app on. We won't go over the step-by-step creation of this automation since it's available as a template in the template picker under the name "Send a burst report to email recipients from a straight table".
Instead, a few key blocks of this template are discussed below.
In this example, the email addresses of the recipients are stored in an Excel file. This can be a simple file that contains one worksheet with headers on the first row (name, email & a value for reduction) and one record on each subsequent row.
The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support for the solution below may not be provided by Qlik Support.
You may be experiencing a pattern of Binlog errors like the one below when extracting data from MySQL integrations in Stitch.
Fatal Error Occurred - Binlog has expired for tables
The problem stems from how certain parameters are configured in your AWS RDS instance, typically because they were never explicitly set.
MySQL integrations can be configured against AWS RDS instances. For integrations configured this way, the requirements for the Binary Log Retention Period differ from those of standalone MySQL instances, and issues arise if the Binary Log Retention Period is incorrectly or insufficiently configured.
The Binary Log Retention Period should be configured to 168 hours (7 days). Note that the stored procedure takes its value in hours, so use 168:

call mysql.rds_set_configuration('binlog retention hours', 168);

Running this call will fix the issue.
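To verify the current value, you can run the RDS stored procedure that lists the instance's RDS-level settings:

call mysql.rds_show_configuration;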
The Binary Log Retention Period is controlled by AWS RDS, which functions differently from, and independently of, a standalone MySQL instance.
If the Binary Log Retention Period is set to 0, this will cause issues.
An RDS-specific stored procedure governs binlog retention beyond the traditional MySQL parameters, as shown in the calls above.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql-stored-proc-configuring.html
This article provides step-by-step instructions to ensure your Talend Studio environment no longer uses the flagged spring-core-5.3.33.jar file and instead uses the updated spring-core-6.1.14.jar. It also explains how to verify your job builds are free of the old JAR.
These steps are only necessary in cases where Qlik Talend Studio did not remove the old jar when applying the relevant patch (2025-02 or newer).
Create a backup copy of the JAR before deletion, in case you need to restore it later.
Cannot log in to Qlik Talend Administration Center. Login fails with:
Error: 500, The call failed on the server; see server log for details
The log file reports:
ERROR SqlExceptionHelper - The MySQL server is running with the LOCK_WRITE_GROWTH option so it cannot execute this statement
Expand cloud storage for MySQL or perform a data cleanup to free up the space required.
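If you opt for a data cleanup, a standard MySQL query (not specific to Talend Administration Center) can show where the space is going by listing the largest tables:

SELECT table_schema, table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
ORDER BY (data_length + index_length) DESC
LIMIT 10;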
Cloud MySQL has reached the cloud storage quota during data writes.
Using a Signed License Key with its Signed License Definition in a long-term offline environment past the 90 days provided by Delayed Sync requires additional configuration steps besides license modification.
These changes need to be made on all nodes running the Service Dispatcher, not only the Central node.
Once the changes have been made, retrieve the updated SLD key from https://license.qlikcloud.com/sld and apply it for successful offline activation.
Note on upgrading: If using a version of Qlik Sense prior to November 2022, this file may be overwritten during an upgrade. Please be sure to re-apply this parameter and restart the Service Dispatcher on all nodes after an upgrade. With Qlik Sense November 2022 or later, custom service settings are by default kept during the upgrade. See Considerations about custom configurations.
QB-25231