Qlik offers a wide range of channels to assist you in troubleshooting, answering frequently asked questions, and getting in touch with our technical experts. In this article, we guide you through all available avenues to secure your best possible experience.
For details on our terms and conditions, review the Qlik Support Policy.
Index:
We're happy to help! Here's a breakdown of resources for each type of need.
| Support | Professional Services (*) |
| --- | --- |
| Reactively fixes technical issues and answers narrowly defined questions. Handles administrative issues to keep the product up to date and functioning. | Proactively accelerates projects, reduces risk, and achieves optimal configurations. Delivers expert help for training, planning, implementation, and performance improvement. |
(*) reach out to your Account Manager or Customer Success Manager
Your first line of support: https://community.qlik.com/
Looking for content? Type your question into our global search bar:
Leverage the enhanced and continuously updated Knowledge Base to find solutions to your questions and best practice guides. Bookmark this page for quick access!
Subscribe to maximize your Qlik experience!
The Support Updates Blog
The Support Updates blog delivers important and useful Qlik Support information about end-of-product support, new service releases, and general support topics.
The Qlik Design Blog
The Design blog is all about Qlik products and solutions: scripting, data modelling, visual design, extensions, best practices, and more!
The Product Innovation Blog
By reading the Product Innovation blog, you will learn about what's new across all of the products in our growing Qlik product portfolio.
Q&A with Qlik
Live sessions with Qlik Experts in which we focus on your questions.
Techspert Talks
Techspert Talks is a free monthly webinar series that facilitates knowledge sharing.
Technical Adoption Workshops
Our in-depth, hands-on workshops allow new Qlik Cloud Admins to build alongside Qlik Experts.
Qlik Fix
Qlik Fix is a series of short videos with helpful solutions for Qlik customers and partners.
Suggest an idea, and influence the next generation of Qlik features!
Search & Submit Ideas
Ideation Guidelines
Get the full value of the community.
Register a Qlik ID:
Incidents are supported through our Chat: click Chat Now on any Support page across Qlik Community.
To raise a new issue, all you need to do is chat with us. With this, we can:
Log in to manage and track your active cases in the Case Portal.
Please note: the easiest way to create a new case is via our chat (see above). The chat will log your case through a series of guided intake questions.
When creating a case, you will be prompted to enter a problem type and issue level. Definitions are shared below:
Select Account Related for issues with your account, licenses, downloads, or payment.
Select Product Related for technical issues with Qlik products and platforms.
If your issue is account related, you will be asked to select a Priority level:
Select Medium/Low if the system is accessible but there are some functional limitations that are not critical to daily operations.
Select High if there are significant impacts on normal work or performance.
Select Urgent if there are major impacts on business-critical work or performance.
If your issue is product related, you will be asked to select a Severity level:
Severity 1: Qlik production software is down or not available, but not because of scheduled maintenance and/or upgrades.
Severity 2: Major functionality is not working in accordance with the technical specifications in documentation or significant performance degradation is experienced so that critical business operations cannot be performed.
Severity 3: Any error that is not a Severity 1 or Severity 2 issue. For more information, visit our Qlik Support Policy.
If you require a support case escalation, you have two options:
When other Support Channels are down for maintenance, please contact us via phone for high severity production-down concerns.
A collection of useful links.
Qlik Cloud Status Page
Keep up to date with Qlik Cloud's status.
Support Policy
Review our Service Level Agreements and License Agreements.
Live Chat and Case Portal
Your one stop to contact us.
When using SAML or ticket authentication in Qlik Sense, users who belong to a large number of groups see the error 431 Request header fields too large on the hub and cannot proceed further.
The information in this article is provided as-is, to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support for the solution below may not be provided by Qlik Support.
The fix adds support for a configurable MaxHttpHeaderSize; the default setting remains a header size of 8192 bytes.
Steps:
[globals]
LogPath="${ALLUSERSPROFILE}\Qlik\Sense\Log"
(...)
MaxHttpHeaderSize=16384
Note: The above value (16384) is an example. You may need a larger value, depending on the total number of characters across all the AD groups to which the user belongs. The maximum value is 65534.
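As a rough illustration of why the header grows with group membership, the sketch below estimates the header bytes consumed by a user's groups and compares the result to the default limit. The group names and per-group overhead are hypothetical; only the 8192-byte default and 65534-byte maximum come from the article.

```python
# Rough illustration only: the 431 error occurs because every AD group the
# user belongs to is carried in the authentication header. Group names and
# overhead figures below are hypothetical; 8192/65534 are from the article.
DEFAULT_HEADER_SIZE = 8192
MAX_HEADER_SIZE = 65534

def estimate_header_bytes(groups, overhead_per_group=3, base_overhead=2048):
    """Very rough estimate: group-name characters plus a small separator
    overhead per group, on top of a base allowance for the rest of the
    header (cookies, ticket, etc.)."""
    return base_overhead + sum(len(g) + overhead_per_group for g in groups)

# A user in 400 hypothetical groups easily exceeds the 8192-byte default.
groups = [f"DEPT-APP-QLIK-ROLE-{i:04d}" for i in range(400)]
needed = estimate_header_bytes(groups)
print(needed, needed > DEFAULT_HEADER_SIZE)
```

If the estimate exceeds the default, round up to the next comfortable value (such as 16384) when setting MaxHttpHeaderSize, staying below the 65534 maximum.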
Qlik Sense Enterprise on Windows
From Qlik Sense February 2023 onwards, apps listed in the Qlik Sense Management Console (Apps view) are now represented with clickable links. Clicking them will open the app directly on the hub.
This means the app names are no longer plain grey text but are now formatted as links (blue, underlined). See Apps for details.
Example:
To disable clickable links in the Management Console:
Using only a custom virtual proxy prefix is currently not supported. The default virtual proxy prefix must remain accessible, as the generated link will always refer to it.
Qlik is actively working on improving this feature: SHEND-1197.
Qlik Sense Enterprise on Windows February 2023 and above
This article is intended to get started with the Microsoft Outlook 365 connector in Qlik Application Automation.
To authenticate with Microsoft Outlook 365, you create a new connection. The connector uses OAuth2 for authentication and authorization. You will be prompted with a popup screen to consent to a list of permissions for Qlik Application Automation to use. The OAuth scopes that are requested are:
The scope of this connector is limited to sending emails. Sending email attachments is currently not supported; we are looking to provide this functionality in the future. The suggested approach is to upload files to a different platform, e.g., OneDrive or Dropbox, and create a sharing link that can be included in the email body.
The following parameters are available on the Send Email block:
As email attachments are not currently supported, we first need to generate a sharing link in OneDrive or an alternative file-sharing service. The following automation shows how to generate a report from a Qlik Sense app, upload the report to Microsoft OneDrive, create a sharing link, and send out an email with the sharing link in the body. This automation is also attached as JSON to this post.
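For orientation, the underlying Microsoft Graph request that creates such a OneDrive sharing link looks roughly like the sketch below. The item ID is hypothetical, and in Qlik Application Automation the OneDrive connector blocks make this call for you; the request shape follows Microsoft Graph's driveItem createLink documentation.

```python
import json

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_create_link_request(item_id: str, link_type: str = "view") -> tuple:
    """Build the Microsoft Graph 'createLink' POST for a OneDrive item.

    Returns (url, json_body). The item_id is illustrative; in practice the
    automation's OneDrive blocks handle authentication and this call.
    """
    url = f"{GRAPH_BASE}/me/drive/items/{item_id}/createLink"
    body = json.dumps({"type": link_type, "scope": "anonymous"})
    return url, body

url, body = build_create_link_request("01ABC123")  # hypothetical item ID
print(url)
```

The link returned by this call is what gets embedded in the email body in place of an attachment.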
The Talend Data Stewardship installer for on-premises and hybrid versions fails to redirect gateway.url to the correct host from the IAM or Cloud server, resulting in an error page.
Instead, it points to localhost rather than the server IP or hostname specified under TMC -> Configuration -> TDS.
Temporary Workaround
Final Solution
Wait for new installer versions released after R2025-05
This is a known installer-level bug: the IP/host is not propagated to the configuration of the installed Talend Data Stewardship version.
Internal Jira case ID: SUPPORT-3964
Dates are written to the target as 01.01.0101.
This occurs when running a Full Load from a SAP Hana Source Endpoint while the Log-based CDC Mode is set on the SAP Hana task and the DATE columns are empty.
This behavior has been identified as a defect (RECOB-9893) in Qlik Replicate 2024.11 SP02 and previous versions.
Install the latest Service Pack 2024.11 SP03 or any later release.
If the latest Service Release containing the patch is not yet available, contact Qlik Support.
RECOB-9893
Old versions of RabbitMQ include multiple vulnerabilities. This article covers:
They do not impact Qlik NPrinting.
Qlik NPrinting 2023 versions and later use RabbitMQ 3.12, which is not affected by these vulnerabilities.
A binary load command that refers to the app ID (for example, Binary [idapp];) does not work and fails with:
General Script Error
or
Binary load fails with error Cannot open file
Before Qlik Sense Enterprise on Windows November 2024 Patch 8, the Qlik Engine permitted an unsupported and insecure method of binary loading from applications managed by Qlik Sense Enterprise on Windows.
Due to security hardening, this unsupported and insecure action is now denied.
Binary loads of Qlik Sense applications require a QVF file extension. In practice, this will require exporting the Qlik Sense app from the Qlik Sense Enterprise on Windows site to a folder location from which a binary load can be performed. See Binary Load and Limitations for details.
Example of a valid binary load:
Binary [lib://My_Extract_Apps/Sales_Model.qvf];
Example of an invalid binary load:
Binary [lib://Apps/777a0a66-555x-8888-xx7e-64442fa4xxx44];
A Qlik NPrinting task contains a report with filters applied, either on the report itself, the recipients, or the task itself.
Not all recipients receive the report, or the Qlik NPrinting task stalls.
The log files display the following error:
Filters for request xxx produce an invalid selection.
Other related errors that may appear in the Engine logs when running reports with invalid selections include:
ERROR: Error processing request of type Qlik.Reporting.Engine.Messages.Requests.FilterMaterializationRequest for sense app <app id>. ERROR: Cannot apply filter Filters:...
Bookmark: , Void: False to current document data. Requested fields with evaluates are: F\<field name>
...
Error processing request of type Qlik.Reporting.Engine.Messages.Requests.FormulaNodeRequest for sense app <app id>. ERROR: System.TimeoutException: Method "EvaluateEx" timed out
The reports are not being generated due to filter incompatibility.
To resolve this:
Qlik NPrinting will not generate reports when the applied filters are incompatible. It automatically checks for incompatible filters at the beginning of task execution and stops report generation if any are detected. This is a standard feature in all Qlik NPrinting releases.
Without this safeguard, QlikView or Qlik Sense would remove the incompatible selections and proceed, potentially generating a report containing all (or no) data, which may expose confidential information.
Common incompatible filters are:
Duplicate Filter Application
Applying the same filter to multiple locations (such as to both a user and a task) can result in a conflict. Depending on how the filters interact, double selections can lead to all (or no) data being shown.
Removed or Non-Existent Filter Values
If a filter references a value that no longer exists in the source document, it becomes incompatible. For instance, if a filter was set to Country = "Germany" and the latest reload of the source data removed "Germany" from the Country field, the filter will no longer work.
Contextually Conflicting Filters (No Data in Common)
A filter might work independently but become incompatible when combined with other filters. For example, a filter for Country = "Germany" may function properly on its own. However, the selection is invalidated if you also apply Month = "April" and April has no data for Germany.
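The "no data in common" case can be illustrated with a toy dataset (the rows and values are hypothetical, not from any Qlik app): each filter matches rows on its own, but their combination selects nothing, which is exactly the situation Qlik NPrinting rejects.

```python
# Toy illustration of contextually conflicting filters: each filter matches
# rows on its own, but their combination selects nothing.
rows = [
    {"Country": "Germany", "Month": "March"},
    {"Country": "Germany", "Month": "May"},
    {"Country": "France",  "Month": "April"},
]

germany = [r for r in rows if r["Country"] == "Germany"]
april   = [r for r in rows if r["Month"] == "April"]
both    = [r for r in rows if r["Country"] == "Germany" and r["Month"] == "April"]

print(len(germany), len(april), len(both))  # combined filter selects no rows
```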
This article provides an overview of how to send straight table data to email as an HTML table using Qlik Automate.
The template is available in the template picker. To find it, navigate to Add new -> New automation -> Search templates, search for 'Send straight table data to email as table', and click the Use template option.
You will find a version of this automation attached to this article: "Send-straight-table-data-to-email -as-HTML-table.json".
Content:
The following steps describe how to build the demo automation:
An example output of the email sent:
The information in this article is provided as-is, to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support for the solution below may not be provided by Qlik Support.
How to export more than 100k cells using Get Straight Table Data Block
This Techspert Talks session covers:
- What to plan for
- Migration Pathways
- Cloud Best Practices
Chapters:
Resources:
The user that runs the Qlik Sense Enterprise on Windows services is commonly referred to as the Qlik Sense Enterprise service account. The user is defined as the "Log On As" user in Windows service settings.
In a multi-node deployment, it is recommended to use the same domain user for all Qlik Sense services on all nodes in the same deployment.
NOTE: The Qlik Sense Repository Database service is an exception and is expected to always run as Local System.
Confirm the current Qlik Sense Enterprise service account:
Erlang/Open Telecom Platform (OTP) has disclosed a critical security vulnerability: CVE-2025-32433.
Is Qlik NPrinting affected by CVE-2025-32433?
Qlik NPrinting installs Erlang OTP as part of the RabbitMQ installation, which is essential to the correct functioning of the Qlik NPrinting services.
RabbitMQ does not use SSH, meaning the workaround documented in Unauthenticated Remote Code Execution in Erlang/OTP SSH is already applied. Consequently, Qlik NPrinting remains unaffected by CVE-2025-32433.
All Qlik NPrinting versions released from the 20th of May 2025 onwards will include patched versions of OTP and fully address this vulnerability.
When upgrading Talend Studio to a patch level at or above R2024-12, Snowflake jobs that have logging enabled may suddenly switch to writing logs to the home directory (or Users directory on Windows) instead of the default "/tmp" output directory.
If the home/Users directory size is restricted, or other restrictions apply in the environment, those Snowflake jobs may fill up the directory and fail with an OutOfMemory error or a Permission Denied issue.
To continue collecting logs from the Snowflake JDBC driver, create a JSON file to configure logging on the Jobserver/Remote Engine instance. This JSON file tells the Snowflake driver which logging level to use, where to write the log, and any additional configuration required for your use case. Once it is created and stored in a location the Jobserver/Remote Engine can access, create an environment variable called "SF_CLIENT_CONFIG_FILE" that points to the JSON file.
An example of a logging JSON file for Snowflake has been shared below:
############################################################
# Default Logging Configuration File
#
# You can use a different file by specifying a filename
# with the java.util.logging.config.file system property.
# For example java -Djava.util.logging.config.file=myfile
############################################################

############################################################
# Global properties
############################################################

# "handlers" specifies a comma-separated list of log Handler
# classes. These handlers will be installed during VM startup.
# Note that these classes must be on the system classpath.
# ConsoleHandler and FileHandler are configured here such that
# the logs are dumped into both a standard error and a file.
handlers = java.util.logging.ConsoleHandler, java.util.logging.FileHandler

# Default global logging level.
# This specifies which kinds of events are logged across
# all loggers. For any given facility this global level
# can be overridden by a facility-specific level.
# Note that the ConsoleHandler also has a separate level
# setting to limit messages printed to the console.
.level = INFO

############################################################
# Handler specific properties.
# Describes specific configuration information for Handlers.
############################################################

# default file output is in the tmp dir
java.util.logging.FileHandler.pattern = /tmp/snowflake_jdbc%u.log
java.util.logging.FileHandler.limit = 5000000000000000
java.util.logging.FileHandler.count = 10
java.util.logging.FileHandler.level = INFO
java.util.logging.FileHandler.formatter = net.snowflake.client.log.SFFormatter

# Limit the messages that are printed on the console to INFO and above.
java.util.logging.ConsoleHandler.level = INFO
java.util.logging.ConsoleHandler.formatter = net.snowflake.client.log.SFFormatter

# Example to customize the SimpleFormatter output format
# to print one-line log message like this:
#     <level>: <log message> [<date/time>]
#
# java.util.logging.SimpleFormatter.format=%4$s: %5$s [%1$tc]%n

############################################################
# Facility specific properties.
# Provides extra control for each logger.
############################################################

# Snowflake JDBC logging level.
net.snowflake.level = INFO
net.snowflake.handler = java.util.logging.FileHandler
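For reference, Snowflake's client-side configuration file referenced by SF_CLIENT_CONFIG_FILE is a small JSON document; the sketch below writes a minimal one and points the environment variable at it. The {"common": {"log_level", "log_path"}} layout follows Snowflake's easy-logging documentation, and the file path used here is illustrative; adjust both to your environment.

```python
import json
import os
import tempfile

# Minimal Snowflake client "easy logging" configuration. The layout follows
# Snowflake's easy-logging docs ({"common": {...}}); values are illustrative.
config = {
    "common": {
        "log_level": "INFO",  # e.g. OFF, ERROR, WARN, INFO, DEBUG, TRACE
        "log_path": "/tmp",   # directory the driver writes its logs to
    }
}

# Illustrative location; store it anywhere the Jobserver/Remote Engine can read.
config_path = os.path.join(tempfile.gettempdir(), "sf_client_config.json")
with open(config_path, "w") as fh:
    json.dump(config, fh, indent=2)

# Point the driver at the file before the Jobserver/Remote Engine process starts.
os.environ["SF_CLIENT_CONFIG_FILE"] = config_path
print(os.environ["SF_CLIENT_CONFIG_FILE"])
```

In practice you would set the environment variable at the operating-system level for the Jobserver/Remote Engine service, not per process as shown here.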
The change was made by Snowflake itself on the JDBC side, which explains why the behavior changes regardless of the Studio version being used. These changes apply to all Snowflake jobs that use a JDBC driver version higher than 3.14.4; Studio (and the affiliated Snowflake components) upgrades the driver from 3.13.30 to 3.18.0.
This can also apply to Studio instances older than R2024-12 where users have manually upgraded to a newer driver (such as 3.30.0).
You are experiencing a sudden failure of connections to data sources such as MySQL, Qlik Cloud, and Microsoft SQL Server in Talend Data Catalog. These connections were previously working correctly. When you test the connection, you receive the following error message:
"An error occurred in the remote service [-1,1] - MIMB execution thread ( ) was not found"
This issue can occur even if you can locally access the data sources with other tools, such as DBeaver.
The error message indicates that the Talend Data Catalog application is unable to connect to the bridge server. This is because the Remote Harvest Agent required to access your local data sources is either missing, has been deleted, or is not properly configured. The default server running in the cloud does not have access to data sources behind your firewall.
To resolve this issue, you need to install and configure a new Remote Harvest Agent. This agent can be installed on the same server as your data source (e.g., MySQL) or on another machine that has access to it.
Here are the steps to follow:
Install the Remote Harvest Agent:
Configure the New Agent in Talend Data Catalog:
Use the New Agent for Harvesting:
These instructions are the same for both the cloud and on-premise versions of Talend Data Catalog.
Additional Information
For more detailed instructions, you can refer to the following documentation:
Note: Although these links pertain to version 8.0, the process remains identical for version 8.1.
When using a Microsoft Azure ADLS as a target in a Qlik Replicate task, the Full Load data are written to CSV, TEXT, or JSON files (depending on the endpoint settings). The Full Load Files are named using incremental counters e.g. LOAD00000001.csv, LOAD00000002.csv. This is the default behavior.
In some scenarios, you may want to use the table name as the file name rather than LOAD########.
This article describes how to rename the output files from LOAD######## to the <schemaName>_<tableName>__######## format while Qlik Replicate is running on a Windows platform.
It focuses on cloud target endpoint types (ADLS, S3, etc.). The example uses Microsoft Azure ADLS, which is remote cloud storage.
This customization is provided as is. Qlik Support cannot provide continued support for the solution. For assistance, reach out to Professional Services.
@Echo on
rem %1 = uploaded file name, %2 = table owner, %3 = table name (passed by Qlik Replicate)
rem Strip the path and extension from the uploaded file name, e.g. LOAD00000001
for %%a in (%1) do set "fn=%%~na"
echo %fn%
rem Extract the 8-digit counter that follows the LOAD prefix
set sn=%fn:~4,8%
echo %sn%
rem Move/rename the file in ADLS Gen2 via the Azure CLI
az storage fs file move -p %1 -f johwg --new-path johwg/demo/%2.%3/%2_%3__%sn%.csv --account-name mydemoadlsgen2johwg --account-key Wbq5bFUohzfg2sPe7YW6azySm24xp4UdrTnuDSbacMi44fkn4UqawlwZCcn2vdlm/2u70al/vsWF+ASttoClUg==
where johwg is the Container Name. account-name and account-key are used to connect to ADLS storage. The values are obfuscated in the above sample.
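The renaming logic performed by the batch file can be sketched in a few lines; this mirrors the %fn:~4,8% substring extraction (the owner/table values below are illustrative):

```python
import os

def renamed_target(filename: str, table_owner: str, table_name: str) -> str:
    """Mirror the batch script: LOAD00000001.csv -> <owner>_<table>__00000001.csv.

    The counter is the 8 digits after the 'LOAD' prefix, exactly like
    %fn:~4,8% in the batch file.
    """
    fn = os.path.splitext(os.path.basename(filename))[0]  # e.g. LOAD00000001
    sn = fn[4:12]                                          # e.g. 00000001
    return f"{table_owner}_{table_name}__{sn}.csv"

print(renamed_target("LOAD00000001.csv", "dbo", "Orders"))
# -> dbo_Orders__00000001.csv
```

The batch script then passes the computed name to the Azure CLI to move the file inside the container.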
General
Storage Type : Azure Data Lake Storage (ADLS) Gen2
Container : johwg
Target folder : /demo
Advanced
Post Upload Processing, choose "Run command after upload"
Command name : myrename3_adls.bat
Working directory: leave blank
Parameters : ${FILENAME} ${TABLE_OWNER} ${TABLE_NAME}
Qlik Replicate
Microsoft Azure ADLS target
Qlik Replicate and File target: How to rename output files LOAD######## to table name format on Windows
Qlik Replicate and File target: How to rename output files LOAD######## to table name format on Linux
The reload task fails with a message like this in the document log:
or
2017-11-10 10:16:48 0454 WHERE isnum(Sequence#)
2017-11-10 10:16:48 Error: Field 'Sequence#' not found
2017-11-10 10:16:48 Execution Failed
2017-11-10 10:16:48 Execution finished.
or
Sequence# field not found in 'lib://SHARE/Repository/Trace/SERVERNAME_Synchronization_Repository.txt'
The steps below apply whenever a field cannot be found. Affected fields include, but are not limited to, CounterName and ProxySessionID.
Environment:
QLIK-35804: Occasionally when Qlik Sense services stop, they do not fully write to the logs in the expected format.
Restart the Qlik Sense services
Modify the License and Operations Monitor apps so that they continue parsing logs even if a particular log fails to parse fully.
//begin ignoring errors parsing logs
set errormode = 0;

and

//end ignoring errors parsing logs
set errormode = 1;

This will look something like this:
Watch this space for when the feature has been successfully rolled out in your region.
This capability is being rolled out across regions over time:
With the introduction of shared automations, it will be possible to create, run, and manage automations in shared spaces.
Limit the execution of an automation to specific users.
Every automation has an owner. When an automation runs, it will always run using the automation connections configured by the owner. Any Qlik connectors that are used will use the owner's Qlik account. This guarantees that the execution happens as the owner intended it to happen.
The user who created the run, along with the automation's owner at run time, are both logged in the automation run history.
There are five options for how to run an automation:
Collaborate on an automation through duplication.
Automations are used to orchestrate various tasks; from Qlik use cases like reload task chaining, app versioning, or tenant management, to action-oriented use cases like updating opportunities in your CRM, managing supply chain operations, or managing warehouse inventories.
To prevent users from editing these live automations, we're putting forward a collaborate-through-duplication approach. This makes it impossible for non-owners to make changes to an automation that could negatively impact operations.
When a user duplicates an existing automation, they will become the owner of the duplicate. This means the new owner's Qlik account will be used for any Qlik connectors, so they must have sufficient permissions to access the resources used by the automation. They will also need permissions to use the automation connections required in any third-party blocks.
Automations can be duplicated through the context menu:
As it is not possible to display a preview of the automation blocks before duplication, please use the automation's description to provide a clear summary of the purpose of the automation:
The Automations Activity Centers have been expanded with information about the space in which an automation lives. The Run page now also tracks which user created a run.
Note: Triggered automation runs will be displayed as if the owner created them.
The Automations view in Administration Center now includes the Space field and filter.
The Runs view in Administration Center now includes the Executed by and Space at runtime fields and filters.
The Automations view in Automations Activity Center now includes Space field and filter.
Note: Users can configure which columns are displayed here.
The Runs view in the Automations Activity Center now includes the Space at runtime, Executed by, and Owner fields and filters.
In this view, you can see all runs from automations you own, as well as runs executed by other users. You can also see runs of other users' automations where you are the executor.
To see the full details of an automation run, go to Run History through the automation's context menu. This is also accessible to non-owners with sufficient permissions in the space.
The run history view will show the automation's runs across users, and the user who created the run is indicated by the Executed by field.
The Metrics tab in the Automations Activity Center has been deprecated in favor of the Automations Usage app, which gives a more detailed view of automation consumption.
Question
When using Heroku PostgreSQL as a source integration, how do you enter the Client Key that is required for Mutual TLS (mTLS) authentication? Since the database requires mTLS to connect, are there any settings available for it in Stitch?
This feature is not supported right now and therefore there are no settings for it in Qlik Stitch. It is considered a New Feature Request.
Please find it here:
IdeaID:#492366_Stitch mTLS and Heroku integration
On the right side, click the thumbs-up icon under "Request Actions" to let our product team know you are interested in seeing this feature placed on the product roadmap for consideration.