Qlik offers a wide range of channels to assist you in troubleshooting, answering frequently asked questions, and getting in touch with our technical experts. In this article, we guide you through all available avenues to secure your best possible experience.
For details on our terms and conditions, review the Qlik Support Policy.
Index:
We're happy to help! Here's a breakdown of resources for each type of need.
| Support | Professional Services (*) |
| Reactively fixes technical issues and answers narrowly defined questions. Handles administrative issues to keep the product up to date and functioning. | Proactively accelerates projects, reduces risk, and achieves optimal configurations. Delivers expert help for training, planning, implementation, and performance improvement. |
(*) reach out to your Account Manager or Customer Success Manager
Your first line of support: https://community.qlik.com/
Looking for content? Type your question into our global search bar:
Leverage the enhanced and continuously updated Knowledge Base to find solutions to your questions and best practice guides. Bookmark this page for quick access!
Subscribe to maximize your Qlik experience!
The Support Updates Blog
The Support Updates blog delivers important and useful Qlik Support information about end-of-product support, new service releases, and general support topics.
The Qlik Design Blog
The Design blog is all about the product and Qlik solutions, such as scripting, data modelling, visual design, extensions, best practices, and more!
The Product Innovation Blog
By reading the Product Innovation blog, you will learn about what's new across all of the products in our growing Qlik product portfolio.
Q&A with Qlik
Live sessions with Qlik Experts in which we focus on your questions.
Techspert Talks
Techspert Talks is a free monthly webinar series that facilitates knowledge sharing.
Technical Adoption Workshops
Our in-depth, hands-on workshops allow new Qlik Cloud Admins to build alongside Qlik Experts.
Qlik Fix
Qlik Fix is a series of short videos with helpful solutions for Qlik customers and partners.
Suggest an idea, and influence the next generation of Qlik features!
Search & Submit Ideas
Ideation Guidelines
Get the full value of the community.
Register a Qlik ID:
Incidents are supported through our Chat, by clicking Chat Now on any Support Page across Qlik Community.
To raise a new issue, all you need to do is chat with us. With this, we can:
Log in to manage and track your active cases in the Case Portal.
Please note: the easiest way to create a new case is via our chat (see above), which logs your case through a series of guided intake questions.
When creating a case, you will be prompted to enter problem type and issue level. Definitions shared below:
Select Account Related for issues with your account, licenses, downloads, or payment.
Select Product Related for technical issues with Qlik products and platforms.
If your issue is account related, you will be asked to select a Priority level:
Select Medium/Low if the system is accessible but there are some functional limitations that are not critical to daily operations.
Select High if there are significant impacts on normal work or performance.
Select Urgent if there are major impacts on business-critical work or performance.
If your issue is product related, you will be asked to select a Severity level:
Severity 1: Qlik production software is down or not available, but not because of scheduled maintenance and/or upgrades.
Severity 2: Major functionality is not working in accordance with the technical specifications in documentation or significant performance degradation is experienced so that critical business operations cannot be performed.
Severity 3: Any error that is not a Severity 1 or Severity 2 issue. For more information, visit our Qlik Support Policy.
If you require a support case escalation, you have two options:
When other Support Channels are down for maintenance, please contact us via phone for high severity production-down concerns.
A collection of useful links.
Qlik Cloud Status Page
Keep up to date with Qlik Cloud's status.
Support Policy
Review our Service Level Agreements and License Agreements.
Live Chat and Case Portal
Your one stop to contact us.
This article explains how to extract changes from a Change Store by using the Qlik Cloud Services connector in Qlik Automate and how to sync them to an Excel file.
While the example uses a Microsoft Excel file, it can easily be modified to create a CSV as well.
The article also includes:
Content
You will need the following:
Week start is included in the primary key because the purchasing process (making the changes) happens on a weekly basis.
Product Name is included in the primary key to make sure it is always returned when retrieving changes through the Get Current Changes From Change Store block in Qlik Automate.
Below is an example of the table in an app:
Optionally, you can use the app that is included in this article. Follow these steps to install the app and configure the Write Table:
Set the third one (Value) to the destinationFileName (E) variable.
Operator: equals
Search for the Right trim formula.
Configure the Character to trim parameter to a single comma.
Type a single square bracket after the field mapping in the Rows input field:
The automation is now configured and can be run manually. Ideally, however, a user runs it from within the Qlik Sense app whenever they finish creating orders through the Write Table.
This article will only cover the button’s configuration in a sheet. A step-by-step guide on configuring the button object to run automations is available in How to run an automation with custom parameters through the Qlik Sense button.
The Copy File block will fail if an Excel file with the same name already exists. Depending on the use case, that might be the right behavior, or you might want to overwrite the file.
The overwrite process explained below will delete the existing file and then create a new file.
Add a Condition block to the automation and configure it to evaluate the output from the Check If File Exists block.
This block will return a Boolean (true or false) result. If it is true, the file exists.
Configure the Condition block to evaluate that output using the Boolean 'is true' operator:
Qlik Automate can also be used to share the purchase order with your purchasing team. This can be built in the same automation or in a separate automation. Below are the steps to add this to the same automation.
Tip! Update the button label to make it clear to users of your app that clicking it will also send the purchase order.
As an alternative, it is also possible to add these blocks to a new automation that is triggered from a second button.
This template was updated on December 4th, 2025 to replace the original installer and API key rotator with a new, unified deployer automation. Please disable or delete any existing installers, and create a new automation, picking the Qlik Cloud monitoring app deployer template from the App installers category.
Installing, upgrading, and managing the Qlik Cloud Monitoring Apps has just gotten a whole lot easier! With a single out-of-the-box Qlik Automate template, you can now install and update the apps on a schedule with a set-and-forget installer. The template can also handle the API key rotation required for the data connection, ensuring the data connection is always operational.
Some monitoring apps are designed for specific Qlik Cloud subscription types. Refer to the compatibility matrix within the Qlik Cloud Monitoring Apps repository.
This automation template is a set-and-forget template for managing the Qlik Cloud Monitoring Applications, including but not limited to the App Analyzer, Entitlement Analyzer, Reload Analyzer, and Access Evaluator applications. Leverage this template to quickly and easily install and update these applications, or a subset of them, with all their dependencies. The applications themselves are community-supported; they are provided through Qlik's Open-Source Software (OSS) GitHub and are therefore subject to Qlik's open-source guidelines and policies.
For more information, refer to the GitHub repository.
Update just the configuration area to define how the automation runs, then test run, and set it on a weekly or monthly schedule as desired.
Configure the run mode of the template using 7 variable blocks
Users should review the following variables:
If the monitoring applications have been installed manually (i.e., not through this automation), they will not be detected as existing, and the automation will install new copies side by side. Any subsequent executions of the automation will detect the newly installed monitoring applications and check their versions. This is because the applications are tagged with "QCMA - {appName}" and "QCMA - {version}" during installation through the automation; manually installed applications will not have these tags and therefore will not be detected.
Q: Can I re-run the installer to check whether any of the monitoring applications can be upgraded to a later version?
A: Yes. The automation will update any managed apps that don't match the repository's manifest version.
Q: What if multiple people install monitoring applications in different spaces?
A: The template scopes the application's installation process to a managed space. It will scope the API key name to `QCMA – {spaceId}` of that managed space. This allows the template to install/update the monitoring applications across spaces and across users. If one user installs an application to “Space A” and then another user installs a different monitoring application to “Space A”, the template will see that a data connection and associated API key (in this case from another user) exists for that space already. It will install the application leveraging those pre-existing assets.
Q: What if a new monitoring application is released? Will the template provide the ability to install that application as well?
A: Yes, but an update of the template from the template picker will be required, since the applications are hard-coded into the template. Once a new version is available, the automation will begin to fail with a notification that an update is needed.
Q: I have updated my application, but I noticed that it did not preserve the history. Why is that?
A: Each upgrade may generate a new set of QVDs if the data models for the applications have changed due to bug fixes, updates, new features, etc. The history is preserved in the prior versions of the application’s QVDs, so the data is never deleted and can be loaded into the older version.
A replication task fails with a start_job_timeout error, and the task logs show the following messages:
[SOURCE_UNLOAD ]E: An FATAL_ERROR error occurred unloading dataset: .0FI_ACDOCA_20 (custom_endpoint_util.c:1155)
[SOURCE_UNLOAD ]E: Timout: exceeded the Start Job Timeout limit of 2400 sec. [1024720] (custom_endpoint_unload.c:258)
[SOURCE_UNLOAD ]E: Failed during unload [1024720] (custom_endpoint_unload.c:442)
We recommend running the extractor directly in RSA3 in SAP to measure how long it takes to start.
Based on the measured time, adjust the value of the internal parameter start_job_timeout.
The value should be at least 20% higher than the time SAP takes to start.
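The 20% margin rule above can be sketched as a quick calculation (the measured time below is a placeholder; substitute your own RSA3 measurement):

```shell
# Sketch: derive a start_job_timeout value with a 20% safety margin
# from the time SAP took to start the extractor in RSA3.
# measured_seconds is an example value, not from a real system.
measured_seconds=2400
recommended=$(( measured_seconds * 120 / 100 ))
echo "start_job_timeout=$recommended seconds"
```

With a measured start time of 2400 seconds, this suggests setting start_job_timeout to 2880 seconds.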
Reviewing the endpoint server logs (/data/endpoint_server/data/logs directory) reveals that the job timeout is configured as 2400 seconds: [sourceunload ] [INFO ] [] start_job_timeout=2400
The error occurs because the job did not start within the configured timeout.
When the task attempted to start the SAP Extractor, SAP did not return the "start data extraction" response within 2400 seconds (40 minutes), causing the timeout.
This may happen for extractors with large datasets, such as 0FI_ACDOCA_20, where initialization on the SAP side can take a long time.
The following issue is observed in a replication from an SAP HANA source to a Snowflake target:
In the source table, all columns are defined as NOT NULL with default values.
However, in the replication project, specifically during Change Data Capture, null values are sent to the CT table created as part of Store Changes. This is observed when DELETE operations are performed in the source.
In this example, the Register task of the Pipeline Project reads data from the Replication Task target (the data available in Snowflake storage). When the Storage task is run, it fails with a NULL result in a non-nullable column.
When a DELETE operation is performed in SAP HANA, it removes the entire row from the table and stores only the Primary Key values in the transaction logs.
Operation type = DELETE
Default values are not available and not applied.
As a result, only the primary key columns have values, and the remaining columns contain null values in the Snowflake target (__ct table).
To overcome this issue, please try the following workaround:
In the Replicate project, apply a Global Rule Transformation to handle the null values being populated in Snowflake.
This is done through Add Transformation > Replace Column Value
In the Transformation scope step:
Prepare and run the job
Go to Snowflake and check the __CT table entry to verify that there are no more null values for non-primary key columns
In the Pipeline Project, use the Register task to load data from the Replication Task
A Qlik Replicate task using the SAP OData source endpoint fails with the error:
Error: Http Connection failed with status 500 Internal Server Error
Change the SAP OData endpoint by setting Max records per request (records) to 25000.
SUPPORT-7127
The Legacy Support portal has been discontinued as per Decommissioning the legacy support portal (support.qlik.com), effective January 23rd, 2026.
To get an overview of your legacy serial number, contact Qlik Support by starting a chat.
Qlik Talend Data Stewardship R2025-02 keeps loading and does not open in Talend Management Console.
Apply the latest patch, Patch_20260105_TPS-6013_v2-8.0.1-.zip, or a later version.
##sysctl
sudo vi /etc/sysctl.conf
#add the following lines
net.ipv4.tcp_keepalive_time=200
net.ipv4.tcp_keepalive_intvl=75
net.ipv4.tcp_keepalive_probes=5
net.ipv4.tcp_retries2=5
sudo sysctl -p #activate
Temporary change (without rebooting):
sudo ip link set dev eth0 mtu 1280
To persist across reboots:
sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0
MTU=1280
network_mtu.html | docs.aws.amazon.com
These sysctl settings are primarily used to make your server more aggressive at detecting and closing "dead" or "hung" network connections. By default, Linux settings are very conservative, which can lead to resources being tied up by connections that are no longer active.
Here is a breakdown of what these specific changes do and why they are beneficial.
The first three parameters control how the system checks if a connection is still alive when no data is being sent (the "idle" state).
| net.ipv4.tcp_keepalive_time=200 | This triggers the first "keepalive" probe after 200 seconds of inactivity. The Linux default is 7,200 seconds (2 hours). |
| net.ipv4.tcp_keepalive_intvl=75 | Once probing starts, this sends subsequent probes every 75 seconds. The default is 75 seconds. |
| net.ipv4.tcp_keepalive_probes=5 | This determines how many probes to send before giving up and closing the connection. The default is 9. |
The Benefit: In a standard Linux setup, it can take over 2 hours to realize a peer has crashed. With these settings, a dead connection will be detected and cleared in just under 10 minutes (200 + (75 × 5) = 575 seconds). This prevents "ghost" connections from filling up your connection tables and wasting memory.
| net.ipv4.tcp_retries2=5 | This controls how many times the system retransmits a data packet that hasn't been acknowledged before killing the connection. |
The Benefit: The default value is usually 15, which can lead to a connection hanging for 13 to 30 minutes during a network partition or server failure because the "backoff" timer doubles with each retry. By dropping this to 5, the connection will "fail fast" (usually within a few minutes).
This is excellent for high-availability systems where you want the application to realize there is a network issue quickly so it can failover to a backup or return an error to the user immediately rather than leaving them in a loading state.
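The detection-time arithmetic above can be verified with a quick sketch (the values mirror the sysctl settings in this article):

```shell
# Worst-case time to detect a dead idle peer:
# first probe fires after tcp_keepalive_time seconds, then
# tcp_keepalive_probes probes are sent tcp_keepalive_intvl seconds apart.
keepalive_time=200
keepalive_intvl=75
keepalive_probes=5
total=$(( keepalive_time + keepalive_intvl * keepalive_probes ))
echo "${total} seconds"   # 575 seconds, just under 10 minutes
```

Plugging in the Linux defaults (7200 / 75 / 9) instead gives 7875 seconds, roughly 2.2 hours.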
| Parameter | Default (Approx) | Your Value (Impact) |
| Detection Start | ~2 Hours | ~3.3 Minutes (much faster initial check) |
| Total Cleanup Time | ~2.2 Hours | ~10 Minutes (frees up resources significantly faster) |
| Data Timeout | ~15+ Minutes | ~2-3 Minutes (stops "hanging" on broken paths) |
Microservices: To ensure fast failover and prevent a "cascade" of waiting services in a distributed system.
These changes are not permanent until you add them to /etc/sysctl.conf. Running the command with -w only applies them until the next reboot.
There are two major factors contributing to this issue:
ERROR [http-nio-19999-exec-2] g.c.s.Oauth2RestClientRequestInterceptor : #1# Message: '[invalid_grant] ', CauseMessage: '[invalid_grant] ', LocalizedMessage: '[invalid_grant] '
To point Qlik Replicate at an AG secondary replica instead of the primary, follow these steps:
Limitations include:
The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
If you want to submit a new idea or improvement request for Qlik's products, our Ideas section in the Qlik Community is open to all registered customers. The Ideation portal covers ideas and suggestions for all Qlik Products, such as Qlik Cloud Analytics, Qlik Sense Enterprise on Windows, Qlik Talend, Qlik Stitch, and more.
Content:
Ideas are qualified and prioritized based on their value to improving and enhancing the Qlik product. For a successful request, please consider including a strong business case on why you think the change would be beneficial, including:
A valid Qlik ID (log in to the Community) is required.
Before you get started, review Ideation Guidelines: How to Submit an Idea for submission guidelines and eligibility information.
Ideas and Improvement requests are reviewed by our product teams. While we strive to provide answers, we cannot guarantee a specific response time.
Ideation on Community
Ideation Platform Updates
Ideation Guidelines: How to Submit an Idea
When you run a Job from Talend Studio (any version) to a remote engine (all versions), the metric (flow) information, like n Row(s), does not transfer back to Talend Studio.
The remote run/debug from Studio feature uses direct TCP communication on JobServer ports. If you do not open port 8558, the information will not transfer back to Talend Studio.
To check whether port 8558 is open, look in the <RemoteEngineInstallationDirectory>/etc/org.talend.remote.jobserver.server.cfg configuration file.
# used to launch talend commands
org.talend.remote.jobserver.server.TalendJobServer.COMMAND_SERVER_PORT=8003
# used for the file transfer
org.talend.remote.jobserver.server.TalendJobServer.FILE_SERVER_PORT=8004
# used for monitoring the servers state
org.talend.remote.jobserver.server.TalendJobServer.MONITORING_PORT=8891
# used for the execution of process messages publisher, 8558 by default
org.talend.remote.jobserver.server.TalendJobServer.PROCESS_MESSAGE_PORT=8558
Ensure port 8558 is open. This port transfers the state and trace of runs from the Talend Remote Engine for Jobs executed remotely.
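As a quick sanity check, the configured port can be read straight out of the cfg file. This sketch parses an inline copy of the relevant line to stay self-contained; in practice, grep your real org.talend.remote.jobserver.server.cfg file instead:

```shell
# Sketch: extract the process-message port from a jobserver config line.
# The config line is inlined here for illustration; normally you would
# grep the .cfg file under <RemoteEngineInstallationDirectory>/etc.
cfg_line='org.talend.remote.jobserver.server.TalendJobServer.PROCESS_MESSAGE_PORT=8558'
port="${cfg_line##*=}"   # strip everything up to the last '='
echo "Process message port: $port"   # Process message port: 8558
```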
Security of Qlik Sense Enterprise on Windows can be approached in the discrete areas below. Each area provides different options for increasing security in a deployment, thereby mitigating vulnerabilities and protecting against attackers.
Content:
Be aware that a high level of server hardening can lead to failure in your deployment. Always have a backup to restore from in case your configuration leads to irreversible failure.
Qlik Sense Enterprise on Windows supports multiple authentication solutions:
Qlik cannot specify which authentication method is appropriate for each deployment. It is advisable to review the currently supported alternatives within your organization and/or with your Identity Provider (IdP) to implement the most suitable solution for your use case.
Qlik Sense Enterprise on Windows provides two levels of native authorization in the product.
Attribute-based access control (ABAC), which is configured through Qlik Sense security rules. This article does not go in depth on how best to implement security rules for your requirements, but it is highly recommended to think of your users in terms of the capabilities you intend to provide them. For example, defining different roles and capabilities, as shown in the image below, allows a security rule framework to be designed and implemented. This can be done either by yourself, referencing Qlik Sense Help for Administrators and available assets, or by engaging a Qlik Consultant or Qlik Partner.
Row-level data reduction, which is configured through Section Access at the Qlik Sense app level. This article does not go in depth on Section Access implementation, but with this reduction a single file can hold the data for a number of users or user groups. Qlik Sense uses the information in the section access for authentication and authorization, and dynamically reduces the data so that users only see their own data.
Qlik Sense Enterprise on Windows inherits the protocols, cipher suites, key exchanges, and other security hardening enabled on the Windows Server running Qlik Sense.
Windows Server has many protocols enabled by default; however, protocols, ciphers, hashes, and key exchanges that are considered deprecated or insufficiently secure should be disabled. There are many ways of doing this, and Windows administrators and security experts should be consulted so that local policies are accurately applied. For a simple, clear overview, IIS Crypto 3.0 can be a good tool for evaluating the current Windows configuration and applying changes.
Keep in mind that today's "best practice" might not be recommended in the near future; what was considered "safe" a while ago is not necessarily considered so today. For this reason, it is important to regularly scan servers for potential vulnerabilities and revisit configurations as required.
The Windows Server needs to be restarted for these settings changes to take effect. It is also important to ensure that all components running on the server still operate as expected after hardening is applied, for example, older non-Qlik software might not be compliant with the latest options and standards.
Firewalls should typically be closed, with only the required ports opened for their intended purposes.
See Qlik Sense Enterprise on Windows ports overview for details on the required ports based on the deployed architecture.
For most organizations, local administrator rights allow for an easier deployment, but Qlik Sense Enterprise on Windows does not require local administrator rights to function. Running without them can be an attractive option in some organizations, but it requires additional configuration of bootstrap mode, as described in Qlik Sense Enterprise on Windows Services.
For a brief overview of the rights needed by a Qlik Sense Enterprise service account:
Qlik Sense Enterprise on Windows does not officially support Group Managed Service Accounts (gMSA), but it can operate using one. The initial barrier is that the installer requires a service account and password to be entered during installation. A domain or local account can be substituted for the install stages only, then swapped out in the Windows Services applet (services.msc) after installation. Some functionality may require workarounds (e.g., a User Directory Connection to Active Directory).
Qlik Sense Enterprise on Windows requires antivirus scanning exceptions to avoid potential disk I/O conflicts. Refer to Qlik Sense Folder And Files To Exclude From AntiVirus Scanning for more details.
Qlik Sense Enterprise on Windows can run with Federal Information Processing Standards (FIPS) enabled on the Windows Server. This does require a few adjustments of configuration files due to Qlik using non-FIPS compliant algorithms for minor tasks like hash checks. See Running Qlik Sense on Windows systems with FIPS compliance enabled for more details on Qlik Sense and FIPS.
Qlik Sense Enterprise on Windows uses PostgreSQL to store metadata relating to a Qlik Sense site. In multi-node sites, or sites where PostgreSQL is isolated from Qlik Sense Enterprise on Windows, additional security can be applied:
The Qlik Sense Proxy service bundled with Qlik Sense Enterprise on Windows is simply a web service. This means applying general best-practice guidance, but in the context of Qlik Sense, as described below.
Qlik Sense Enterprise on Windows acts as a Certificate Authority (CA) during initial installation and signs a certificate that is applied on all encrypted traffic between Qlik Sense services. The same Qlik Sense signed certificate is applied as default certificate also for incoming connections from users accessing Qlik Sense Hub and QMC. This default certificate is not intended for production use, unless user access to Qlik Sense comes through a network load balancer or reverse proxy that trusts the Qlik Sense certificate. For direct user access to Qlik Sense Proxy, a fully trusted certificate can typically be acquired from your local IT and then applied on the Qlik Sense Proxy service.
As of July 2019, Qlik Sense Enterprise on Windows supports SHA-1 and SHA-2 certificates. If SHA-384 or SHA-512 certificates are needed, a network load balancer or reverse proxy that offloads TLS can be configured in front of Qlik Sense.
There are numerous HTTP response headers that can be used to help secure a server. Below are a couple of the most common ones, but as always it is recommended to consult local IT and web security experts on current recommendations.
Any additional HTTP response header values can be configured in the Qlik Sense Virtual Proxy settings under Additional response headers, as shown in the image below and described in Qlik Sense for Administrators: Virtual Proxies. It is recommended to trial any header changes in a new virtual proxy, as a poor configuration may accidentally lock you out of Qlik Sense.
Policy is a placeholder for your policy of choice and cannot be used as a value. See Writing a Policy (Mozilla) for examples.
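For illustration, here are two commonly used hardening headers that could be entered under Additional response headers. The values are examples only, not Qlik recommendations; consult your local IT and security team before applying them:

```
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubDomains
```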
The following error (C) is shown after successfully creating a Jira connection string and selecting a Project/Key (B) from Select data to load (A):
Failed on attempt 1 to GET. (The remote server returned an error; (404).)
The error occurs when connecting to JIRA server, but not to JIRA Cloud.
Tick the Use legacy search API checkbox. This is switched off by default.
This option is available since:
This is required when using both Issues and CustomFieldsForIssues tables.
Connections to JIRA Server use the legacy API.
SUPPORT-3600 and SUPPORT-7241
GOAWAY is received regularly in the karaf.log.
Full error example:
| WARN | pool-31-thread-1 | PhaseInterceptorChain | 165 - org.apache.cxf.cxf-core - 3.6.2 | | Interceptor for {http://pairing.rt.ipaas.talend.org/}PairingService has thrown exception, unwinding now
Caused by: javax.ws.rs.ProcessingException: Problem with writing the data, class org.talend.ipaas.rt.engine.model.HeartbeatInfo, ContentType: application/json
at org.apache.cxf.jaxrs.client.AbstractClient.reportMessageHandlerProblem(AbstractClient.java:853) ~[bundleFile:3.6.2]
... 17 more
Caused by: java.io.IOException: IOException invoking https://pair.eu.cloud.talend.com/v2/engine/3787d7b2-2da7-43be-b6fc-39adab83d51a/heartbeat: GOAWAY receive
The issue has been mitigated in version 2.13.11 of the Remote Engine. See Talend Remote Engine v2.13.11:
| TMC-5486 | The engine has been enhanced to retry heartbeat connections when receiving a GOAWAY message. This improves its overall stability. |
To mitigate the issue, use Java 17 Update 17, released on the 21st of October 2025.
This update includes the fix for the bug JDK-8301255.
For example: https://www.azul.com/downloads/?version=java-17-lts&package=jdk#zulu
This is caused by a defect in JDK 17 (JDK-8301255). See Http2Connection may send too many GOAWAY frames.
This article outlines all features available in the Qlik Customer Portal's My Account section, where you can view your personal information, team members, trials, and contacts at Qlik.
The Customer Support Portal comes with four roles:
| Feature | Customer Admin | Business Admin | Support Admin | Default User |
| Personal Information | Yes | Yes | Yes | Yes |
| Trials | Yes | Yes | Yes | Yes |
| My Team | Yes | Yes | Yes | Yes |
| Qlik Contacts | Yes | Yes | Yes | No |
The Qlik Customer Portal Account section lets you access your Personal Information, your Trials, Team members, and Qlik Contacts.
Head there to view your personal profile details and to review your assigned roles.
The Role field determines access to other features in the Customer Portal. To change your role, find a colleague with the Customer Admin role in the My Team section; they can change your role if needed.
See active trials linked to your account and track evaluation periods.
This is where you can view and manage your colleagues registered in the Qlik Customer Portal, including their assigned roles.
A Customer Admin can update role assignments:
This is where you can view all your contacts with Qlik, such as:
The following error is displayed if you do not have the correct role assigned to access a specific feature:
You are not authorized to view this detail. Please contact a colleague with the Customer Admin role (see the "My Team" tab) using your company's internal channels. Communication is not possible within the Qlik Customer Portal environment.
Speak to a team member with the Customer Admin role to have your access updated.
After distributing the Consumption Report app from Qlik Cloud Administration > Settings, scheduled reloads of the app fail with the following error:
Error: $(MUST_INCLUDE= [lib://snowflake_external_share:DataFiles/Capacity_Usage_Script_PROD.txt] cannot access the local file system in current script mode. Try including with LIB path.
The Consumption Report app isn't meant to be reloaded. The app should be distributed from Qlik Cloud Administration > Settings each day. Refer to Distributing detailed consumption reports for details:
Redistribute the app to obtain the most recent data. Apps stored on your tenant exist as separate instances and are not replaced by newer ones.
On the Talend side, refer to Distributing Data Capacity Reporting App for Talend Management Console for details on how to set up capacity reporting.
To automatically redistribute the app, see Automate deployment of the Capacity consumption app with Qlik Automate.
The Consumption Report app is meant to be distributed from Qlik Cloud Administration > Settings, not updated by a scheduled reload of the app.
When SAP HANA is used as a source endpoint in Qlik Replicate, errors may occur when processing null values for the SAP HANA 'Application User' field:
[SOURCE_CAPTURE ]E: Bad event rowid and operation [1020454] (saphana_trigger_based_cdc_log.c:534)
[SOURCE_CAPTURE ]E: Error handling events for db table id 525063, in the interval (552426485 - 552426767) [1020454] (saphana_trigger_based_cdc_log.c:1725)
This behavior will be fixed in the Qlik Replicate May 2025 SP04 release. If you need access to a patch before the expected release, contact Qlik Support.
Once fixed, the endpoint identifies the ‘null’ values and handles the data records with a missing “Application User” value.
Functionality added in Qlik Replicate 2025.5.0 captures the 'Application User' value to populate the USER_ID information in the data records. In particular situations, SAP HANA does not provide an 'Application User' value and instead provides a null value, which causes an inconsistency when parsing the fetched data and therefore throws an error as a result of the missing value.
SUPPORT-5926
Executing tasks or modifying tasks (changing the owner, renaming an app) in the Qlik Sense Management Console and refreshing the page does not show the correct task status. The issue affects the Content Admin and Deployment Admin roles.
The behaviour began after an upgrade of Qlik Sense Enterprise on Windows.
This issue can be mitigated beginning with August 2021 by enabling the QMCCachingSupport Security Rule.
Enable QmcTaskTableCacheDisabled.
To do so:
Upgrade to the latest Service Release and disable the caching functionality:
To do so:
NOTE: Make sure to use lower case when setting values to true or false, as the capabilities.json file is case sensitive.
Should the issue persist after applying the workaround/fix, contact Qlik Support.
A Qlik Talend Data Integration task was unable to resume, failing with the following error:
[SORTER_STORAGE ]E: The Transaction Storage Swap cannot write Event (transaction_storage.c:3321)
[DATA_STRUCTURE ]E: SQLite general error. Code <14>, Message <unable to open database file>. [1000505] (at_sqlite.c:525)
[DATA_STRUCTURE ]E: SQLite general error. Code <14>, Message <unable to open database file>. [1000506] (at_sqlite.c:475)
Freeing up disk space or increasing the data directory size usually solves the issue.
The error indicates that Qlik Talend Data Integration was no longer able to access its internal SQLite database (used for the sorter, metadata, and task state management).
When the disk space in the Sorter directory becomes full, SQLite can no longer write to the database, which invariably results in Code 14.
After checking the disk space on the Linux server, it was found that usage had indeed reached 100%, leaving no free space available, which caused the issue.
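A quick check like the following can confirm the condition before resuming the task. The directory below is a stand-in; point it at the volume that holds your task's data/sorter directory:

```shell
# Sketch: report how full the filesystem holding a directory is.
# /tmp is an example path; substitute your task's data directory.
dir=/tmp
# df --output=pcent prints the use percentage (GNU coreutils);
# tail/tr strip the header and the '%' sign, leaving a bare number.
used=$(df --output=pcent "$dir" | tail -1 | tr -dc '0-9')
echo "$dir is ${used}% full"
if [ "$used" -ge 95 ]; then
  echo "WARNING: free up space or enlarge the volume before resuming"
fi
```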