Updated February 4, 2026: the Developer role and API keys toggle have been removed as announced.
The following two items were deprecated in June 2025 and removed in February 2026:
This can lead to Error: 401 Authorization Required when executing third party API calls.
To replace the deprecated built-in role, migrate your users away from the Developer role to a Custom Role with the required permissions (Manage API Keys).
To create and assign a replacement custom role:
For additional reading on the Manage API Keys permission (set to Not allowed by default), see Permissions in User Default and custom roles | Permission settings — Features and actions.
The Developer role and Enable API keys toggle were removed in February 2026.
Once the Developer role has been removed, users who have not been updated to use the “Manage API keys” = Allow permission will:
API keys are not deleted from Qlik Cloud; they will automatically be re-enabled once a user has been assigned the required Manage API Keys permission.
To resolve this, a Tenant Administrator needs to act as outlined in What action do I need to take?
The deprecation notice was communicated in an Administration announcement and documented on our What's New in Qlik Cloud feed. See Developer role and API key toggle deprecated | 6/16/2025 for details.
The following products were affected:
When using IBM DB2 for iSeries as a source in Qlik Replicate, the task may report a warning if journal receiver numbers are not continuous.
A typical warning message looks like:
[SOURCE_CAPTURE ]W: Journal entry sequence '2026' was read from journal receiver 'APSUPDB.QSQJRN0118'. The previous entry was read from receiver 'APSUPDB.QSQJRN0116'. Check if a receiver has been detached. (db2i_endpoint_capture.c:1836)
Qlik Replicate reports this condition as a warning only. There is no impact on task execution or data integrity:
This warning can be safely ignored unless accompanied by other errors or abnormal task behavior.
On the IBM DB2 for iSeries side, 'Check if a receiver has been detached' can occur if, for example, a process is holding or locking the journal. This temporarily prevents the system from creating or attaching the next journal receiver. In such cases, a receiver number may be allocated but never successfully created, resulting in a gap in the receiver numbering.
This behavior is normal on IBM i and does not indicate a defect. The system assigns journal receiver numbers, but sequential continuity is not guaranteed. IBM i only guarantees that receiver numbers increase monotonically, not that every number will exist.
00420963, 00423959
At Qlik Connect 2026 I hosted a session called "Fast 15" with my top 10 visualization tips. Here's the app I used with all tips including test data.
Tip titles, more details in app:
I want to emphasize that many of the tips were discovered by others; I have tried to credit the original author wherever possible.
If you liked it, here's more in the same style:
Thanks,
Patric
This article describes the diagnosis and resolution of a Qlik Data Gateway (repagent) service failure when the gateway is installed in a non-default mount point (the default is /opt; in this case it was installed with the --prefix keyword):
QLIK_CUSTOMER_AGREEMENT_ACCEPT=yes rpm -ivh qlik-data-gateway-data-movement.rpm --prefix /data
The service entered a failed state after exhausting its systemd restart limit, caused by a stale process and PID file left over from a previous crash. This article covers root cause analysis, step-by-step resolution, and preventative measures.
The following symptoms were observed when the issue was reported:
From the systemctl status output:
Active: failed (Result: exit-code) since Wed 2026-04-15 21:16:53 BST
Main PID: 3220 (code=exited, status=1/FAILURE)
repagent.service: Start request repeated too quickly.
To restore the service, first verify the SELinux config is set to SELINUX=disabled and SELINUXTYPE=minimum:
vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=minimum
If not, make the changes as shown above and reboot the machine. Then continue as the root user:
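The remaining recovery steps can be sketched as follows, based on the root cause analysis below. This is a sketch, not an official procedure: the service name (repagent), PID (4082), and prefix (/data) are from this incident, and the PID-file location is an assumption you should verify with find before removing anything.

```shell
getenforce                      # expect "Disabled" after the SELinux change
ps -ef | grep -i [r]epagent     # identify the orphaned process from the crash
kill <orphan-pid>               # e.g. 4082 in this incident
find /data -name '*.pid'        # locate the stale PID file under the prefix
rm -f <stale-pid-file>          # remove it so systemd can start cleanly
systemctl reset-failed repagent.service   # clear the restart-limit counter
systemctl start repagent.service
systemctl status repagent.service
```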
Confirmed resolution output:
Active: active (running) since the current date
Main PID: 10391 (agentctl)
Tasks: 93
There are two root causes.
When the repagent service originally crashed, it left behind an orphan process (PID 4082) that continued running independently of systemd. When systemd attempted to restart the service, it detected the stale PID file pointing to PID 4082, which it refused to adopt because it was not owned by root. This caused every restart attempt to fail with a 'protocol' error.
| Step | Event | Result |
|------|-------|--------|
| 1 | repagent crashed | PID 4082 left as orphan |
| 2 | systemd attempted restart | Detected stale PID 4082 |
| 3 | systemd refused to adopt PID | Failed with 'protocol' error |
| 4 | Restart counter hit limit (5) | Service permanently stopped |
The journal also showed a secondary warning related to the System.Data.SQLite native library pre-loader failing to check the code base for assemblies loaded from a single-file bundle. This is a known .NET runtime compatibility warning and does not directly cause the service failure, but may indicate a .NET version mismatch worth monitoring.
System.Data.SQLite: Native library pre-loader failed to check code base
System.NotSupportedException: CodeBase is not supported on assemblies
loaded from a single-file bundle.
After updating the license, connecting to the Talend Administration Center fails with:
You are using # DI users, but your license allows only #, please contact your account manager.
Two solutions exist.
License downgrade behavior: When downgrading licenses (for example, from DQ seats to DI seats), user configurations must also be updated.
- Users previously assigned as DQ must be manually changed to DI
- Failure to update the user type may cause access issues or license mismatches
The issue occurs due to a mismatch between the number or type of users defined in the system and those allowed by the updated license.
In Qlik Talend Administration Center, users are assigned different license seat types depending on their access rights and functional domain.
The main user/license types are:
For more information regarding license types and features for users, see What domains can you work in depending on your user type and license.
A reload task in QlikView fails with the following error in the task log:
Error The sourcedocument failed to save.. Exception=System.Runtime.InteropServices.COMException (0x80004005): Error HRESULT E_FAIL has been returned from a call to a COM component. || at QlikView.Doc.SaveAs(String _FileName, Int16 _Format) || at QVBWrapper.Document.Save(ILogBucket i_LogBucket, String i_SaveAsFileName)
This may be caused by an active virus scan locking the relevant files, or an invalid alert configured on the task.
Ensure that QlikView file locations (.qvw, .qvd, etc.) are excluded from active virus scans. See QlikView Folder And Files To Exclude From Anti-Virus Scanning.
Remove or edit the alerts configured on the document.
Alerts can be configured in the Distribution task. To verify:
For additional troubleshooting steps and root causes, see QlikView: Checklist For Reload/Publisher Task Failure.
After completing the initial database configuration steps, the Go to db config page link is removed from the Qlik Talend Administration Center login page. See Updating parameters after the configuration is finalized for details.
Can the steps be repeated and the link re-enabled?
The Database configuration parameter named configuration.dbadmin.enable is set to false once you finalize the configuration on the Qlik Talend Administration login page.
To update the parameters configured on the Configuration and Database configuration pages, you must edit the configuration.properties file in the <TalendAdministrationCenterInstallationDirectory>/apache-tomcat/webapps/org.talend.administrator/WEB-INF/classes folder:
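The steps above amount to flipping that flag back in the file. A minimal sketch, demonstrated on a scratch copy (in practice, stop Tomcat first and edit the configuration.properties file in the WEB-INF/classes folder named above):

```shell
# Scratch copy standing in for the real configuration.properties
printf 'configuration.dbadmin.enable=false\n' > /tmp/configuration.properties

# Re-enable the database configuration link
sed -i 's/^configuration.dbadmin.enable=false/configuration.dbadmin.enable=true/' /tmp/configuration.properties

grep '^configuration.dbadmin.enable' /tmp/configuration.properties
```

After editing the real file, restart Tomcat so the change takes effect.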
Log in; the link will be accessible again.
The App capability enabling Insight Advisor in hub is disabled without user action.
The issue is caused by SUPPORT-7049, which has been resolved in the following versions and any subsequent releases:
Upgrade to the latest available version and then manually re-enable Insight Advisor:
SUPPORT-7049
Importing a Qlik Replicate task with a name longer than 32 characters leads to the import failing with the following error:
SYS-E-HTTPFAIL, Failed to import task
Qlik Replicate has a task name limit of 32 characters, as documented in Qlik Replicate | Adding tasks. This validation can be overridden when importing a task starting with Qlik Replicate version 2025.11 SP04.
To override the validation, upgrade to 2025.11 SP04 or later and add the enforceTaskSettingsValidation feature flag:
Qlik Replicate can now import tasks with names longer than 32 characters.
The Qlik Sense on Windows Content Monitor is intended for Qlik Administrators. Its purpose is to monitor and analyze your Qlik Sense content, including app usage, resource consumption, and data sources. This helps with governance, optimization, and identifying unused content.
All technical details can be found in the two attached documents. These are your primary resources.
What it covers: A detailed, sheet-by-sheet explanation of the entire app. It describes what every KPI, chart, and table means for sections like "Weekly Summary," "Snapshot," "Applications," "Sessions," "Task Executions," "File Inventory," and "Infrastructure."
Use Case:
Guiding a customer on how to read and interpret the data.
Answering customer questions like, "What does the 'Session Concurrency' sheet show?" or "How do I read the 'File Inventory' sheet?"
What it covers: This is the primary guide for setup and reload issues. It contains:
Detailed definitions for all script parameters (e.g., vCentralNodeHostName, vVirtualProxyPrefix, vServerLogFolder).
Performance tuning options (e.g., vFileScanMaxDuration, vAppRetrievalLoop, exclusion lists).
A "Trial Mode" section for troubleshooting initial reload failures.
A "Troubleshooting" section.
Use Case:
New installations.
Troubleshooting.
Tuning performance for long reloads.
See the attached Qlik Sense Content Monitor Configuration Guide.
This article provides answers to the most frequent questions asked about Qlik MCP.
For the more general Qlik Answers FAQ, see Qlik Answers Agentic Analytics FAQ.
The Qlik Model Context Protocol (MCP) server integrates Qlik Cloud into your LLM workflow, allowing you to work with Qlik Cloud without leaving your LLM. Connection issues are often tied to misconfiguration.
Qlik MCP does not support clients with Client Secrets.
In a case where you do not get the response you expect based on the sources, or you receive an error:
Has your app been prepared for Qlik Answers?
For now, Qlik MCP will continue to be priced based on current models for the number of questions asked. You get capacity at corresponding levels in Standard, Premium, and Enterprise editions, as well as Qlik Sense Enterprise SaaS. There is currently no additional cost for structured data questions or task automation requests; a question is a question.
Use of the MCP server consumes questions when Qlik is accessed using Tool Calls. A Tool Call is a request made by the LLM to interact with Qlik's capabilities, such as, but not limited to, querying databases, calling APIs, or performing computations. These are typically visible in the LLM's log.
For Qlik's MCP server, 5 Tool calls consume 1 question. More questions may be purchased for expanded use cases.
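As a back-of-envelope estimate only (the 5-to-1 ratio is from the text above, but the rounding behavior is our assumption, not documented metering), the question consumption for a session can be sketched as ceiling division:

```shell
# 5 Tool Calls consume 1 question; round up (assumption: partial groups
# of Tool Calls still consume a whole question)
tool_calls=12
questions=$(( (tool_calls + 4) / 5 ))
echo "$questions"   # 12 tool calls -> 3 questions
```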
See Pricing and the Qlik MCP server product description for details.
Qlik’s pricing does not include your chosen LLM subscription or usage, which will need to be paid separately.
Yes. Qlik MCP works on top of existing Qlik Sense applications and uses the same data, logic, and security model.
But to get the best experience, apps should be prepared beforehand:
Your Qlik Cloud subscription determines the quota of questions asked by users. If you are licensed for Qlik Answers, both MCP and Qlik Answers will use your monthly question capacity. See Administering Qlik MCP server.
Question capacity quotas are per month and reset monthly. When you hit your limit, users can no longer ask questions until the next month. Overage may be available, depending on your subscription. For more information, see Qlik MCP server product description.
For more information on overage, see Overage.
Features can be turned off for individual users through user scopes.
A Loop and Reduce task created in the Qlik Sense Application Management Console (AMC) will only display its settings and parameters to the original task creator. Other users are unable to view any details.
For more information about AMC, see: AMC - Application Management Console, an alternative to the QMC for large Enterprise environments.
Example:
User A, who created the task, sees:
User B only sees the following:
Create a new Security Rule in the Qlik Sense Management Console to allow the desired user(s) to see all content. For more information about Security Rules, see Security rules.
Example Security Rule:
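For illustration only (the resource filter and user name below are hypothetical; scope the rule to your own environment and resource types), such a rule might look like:

```
Resource filter: ReloadTask_*
Conditions:      user.name = "User B"
Actions:         Read
```

This grants User B read access to reload task resources, so the task's settings and parameters become visible to that user as well.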
The information in this article and video is provided as is. If you need assistance with Zabbix, please engage with Zabbix directly.
The environment demonstrated in this article consists of one Central Node and two Worker Nodes. Worker 1 is a consumption node where both development and production apps are allowed. Worker 2 is a dedicated Scheduler Worker node to which all reloads are directed. The Central Node acts as the Scheduler Manager.
The Zabbix Monitoring appliance can be downloaded and configured in a number of ways, including direct install on a Linux server, OVF templates and self-hosting via Docker or Kubernetes. In this example we will be using Docker. We assume you have a working docker engine running on a server or your local machine. Docker Desktop is a great way to experiment with these images and evaluate whether Zabbix fits in your organisation.
This will include all necessary files to get started, including docker compose stack definitions supporting different base images, features and databases, such as MySQL or PostgreSQL. In our example, we will invoke one of the existing Docker compose files which will use PostgreSQL as our database engine.
Source: https://www.zabbix.com/documentation/current/en/manual/installation/containers#docker-compose
git clone https://github.com/zabbix/zabbix-docker.git
Here you can modify environment variables as needed, to change things like the Stack / Composition name, default ports and many other settings supported by Zabbix.
cd ./zabbix-docker/env_vars
ls -la #to list all hidden files (.dotfiles)
nano .env_web
In this file, we will change the value for ZBX_SERVER_NAME to something else, like "Qlik STT - Monitoring". Save the changes and we are ready to start up Zabbix Server.
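If you prefer not to edit interactively, the same change can be made with sed, demonstrated here on a scratch copy (in practice, run the sed line against ./zabbix-docker/env_vars/.env_web; this assumes the variable is present and uncommented — otherwise edit the file directly):

```shell
# Scratch copy standing in for ./zabbix-docker/env_vars/.env_web
printf 'ZBX_SERVER_NAME=Composed installation\n' > /tmp/.env_web

# Set a custom server name (example value from the article)
sed -i 's/^ZBX_SERVER_NAME=.*/ZBX_SERVER_NAME="Qlik STT - Monitoring"/' /tmp/.env_web

grep '^ZBX_SERVER_NAME' /tmp/.env_web   # confirm the new value
```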
The ./zabbix-docker folder contains many different Docker compose templates, using either public images or locally built ones (the latest and local tags).
You can run your chosen base image and database version with:
docker compose -f compose-file.yaml up -d && docker compose logs -f --since 1m
Or unlink and re-create the symbolic link to compose.yaml, which enables managing the stack without specifying a compose file. Run the following commands inside the zabbix-docker folder to use the latest Ubuntu-based image with PostgreSQL database:
unlink compose.yaml
ln -s ./docker-compose_v3_ubuntu_pgsql_latest.yaml compose.yaml
docker compose up -d
If you skip the -d flag, the Docker stack will start and your command line will be attached to the log output for all containers. The stack will stop if you exit this mode with CTRL+C or by closing the terminal session. Detached mode runs the stack in the background. You can still connect to the live log output, pull logs from history, manage the stack state, or tear it down using docker compose down.
Pro tip: you will be using docker compose commands often when working with Docker. You can create an alias in most shells to a short-hand, such as "dc = docker compose". This will still accept all following verbs, such as start|stop|restart|up|down|logs and all following flags. docker compose up -d && docker compose logs -f --since 1m would become dc up -d && dc logs -f --since 1m.
Use the IP address of your Docker host: http://IPADDRESS or https://IPADDRESS.
The Zabbix server stack can be hosted behind a Reverse Proxy.
The default username is Admin and the default password is zabbix. They are case sensitive.
Download link: https://www.zabbix.com/download_agents, in this case download the Windows installer MSI.
After the Agent is installed, go to Data Collection > Hosts in Zabbix and click Create host in the top right-hand corner. Provide details such as the hostname and port to connect to the Agent and a display name, and adjust any other parameters. You can group cluster nodes with Host groups, which makes navigating Zabbix easier.
Note: Remember to change how Zabbix Server will connect to the Agent on this node, either with IP address or DNS. Note that the default IP address points to the Zabbix Server.
In the Zabbix Web GUI, navigate to Data Collection > Templates and click on the Import button in the top right-hand corner. You can find the templates file at the following download link:
LINK to zabbix templates
Once you have added all your hosts to the Data Collection section, we can link all Qlik Sense servers in a cluster using the same templates. Zabbix will automatically populate metrics where these performance counters are found. From Data Collection > Hosts, select all your Qlik Sense servers and click on "Mass update". In the dialog that comes up, select the "Link templates" checkbox. Here you can link/replace/unlink templates across many servers in bulk.
Select "Link" and click on the "Select" button. This new panel will let us search for Template groups and make linking a bit easier. The Template Group we provided contains 4 individual templates.
Fig 2: Mass update panel
Fig 3: Search for Template Group
Once you click Select and then Update on the main panel, all selected Hosts receive all items contained in the templates, and all graphs and Dashboards populate automatically.
To review your data, navigate to Monitoring > Hosts and click the "Dashboards" or "Graphs" link for any node. Here is the default view when all Qlik Sense templates are linked to a node:
Fig 5: Repository Service metrics - Example
We will query the Engine healthcheck endpoint on QlikServer3 (our consumer node) and extract usage metrics by parsing the JSON output.
We will use a new Anonymous Access Virtual Proxy set up on each node. Each Virtual Proxy load balances only to the node it represents, ensuring we extract meaningful metrics from that node's Engine rather than being load-balanced across multiple nodes by the Proxy service. Otherwise, there would be no way to determine which node is responding without looking at DevTools in your browser. You can also use Header or Certificate authentication in the HTTP Agent configuration.
Once the Virtual Proxy is configured with Anonymous Only access, we can use this new prefix to configure our HTTP Agent in Zabbix.
In the Zabbix web GUI, go to Data collection > Hosts and click on any of your hosts. In the tabs at the top of the pop-up, click Macros, then click the "Inherited and host macros" button. Once the list has loaded, search for the macro {$VP_PREFIX}. It is set to "anon" by default. Click "Change", set the macro value to your custom Virtual Proxy prefix for Engine diagnostics, and click Update. The Virtual Proxy prefix must be changed on each node for the "Engine Performance via HTTP Agent" item to work. Alternatively, you can modify the macro value on the Template; this replicates the change across all nodes associated with that Template.
Fig 6: Changing Host Macros from Inherited values
To make this change at the Template level, go to Data collection > Templates. Search for the "Engine Performance via HTTP Agent" and click on the Template. Navigate to the Macros tab in the pop-up and add your Virtual Proxy Prefix here to make this the new default for your environment. No further changes to Node configuration are required at this point.
Fig 7: Changing Macros at the Template level
The Zabbix templates provided in this article contain the following Engine metric JSONParsers:
These are the same performance counters that you can see in the Engine Health section in QMC.
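These items parse fields out of the Engine healthcheck JSON. To see what they operate on, you can query the endpoint through the diagnostics virtual proxy and pick out a metric the same way. A minimal sketch — the hostname, prefix, and abridged payload below are examples, not your real values (the actual response contains more sections):

```shell
# In a real environment:
#   curl -sk "https://qlikserver3.domain.local/anon/engine/healthcheck"
# An abridged response looks like this:
cat > /tmp/healthcheck.json <<'EOF'
{"mem": {"committed": 1024.0, "allocated": 2048.0, "free": 4096.0},
 "cpu": {"total": 12},
 "session": {"active": 3, "total": 5},
 "users": {"active": 2, "total": 4}}
EOF

# Extract one metric the same way the Zabbix JSONPath items do:
python3 -c 'import json; hc = json.load(open("/tmp/healthcheck.json")); print(hc["session"]["active"])'
```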
Stay tuned to new releases of the Monitoring Templates. Feel free to customise these to your needs and share with the Community.
Environment
By default, Qlik Talend Data Catalog cannot trace the lineage of Qlik Talend Studio jobs that use dynamic components such as tDBJava. To extract lineage correctly, multiple steps need to be followed. Below is an example where the tJavaRow lineage fails to show because Talend Data Catalog cannot parse the lineage of tJavaRow and instead creates duplicate columns, unintentionally splitting the lineage for each column:
The following versions are required to trace lineage for complex Qlik Talend Studio components:
-vm
C:\Talend\Studio-QTC\zulu17.48....
-vmargs
-Xms512m
-Xmx1536m
-Dfile.encoding=UTF-8
-Dtalend.lineage.enabled=true
-XX:+UseG1GC
-XX:+UseStringDeduplication
-XX:MaxMetaspaceSize=512m
--add-modules=ALL-SYSTEM
Upgrading Qlik Compose across multiple versions requires a specific upgrade path. See Qlik Compose December 2024 Initial Release Notes for details.
After upgrading step-by-step from 2022.5 to 2023.11 and finally to 2024.12, Qlik Compose returns a generic UI error during connection tests.
UI ERROR
Unable to connect to the remote server
Reviewing the Qlik Compose server log reveals the Java process is failing to start entirely:
[INFO ] Java Server: .
[ERROR] Java Server: Error: Could not create the Java Virtual Machine.
[ERROR] Java Server: Error: A fatal exception has occurred. Program will exit.
[INFO ] Java Server: <JAVA_HOME>/lib/ext exists, extensions mechanism no longer supported; Use -classpath instead.
[WARN ] The Compose java server was restarted.
[ERROR] Java Server: Error: Could not create the Java Virtual Machine.
[ERROR] Java Server: Error: A fatal exception has occurred. Program will exit.
While the Qlik Compose server is running, the Java agent is down.
The detailed upgrade path followed was:
During the second hop, the installation recreated the /ext folder within Compose/java/lib/jre/lib with no files in it. Because Compose 2024.12 uses Java 17, the ext extensions mechanism is no longer supported.
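Based on the "<JAVA_HOME>/lib/ext exists, extensions mechanism no longer supported" log line above, a plausible remediation (our suggestion, not an official procedure; back up first and verify the path against your own install root) is to remove the empty ext folder and restart the Compose service:

```shell
# Windows, elevated prompt; the install root here is an assumption --
# adjust it to where Compose is installed in your environment.
rmdir "C:\Program Files\Qlik\Compose\java\lib\jre\lib\ext"
```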
This article documents the basic steps to configure the SAML integration between Qlik Sense Enterprise on Windows (Client-Managed) and Microsoft Entra ID. By connecting these two platforms, administrators can control which users are allowed to access Qlik Sense directly from Entra ID, provide users with seamless single sign-on using their Microsoft accounts, and manage identities from a centralized location.
If you are looking for instructions for Qlik Cloud Analytics, see How To: Configure Qlik Sense Enterprise SaaS to use Azure AD as an IdP.
To get started, you need the following items:
All the following steps are taken in Qlik Sense Enterprise on Windows.
You can test the single sign-on setup either from the Microsoft Entra ID portal by selecting Test, or by navigating directly to the Qlik Sense sign-on URL and starting the login process from there.
The TDQRules qlikrules jar file is missing even after enabling the component in the feature manager.
This is caused by QTDQ-1404 and addressed in the 2026-04 release of Qlik Talend Studio. The issue is triggered by a space or special character in the install folder name.
Workaround:
Move or install Qlik Talend Studio in a different folder than the default \Program Files folder, and make sure that there is no blank space or special character in the new install path.
QTDQ-1404
Note: The concepts 'UPSERT MODE' and 'MERGE MODE' are not documented in the User Guide; they are not terms you can search for there, nor keywords in the Replicate UI.
UPSERT MODE: Change an update to an insert if the row doesn't exist on the target.
MERGE MODE: Change an insert to an update if the row already exists on the target.
Use MERGE MODE: configure the task under Task Settings > Error Handling > Apply Conflicts > 'Duplicate key when applying INSERT': UPDATE the existing target record.
Use UPSERT MODE: configure the task under Task Settings > Error Handling > Apply Conflicts > 'No record found for applying an UPDATE': INSERT the missing target record.
Batch Apply and Transactional Apply modes:
There is a big difference in how these Upsert/Merge settings work depending on whether the task is in Batch or Transactional Apply mode.
Batch Apply mode:
Either option (Upsert/Merge) does an unconditional Delete of all rows in the batch, followed by an Insert of all rows.
Note: With this setting, the update that triggered the conflict is applied in a way that may not be obvious and could cause issues with downstream processing. In Batch Apply mode the task actually issues a pair of statements (first a DELETE of the record, then an INSERT). This pair is unconditional and results in a newly inserted row every time the record is updated on the source.
Transactional Apply mode:
Either option (Upsert/Merge): the original statement is run first and, only if it errors, the switch is made (try/catch).
For an INSERT, the insert statement is attempted and only switched to an UPDATE if it fails.
For an UPDATE, the update statement is attempted and only switched to an INSERT if it fails.
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
What is the default user session timeout for Qlik Sense Business and Qlik Sense Enterprise SaaS? Can the session timeout for Qlik Cloud be changed?
The default (fixed) value is set to 30 minutes. This is controlled by SESSION_TTL.
It is not currently possible to adjust the session timeout in Qlik Cloud.
NPrinting Initial Installation Fails with Error 0x080070643 (or similar)
If any other version of NPrinting (for example, NPrinting 16 or earlier client/server releases) or other unrelated software was previously installed on this machine, or if the target Windows Server has been repurposed from some other function (it could already be damaged, e.g. corrupted registry files), we recommend reinstalling the Windows Server OS to ensure a clean start before installing NPrinting Server.
It is best to have a clean slate when installing Qlik NPrinting Server for the first time, if you suspect that the underlying Windows Server has become corrupted or damaged, or when Error 0x080070643 appears during an upgrade.
On Red Hat Enterprise Linux 9.7, a `OPENSSL_3.4.0' not found message may appear when running system commands. The message references Qlik Replicate's bundled libraries, which can make it look as though the error is a product issue.
For example, when running:
systemctl start areplicate
You may see:
systemctl: /opt/attunity/replicate/lib/libcrypto.so.3: version `OPENSSL_3.4.0' not found (required by /usr/lib64/systemd/libsystemd-shared-252.so)
Although the message points at Qlik Replicate's libraries, this is an environment issue rather than a product defect.
Do not source arep_login.sh or include Qlik Replicate's library path when executing commands that are unrelated to Qlik Replicate. This ensures that system tools use the correct libraries from /lib64 rather than the older bundled versions provided by Qlik Replicate.
Red Hat Enterprise Linux 9.7 ships with OpenSSL version 3.4 or newer, while Qlik Replicate (v2025.11.0.286) bundles its own OpenSSL libraries that only support versions up to 3.0.x. When the Qlik Replicate environment is activated, its library path (LD_LIBRARY_PATH) takes precedence, so system commands such as systemctl may load the older bundled libraries instead of the system’s /lib64 versions. This mismatch leads directly to the OPENSSL_3.4.0 not found error message.
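One way to honor this in practice, assuming bash and the areplicate unit from the example above (a sketch, not an official procedure):

```shell
# Run the system command with Replicate's library path stripped from the
# environment, so the system libraries in /lib64 are used:
env -u LD_LIBRARY_PATH systemctl start areplicate

# And avoid sourcing arep_login.sh in shells used for general system
# administration; keep it scoped to Replicate-specific sessions.
```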