This video will demonstrate how to install and configure Qlik-CLI for SaaS editions of Qlik Sense.
get-command qlik
choco install qlik-cli
if ( -not (Test-Path $PROFILE) ) {
    echo "" > $PROFILE
}
qlik completion ps > "./qlik_completion.ps1" # Create a file containing the PowerShell completion.
. ./qlik_completion.ps1 # Source the completion.
Advanced and additional instructions as seen in the video can be found at Qlik-CLI on Qlik.Dev. Begin with Get Started.
The information in this article and video is provided as is. If you need assistance with Zabbix, please engage with Zabbix directly.
The environment demonstrated in this article consists of one Central Node and two Worker Nodes. Worker 1 is a Consumption node where both Development and Production apps are allowed. Worker 2 is a dedicated Scheduler Worker node where all reloads will be directed. The Central Node acts as the Scheduler Manager.
The Zabbix Monitoring appliance can be downloaded and configured in a number of ways, including direct install on a Linux server, OVF templates and self-hosting via Docker or Kubernetes. In this example we will be using Docker. We assume you have a working docker engine running on a server or your local machine. Docker Desktop is a great way to experiment with these images and evaluate whether Zabbix fits in your organisation.
This will include all necessary files to get started, including docker compose stack definitions supporting different base images, features and databases, such as MySQL or PostgreSQL. In our example, we will invoke one of the existing Docker compose files which will use PostgreSQL as our database engine.
Source: https://www.zabbix.com/documentation/current/en/manual/installation/containers#docker-compose
git clone https://github.com/zabbix/zabbix-docker.git
Here you can modify environment variables as needed, to change things like the Stack / Composition name, default ports and many other settings supported by Zabbix.
cd ./zabbix-docker/env_vars
ls -la # list all files, including hidden dotfiles
nano .env_web
In this file, we will change the value for ZBX_SERVER_NAME to something else, like "Qlik STT - Monitoring". Save the changes and we are ready to start up Zabbix Server.
The ./zabbix-docker folder contains many different Docker compose templates, using either public images or locally built ones (latest and local tags).
You can run your chosen base image and database version with:
docker compose -f compose-file.yaml up -d && docker compose logs -f --since 1m
Or unlink and re-create the symbolic link to compose.yaml, which enables managing the stack without specifying a compose file. Run the following commands inside the zabbix-docker folder to use the latest Ubuntu-based image with PostgreSQL database:
unlink compose.yaml
ln -s ./docker-compose_v3_ubuntu_pgsql_latest.yaml compose.yaml
docker compose up -d

If you skip the -d flag, the Docker stack will start and your command line will be attached to the log output of all containers. The stack will stop if you exit this mode with CTRL+C or by closing the terminal session. Detached mode runs the stack in the background. You can still connect to the live log output, pull logs from history, manage the stack state, or tear it down using docker compose down.
Pro tip: you will be using docker compose commands often when working with Docker. You can create an alias in most shells to a short-hand, such as "dc = docker compose". This will still accept all following verbs, such as start|stop|restart|up|down|logs and all following flags. docker compose up -d && docker compose logs -f --since 1m would become dc up -d && dc logs -f --since 1m.
Use the IP address of your Docker host: http://IPADDRESS or https://IPADDRESS.
The Zabbix server stack can be hosted behind a Reverse Proxy.
The default username is Admin and the default password is zabbix. They are case sensitive.
Download link: https://www.zabbix.com/download_agents. In this case, download the Windows MSI installer.
After the Agent is installed, go to Data Collection > Hosts in Zabbix and click Create host in the top right-hand corner. Provide details such as the hostname and port to connect to the Agent, a display name, and adjust any other parameters. You can group related servers with Host groups, which makes navigating Zabbix easier.
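The same "Create host" step can also be performed via the Zabbix JSON-RPC API using the host.create method. The sketch below builds such a request; the Host group id, auth token, and host values are illustrative assumptions, and the resulting JSON would be POSTed to http://IPADDRESS/api_jsonrpc.php.

```python
import json

# Sketch: build a Zabbix host.create JSON-RPC request.
# Group id, token, and host details are assumed example values.
def host_create_request(auth_token: str, host: str, ip: str) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "host.create",
        "params": {
            "host": host,
            "interfaces": [{
                "type": 1,        # 1 = Zabbix agent interface
                "main": 1,
                "useip": 1,       # connect by IP rather than DNS
                "ip": ip,
                "dns": "",
                "port": "10050",  # default Zabbix agent port
            }],
            "groups": [{"groupid": "2"}],  # assumed Host group id
        },
        "auth": auth_token,
        "id": 1,
    })

print(host_create_request("YOUR_API_TOKEN", "QlikServer1", "192.168.0.10"))
```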
Note: Remember to change how Zabbix Server will connect to the Agent on this node, either with IP address or DNS. Note that the default IP address points to the Zabbix Server.
In the Zabbix Web GUI, navigate to Data Collection > Templates and click on the Import button in the top right-hand corner. You can find the templates file at the following download link:
LINK to zabbix templates
Once you have added all your hosts to the Data Collection section, you can link all Qlik Sense servers in a cluster using the same templates. Zabbix will automatically populate metrics where these performance counters are found. From Data Collection > Hosts, select all your Qlik Sense servers and click on "Mass update". In the dialog that comes up, select the "Link templates" checkbox. Here you can link/replace/unlink templates across many servers in bulk.
Select "Link" and click on the "Select" button. This new panel will let us search for Template groups and make linking a bit easier. The Template Group we provided contains 4 individual templates.
Fig 2: Mass update panel
Fig 3: Search for Template Group
Once you click Select and then Update on the main panel, all selected Hosts will receive all items contained in the templates and populate all graphs and Dashboards automatically.
To review your data, navigate to Monitoring > Hosts and click on the "Dashboards" or "Graphs" link for any node. Here is the default view when all Qlik Sense templates are linked to a node:
Fig 5: Repository Service metrics - Example
We will query the Engine Healthcheck endpoint on QlikServer3 (our consumption node) and extract usage metrics by parsing the JSON output.
We will be using a new Anonymous Access Virtual Proxy set up on each node. Each Virtual Proxy load-balances only to the node it represents, which ensures we extract meaningful metrics from that node's Engine and are not load-balanced by the Proxy service across multiple nodes. Otherwise, there would be no way to determine which node is responding without looking at DevTools in your browser. You can also use Header or Certificate authentication in the HTTP Agent configuration.
Once the Virtual Proxy is configured with Anonymous Only access, we can use this new prefix to configure our HTTP Agent in Zabbix.
In the Zabbix web GUI, go to Data collection > Hosts. Click on any of your hosts. In the tabs at the top of the pop-up, click on Macros, then click the "Inherited and host macros" button. Once the list has loaded, search for the following Macro: {$VP_PREFIX}. This is set by default to "anon". Click on "Change", set the Macro value to your custom Virtual Proxy prefix for Engine diagnostics, and click Update. The Virtual Proxy prefix will have to be changed on each node for the "Engine Performance via HTTP Agent" item to work. Alternatively, you can modify the Macro value on the Template, which replicates the change across all nodes associated with that Template.
Fig 6: Changing Host Macros from Inherited values
To make this change at the Template level, go to Data collection > Templates. Search for the "Engine Performance via HTTP Agent" and click on the Template. Navigate to the Macros tab in the pop-up and add your Virtual Proxy Prefix here to make this the new default for your environment. No further changes to Node configuration are required at this point.
Fig 7: Changing Macros at the Template level
The Zabbix templates provided in this article contain the following JSON parsers for Engine metrics:
These are the same performance counters that you can see in the Engine Health section in QMC.
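As a hedged illustration of what those JSON parsers do, the sketch below extracts a few counters from a hypothetical healthcheck payload. The field names are assumptions based on a typical Engine healthcheck response and may differ in your version.

```python
import json

# Hypothetical sample of an Engine healthcheck response (field names assumed).
sample = """
{
  "version": "12.1721.5",
  "mem": {"committed": 1024.5, "allocated": 2048.0, "free": 512.25},
  "cpu": {"total": 12},
  "session": {"active": 3, "total": 5},
  "users": {"active": 2, "total": 4},
  "saturated": false
}
"""

def extract_metrics(payload: str) -> dict:
    """Flatten a few healthcheck counters, similar to what a
    JSONPath preprocessing step in Zabbix would extract."""
    doc = json.loads(payload)
    return {
        "mem.committed": doc["mem"]["committed"],   # JSONPath: $.mem.committed
        "cpu.total": doc["cpu"]["total"],           # JSONPath: $.cpu.total
        "session.active": doc["session"]["active"],
        "users.active": doc["users"]["active"],
        "saturated": doc["saturated"],
    }

print(extract_metrics(sample))
```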
Stay tuned for new releases of the Monitoring Templates. Feel free to customise them to your needs and share with the Community.
Environment
This article explains how to implement SAML for NPrinting with Azure as the IdP.
To implement Azure SAML in NPrinting, the following needs to be done:
Generate a Metadata XML file
Federation Metadata XML: Download
<?xml version="1.0"?>
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" entityID="https://sts.windows.net/b26e23cf-787a-40e8-9d17-f0c9f9ad0821/">
<IDPSSODescriptor xmlns:ds="http://www.w3.org/2000/09/xmldsig#" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
<KeyDescriptor use="signing">
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>MIIC8DCCAdigAwIBAgIQFUUu6ZQHg5FJ...Ud8tf9A/4A6+2SZm34gf8gcVPTXT/a</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</KeyDescriptor>
<SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://login.microsoftonline.com/b26e23cf-787a-40e8-9d17-f0c9f9ad0821/saml2"/>
<SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://login.microsoftonline.com/b26e23cf-787a-40e8-9d17-f0c9f9ad0821/saml2"/>
</IDPSSODescriptor>
</EntityDescriptor>
Potential Troubleshooting Steps
This article documents the basic steps to configure the SAML integration between Qlik Sense Enterprise on Windows (Client-Managed) and Microsoft Entra ID. By connecting these two platforms, administrators can control which users are allowed to access Qlik Sense directly from Entra ID, provide users with seamless single sign-on using their Microsoft accounts, and manage identities from a centralized location.
If you are looking for instructions for Qlik Cloud Analytics, see How To: Configure Qlik Sense Enterprise SaaS to use Azure AD as an IdP.
To get started, you need the following items:
All the following steps are taken in Qlik Sense Enterprise on Windows.
You can test the single sign-on setup either from the Microsoft Entra ID portal by selecting Test, or by navigating directly to the Qlik Sense sign-on URL and starting the login process from there.
Snowflake supports using key pair authentication for enhanced authentication security as an alternative to basic authentication (i.e. username and password). This article covers end-to-end setup for Key Pair Authentication in Snowflake and Qlik Replicate.
This authentication method requires, at minimum, a 2048-bit RSA key pair. You can generate the Privacy Enhanced Mail (i.e. PEM) private-public key pair using OpenSSL.
Qlik Replicate uses the ODBC driver to connect to Snowflake, and ODBC is one of the Snowflake clients that supports key pair authentication.
Let's assume you decide to use key pair authentication for the Snowflake user that Qlik Replicate uses to connect to Snowflake. Follow the process below to convert the user's authentication from basic to key pair.
When creating a Key Pair for Qlik Stitch Snowflake 'Destination' connections, you must set up a nocrypt private key before creating the public key.
You can generate either an encrypted version of the private key or an unencrypted version of the private key.
To generate an unencrypted version use the following command in the command prompt:
$ openssl genrsa 2048|openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8 -nocrypt
To generate an encrypted version (which omits -nocrypt) use:
$ openssl genrsa 2048|openssl pkcs8 -topk8 -v2 des3 -inform PEM -out rsa_key.p8
In our example, we generate an encrypted version of the private key.
This generates a private key in PEM format:
From the command line, we generate the public key by referencing the private key. The following command assumes the private key is encrypted and contained in the file named rsa_key.p8.
When it requests a passphrase, use the same passphrase that we entered in step 1.
openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
This command generates the public key in PEM format:
Copy the public and private key files to a local directory for storage and record the path to the files. Note that the private key is stored using the PKCS#8 (Public Key Cryptography Standards) format and is encrypted using the passphrase you specified in the previous step.
However, the file should still be protected from unauthorized access using the file permission mechanism provided by your operating system. It is your responsibility to secure the file when it is not being used.
Describe the user to see the current information. We can see that there is no public key assigned to the HDW user, so the user currently uses basic authentication.
Execute an ALTER USER command to assign the public key to a Snowflake user.
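Snowflake expects the key value in that statement without the PEM delimiter lines. A minimal sketch of building the ALTER USER command (the HDW user follows this article's example; the key content below is illustrative, not a real key):

```python
# Sketch: strip the PEM header/footer from rsa_key.pub and build the
# ALTER USER statement. The key material here is a placeholder.
def pem_body(pem: str) -> str:
    lines = [line.strip() for line in pem.strip().splitlines()]
    return "".join(line for line in lines if not line.startswith("-----"))

def alter_user_sql(user: str, pem: str) -> str:
    return f"ALTER USER {user} SET RSA_PUBLIC_KEY='{pem_body(pem)}';"

example_pub = """-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAexamplebase64payload
-----END PUBLIC KEY-----"""

print(alter_user_sql("HDW", example_pub))
```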
Execute a DESCRIBE USER command to verify the user’s public key.
Qlik Replicate
Snowflake Target
After distributing the Consumption Report app from Qlik Cloud Administration > Settings, scheduled reloads of the app fail with the following error:
Error: $(MUST_INCLUDE= [lib://snowflake_external_share:DataFiles/Capacity_Usage_Script_PROD.txt] cannot access the local file system in current script mode. Try including with LIB path.
The Consumption Report app isn't meant to be reloaded. The app should be distributed from Qlik Cloud Administration > Settings each day. Refer to Distributing detailed consumption reports for details:
Redistribute the app to obtain the most recent data. Apps stored on your tenant exist as separate instances and are not replaced by newer ones.
On the Talend side, refer to Distributing Data Capacity Reporting App for Talend Management Console for details on how to set up capacity reporting.
To automatically redistribute the app, see Automate deployment of the Capacity consumption app with Qlik Automate.
The Report Consumption app is meant to be distributed from Qlik Cloud Administration > Settings and not updated by a scheduled reload of the app.
This article documents how to configure a Qlik tenant to send emails using MS365.
The information in this article is provided as-is and will be used at your discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
An account with an active Office365 license is required for this setup.
First, we configure the MS365 tenant to support the configuration.
Once you have an account set up on the MS365 side, let's go to the Microsoft Tenant settings:
Setting Application permissions to Mail.Send grants the application permission to send from any email address in your organization.
If mail.send is blocked by Active Directory group policy, mail delivery will fail. Please consult your AD administrator(s) if mail delivery fails when the above steps are followed.
The event payloads emitted by the Qlik Cloud webhooks service are changing. Qlik is replacing a legacy event format with a new cloud event format.
Any legacy events (that is, anything not already cloud event compliant) will be updated to a temporary hybrid event containing both legacy and cloud event payloads. This will start on or after November 3, 2025.
Please consider updating your integrations to use the new fields once added.
A formal deprecation with at least a 6-month notice will be provided via the Qlik Developer changelog. After that period, hybrid events will be replaced entirely by cloud events.
Webhook automations in Qlik Automate will not be impacted at this time.
The webhooks service in Qlik Cloud enables you to subscribe to notifications when your Qlik Cloud tenant generates specific events.
At the time of writing, the following legacy events are available:
| Service | Event name | Event type | When is event generated |
| API keys | API key validation failed | com.qlik.v1.api-key.validation.failed | The tenant tries to use an API key which is expired or revoked |
| Apps (Analytics apps) | App created | com.qlik.v1.app.created | A new analytics app is created |
| Apps (Analytics apps) | App deleted | com.qlik.v1.app.deleted | An analytics app is deleted |
| Apps (Analytics apps) | App exported | com.qlik.v1.app.exported | An analytics app is exported |
| Apps (Analytics apps) | App reload finished | com.qlik.v1.app.reload.finished | An analytics app has finished refreshing on an analytics engine (note: it may not be saved yet) |
| Apps (Analytics apps) | App published | com.qlik.v1.app.published | An analytics app is published from a personal or shared space to a managed space |
| Apps (Analytics apps) | App data updated | com.qlik.v1.app.data.updated | An analytics app is saved to persistent storage |
| Automations (Automate) | Automation created | com.qlik.v1.automation.created | A new automation is created |
| Automations (Automate) | Automation deleted | com.qlik.v1.automation.deleted | An automation is deleted |
| Automations (Automate) | Automation updated | com.qlik.v1.automation.updated | An automation has been updated and saved to persistent storage |
| Automations (Automate) | Automation run started | com.qlik.v1.automation.run.started | An automation run began execution |
| Automations (Automate) | Automation run failed | com.qlik.v1.automation.run.failed | An automation run failed |
| Automations (Automate) | Automation run ended | com.qlik.v1.automation.run.ended | An automation run finished successfully |
| Reloads (Analytics reloads) | Reload finished | com.qlik.v1.reload.finished | An analytics app has been refreshed and saved |
| Users | User created | com.qlik.v1.user.created | A new user is created |
| Users | User deleted | com.qlik.v1.user.deleted | A user is deleted |
Any events not listed above will remain as-is, as they already adhere to the cloud event format.
Each event will change to a new structure. The details included in the payloads will remain the same, but some attributes will be available in a different location.
The changes being made:
- cloudEventsVersion is replaced by specversion. For most events this will be from cloudEventsVersion: 0.1 to specversion: 1.0+.
- contentType is replaced by datacontenttype to describe the media type of the data object.
- eventId is replaced by id.
- eventTime is replaced by time.
- eventTypeVersion is not present in the future schema.
- eventType is replaced by type.
- extensions.actor is replaced by authtype and authclaims.
- extensions.updates is replaced by data._updates.
- extensions.meta, and any other direct objects on extensions, are replaced by equivalents in data where relevant.
- Any remaining fields on the extensions object will be moved to the root and renamed to be lowercase if needed, such as tenantId, userId, spaceId, etc.
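As an illustration of this mapping, the sketch below converts a legacy payload to the new attribute names. It is not Qlik's actual conversion code, only a restatement of the rename rules:

```python
# Sketch: apply the documented legacy -> cloud event field renames.
def to_cloud_event(legacy: dict) -> dict:
    event = {
        "specversion": "1.0",                          # was cloudEventsVersion
        "datacontenttype": legacy.get("contentType"),  # was contentType
        "id": legacy.get("eventId"),                   # was eventId
        "time": legacy.get("eventTime"),               # was eventTime
        "type": legacy.get("eventType"),               # was eventType
        "source": legacy.get("source"),
        "data": legacy.get("data", {}),
    }
    # extensions.* fields move to the root, lowercased
    for key, value in legacy.get("extensions", {}).items():
        event[key.lower()] = value
    return event

legacy = {
    "cloudEventsVersion": "0.1",
    "source": "com.qlik/automations",
    "contentType": "application/json",
    "eventId": "f4c26f04-18a4-4032-974b-6c7c39a59816",
    "eventTime": "2025-09-01T09:53:17.920Z",
    "eventType": "com.qlik.v1.automation.created",
    "extensions": {"tenantId": "BL4tTJ4S7xrHTcq0zQxQrJ5qB1_Q6cSo",
                   "userId": "637390ef6541614d3a88d6c3"},
    "data": {"name": "hello world"},
}
cloud = to_cloud_event(legacy)
print(cloud["type"], cloud["tenantid"])
```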
This is the current legacy payload of the automation created event:
{
"cloudEventsVersion": "0.1",
"source": "com.qlik/automations",
"contentType": "application/json",
"eventId": "f4c26f04-18a4-4032-974b-6c7c39a59816",
"eventTime": "2025-09-01T09:53:17.920Z",
"eventTypeVersion": "1.0.0",
"eventType": "com.qlik.v1.automation.created",
"extensions": {
"ownerId": "637390ef6541614d3a88d6c3",
"spaceId": "685a770f2c31b9e482814a4f",
"tenantId": "BL4tTJ4S7xrHTcq0zQxQrJ5qB1_Q6cSo",
"userId": "637390ef6541614d3a88d6c3"
},
"data": {
"connectorIds": {},
"containsBillable": null,
"createdAt": "2025-09-01T09:53:17.000000Z",
"description": null,
"endpointIds": {},
"id": "cae31848-2825-4841-bc88-931be2e3d01a",
"lastRunAt": null,
"lastRunStatus": null,
"name": "hello world",
"ownerId": "637390ef6541614d3a88d6c3",
"runMode": "manual",
"schedules": {},
"snippetIds": {},
"spaceId": "685a770f2c31b9e482814a4f",
"state": "available",
"tenantId": "BL4tTJ4S7xrHTcq0zQxQrJ5qB1_Q6cSo",
"updatedAt": "2025-09-01T09:53:17.000000Z"
}
}
This will be the temporary hybrid event for automation created:
{
// cloud event fields
"id": "f4c26f04-18a4-4032-974b-6c7c39a59816",
"time": "2025-09-01T09:53:17.920Z",
"type": "com.qlik.v1.automation.created",
"userid": "637390ef6541614d3a88d6c3",
"ownerid": "637390ef6541614d3a88d6c3",
"tenantid": "BL4tTJ4S7xrHTcq0zQxQrJ5qB1_Q6cSo",
"description": "hello world",
"datacontenttype": "application/json",
"specversion": "1.0.2",
// legacy event fields
"eventId": "f4c26f04-18a4-4032-974b-6c7c39a59816",
"eventTime": "2025-09-01T09:53:17.920Z",
"eventType": "com.qlik.v1.automation.created",
"extensions": {
"userId": "637390ef6541614d3a88d6c3",
"spaceId": "685a770f2c31b9e482814a4f",
"ownerId": "637390ef6541614d3a88d6c3",
"tenantId": "BL4tTJ4S7xrHTcq0zQxQrJ5qB1_Q6cSo",
},
"contentType": "application/json",
"eventTypeVersion": "1.0.0",
"cloudEventsVersion": "0.1",
// unchanged event fields
"data": {
"connectorIds": {},
"containsBillable": null,
"createdAt": "2025-09-01T09:53:17.000000Z",
"description": null,
"endpointIds": {},
"id": "cae31848-2825-4841-bc88-931be2e3d01a",
"lastRunAt": null,
"lastRunStatus": null,
"name": "hello world",
"ownerId": "637390ef6541614d3a88d6c3",
"runMode": "manual",
"schedules": {},
"snippetIds": {},
"spaceId": "685a770f2c31b9e482814a4f",
"state": "available",
"tenantId": "BL4tTJ4S7xrHTcq0zQxQrJ5qB1_Q6cSo",
"updatedAt": "2025-09-01T09:53:17.000000Z"
},
"source": "com.qlik/automations"
}
Ever wanted to brand or customize the default Qlik Sense Login page?
The functionality exists, and it's as simple as designing your HTML page and 'POSTing' it into your environment.
We've all seen the standard Qlik Sense login page; this article is all about customizing it.
This customization is provided as is. Qlik Support cannot provide continued support of the solution. For assistance, reach out to our Professional Services or engage in our active Integrations forum.
To customize the page:
{
"id": "8817d7ab-e9b2-4816-8332-f8cb869b27c2",
"createdDate": "2020-03-23T15:39:33.540Z",
"modifiedDate": "2020-05-20T18:46:13.995Z",
"modifiedByUserName": "INTERNAL\\sa_api",
"customProperties": [],
"settings": {
"id": "8817d7ab-e9b2-4816-8332-f8cb869b27c2",
"createdDate": "2020-03-23T15:39:33.540Z",
"modifiedDate": "2020-05-20T18:46:13.995Z",
"modifiedByUserName": "INTERNAL\\sa_api",
"listenPort": 443,
"allowHttp": true,
"unencryptedListenPort": 80,
"authenticationListenPort": 4244,
"kerberosAuthentication": false,
"unencryptedAuthenticationListenPort": 4248,
"sslBrowserCertificateThumbprint": "e6ee6df78f9afb22db8252cbeb8ad1646fa14142",
"keepAliveTimeoutSeconds": 10,
"maxHeaderSizeBytes": 16384,
"maxHeaderLines": 100,
"logVerbosity": {
"id": "8817d7ab-e9b2-4816-8332-f8cb869b27c2",
"createdDate": "2020-03-23T15:39:33.540Z",
"modifiedDate": "2020-05-20T18:46:13.995Z",
"modifiedByUserName": "INTERNAL\\sa_api",
"logVerbosityAuditActivity": 4,
"logVerbosityAuditSecurity": 4,
"logVerbosityService": 4,
"logVerbosityAudit": 4,
"logVerbosityPerformance": 4,
"logVerbositySecurity": 4,
"logVerbositySystem": 4,
"schemaPath": "ProxyService.Settings.LogVerbosity"
},
"useWsTrace": false,
"performanceLoggingInterval": 5,
"restListenPort": 4243,
"virtualProxies": [
{
"id": "58d03102-656f-4075-a436-056d81144c1f",
"prefix": "",
"description": "Central Proxy (Default)",
"authenticationModuleRedirectUri": "",
"sessionModuleBaseUri": "",
"loadBalancingModuleBaseUri": "",
"useStickyLoadBalancing": false,
"loadBalancingServerNodes": [
{
"id": "f1d26a45-b0dd-4be1-91d0-34c698e18047",
"name": "Central",
"hostName": "qlikdemo",
"temporaryfilepath": "C:\\Users\\qservice\\AppData\\Local\\Temp\\",
"roles": [
{
"id": "2a6a0d52-9bb4-4e74-b2b2-b597fa4e4470",
"definition": 0,
"privileges": null
},
{
"id": "d2c56b7b-43fd-44ad-a12f-59e778ce575a",
"definition": 1,
"privileges": null
},
{
"id": "37244424-96ae-4fe5-9522-088a0e9679e3",
"definition": 2,
"privileges": null
},
{
"id": "b770516e-fe8a-43a8-a7a4-318984ee4bd6",
"definition": 3,
"privileges": null
},
{
"id": "998b7df8-195f-4382-af18-4e0c023e7f1c",
"definition": 4,
"privileges": null
},
{
"id": "2a5325f4-649b-4147-b0b1-f568be1988aa",
"definition": 5,
"privileges": null
}
],
"serviceCluster": {
"id": "b07fc5f2-f09e-4676-9de6-7d73f637b962",
"name": "ServiceCluster",
"privileges": null
},
"privileges": null
}
],
"authenticationMethod": 0,
"headerAuthenticationMode": 0,
"headerAuthenticationHeaderName": "",
"headerAuthenticationStaticUserDirectory": "",
"headerAuthenticationDynamicUserDirectory": "",
"anonymousAccessMode": 0,
"windowsAuthenticationEnabledDevicePattern": "Windows",
"sessionCookieHeaderName": "X-Qlik-Session",
"sessionCookieDomain": "",
"additionalResponseHeaders": "",
"sessionInactivityTimeout": 30,
"extendedSecurityEnvironment": false,
"websocketCrossOriginWhiteList": [
"qlikdemo",
"qlikdemo.local",
"qlikdemo.paris.lan"
],
"defaultVirtualProxy": true,
"tags": [],
"samlMetadataIdP": "",
"samlHostUri": "",
"samlEntityId": "",
"samlAttributeUserId": "",
"samlAttributeUserDirectory": "",
"samlAttributeSigningAlgorithm": 0,
"samlAttributeMap": [],
"jwtAttributeUserId": "",
"jwtAttributeUserDirectory": "",
"jwtAudience": "",
"jwtPublicKeyCertificate": "",
"jwtAttributeMap": [],
"magicLinkHostUri": "",
"magicLinkFriendlyName": "",
"samlSlo": false,
"privileges": null
},
{
"id": "a8b561ec-f4dc-48a1-8bf1-94772d9aa6cc",
"prefix": "header",
"description": "header",
"authenticationModuleRedirectUri": "",
"sessionModuleBaseUri": "",
"loadBalancingModuleBaseUri": "",
"useStickyLoadBalancing": false,
"loadBalancingServerNodes": [
{
"id": "f1d26a45-b0dd-4be1-91d0-34c698e18047",
"name": "Central",
"hostName": "qlikdemo",
"temporaryfilepath": "C:\\Users\\qservice\\AppData\\Local\\Temp\\",
"roles": [
{
"id": "2a6a0d52-9bb4-4e74-b2b2-b597fa4e4470",
"definition": 0,
"privileges": null
},
{
"id": "d2c56b7b-43fd-44ad-a12f-59e778ce575a",
"definition": 1,
"privileges": null
},
{
"id": "37244424-96ae-4fe5-9522-088a0e9679e3",
"definition": 2,
"privileges": null
},
{
"id": "b770516e-fe8a-43a8-a7a4-318984ee4bd6",
"definition": 3,
"privileges": null
},
{
"id": "998b7df8-195f-4382-af18-4e0c023e7f1c",
"definition": 4,
"privileges": null
},
{
"id": "2a5325f4-649b-4147-b0b1-f568be1988aa",
"definition": 5,
"privileges": null
}
],
"serviceCluster": {
"id": "b07fc5f2-f09e-4676-9de6-7d73f637b962",
"name": "ServiceCluster",
"privileges": null
},
"privileges": null
}
],
"authenticationMethod": 1,
"headerAuthenticationMode": 1,
"headerAuthenticationHeaderName": "userid",
"headerAuthenticationStaticUserDirectory": "QLIKDEMO",
"headerAuthenticationDynamicUserDirectory": "",
"anonymousAccessMode": 0,
"windowsAuthenticationEnabledDevicePattern": "Windows",
"sessionCookieHeaderName": "X-Qlik-Session-Header",
"sessionCookieDomain": "",
"additionalResponseHeaders": "",
"sessionInactivityTimeout": 30,
"extendedSecurityEnvironment": false,
"websocketCrossOriginWhiteList": [
"qlikdemo",
"qlikdemo.local"
],
"defaultVirtualProxy": false,
"tags": [],
"samlMetadataIdP": "",
"samlHostUri": "",
"samlEntityId": "",
"samlAttributeUserId": "",
"samlAttributeUserDirectory": "",
"samlAttributeSigningAlgorithm": 0,
"samlAttributeMap": [],
"jwtAttributeUserId": "",
"jwtAttributeUserDirectory": "",
"jwtAudience": "",
"jwtPublicKeyCertificate": "",
"jwtAttributeMap": [],
"magicLinkHostUri": "",
"magicLinkFriendlyName": "",
"samlSlo": false,
"privileges": null
}
],
"formAuthenticationPageTemplate": "",
"loggedOutPageTemplate": "",
"errorPageTemplate": "",
"schemaPath": "ProxyService.Settings"
},
"serverNodeConfiguration": {
"id": "f1d26a45-b0dd-4be1-91d0-34c698e18047",
"name": "Central",
"hostName": "qlikdemo",
"temporaryfilepath": "C:\\Users\\qservice\\AppData\\Local\\Temp\\",
"roles": [
{
"id": "2a6a0d52-9bb4-4e74-b2b2-b597fa4e4470",
"definition": 0,
"privileges": null
},
{
"id": "d2c56b7b-43fd-44ad-a12f-59e778ce575a",
"definition": 1,
"privileges": null
},
{
"id": "37244424-96ae-4fe5-9522-088a0e9679e3",
"definition": 2,
"privileges": null
},
{
"id": "b770516e-fe8a-43a8-a7a4-318984ee4bd6",
"definition": 3,
"privileges": null
},
{
"id": "998b7df8-195f-4382-af18-4e0c023e7f1c",
"definition": 4,
"privileges": null
},
{
"id": "2a5325f4-649b-4147-b0b1-f568be1988aa",
"definition": 5,
"privileges": null
}
],
"serviceCluster": {
"id": "b07fc5f2-f09e-4676-9de6-7d73f637b962",
"name": "ServiceCluster",
"privileges": null
},
"privileges": null
},
"tags": [],
"privileges": null,
"schemaPath": "ProxyService"
}
If your login page does not work and you need to revert back to the default, simply do a GET call on your proxy service, and set formAuthenticationPageTemplate back to an empty string:
formAuthenticationPageTemplate": ""
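Assuming you fetch the proxy service definition with a GET (for example from /qrs/proxyservice/full) and PUT the modified object back to /qrs/proxyservice/{id}, the revert step can be sketched as below. Transport and authentication (Qlik certificates, or an xrfkey header and parameter) are omitted here and depend on your setup:

```python
# Sketch: clear the custom login page in a proxy service object fetched
# from the QRS API, before PUTting the full object back.
def clear_login_template(proxy_config: dict) -> dict:
    proxy_config["settings"]["formAuthenticationPageTemplate"] = ""
    return proxy_config

# Abbreviated stand-in for the JSON shown above.
config = {
    "id": "8817d7ab-e9b2-4816-8332-f8cb869b27c2",
    "settings": {"formAuthenticationPageTemplate": "<html>custom page</html>"},
}
clear_login_template(config)
print(config["settings"]["formAuthenticationPageTemplate"])  # now empty
```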
The scenario: A Qlik Sense Enterprise on Windows environment is set up to use Azure SAML (AD FS) for authentication.
On the Azure side, the Token Signing Certificate embedded as the X509Certificate in the SAML IdP metadata within the Qlik virtual proxy configuration expired several weeks ago. A new certificate has not yet been issued.
It is still possible to log in to Qlik Sense Enterprise on Windows.
This behavior may raise the following questions:
In SAML, the IdP (Azure AD/AD FS) signs the assertion with its private key, and Qlik Sense validates it using the public key embedded in the IdP metadata (the X509Certificate).
The expiry date in the certificate is not actively checked by Qlik Sense during assertion validation.
Qlik only verifies that the signature matches the public key it has stored. So even if the certificate is expired, as long as the key pair hasn’t changed and the signature is valid, authentication succeeds. This is a common behavior and not specific to Qlik Sense.
Is this a security concern?
It is not considered a security issue. The expiry matters for trust and compliance, not for the cryptographic check Qlik performs.
When will the expiry become relevant?
Do we need to update the certificate?
Even though Qlik doesn’t break immediately, we recommend updating the IdP metadata in Qlik Sense as soon as Azure issues a new signing certificate. This ensures future-proofing and compliance.
In Qlik Sense Enterprise on Windows, calling the QRS API '/qrs/App/table' with a body returns more rows than expected.
Example:
This behavior has been identified as defect SUPPORT-6127.
It is caused by the default value of the parameter HideCustomPropertyDefinition set to true in the Repository.exe.config file. Changing the parameter from true to false resolves it.
To change the value:
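The change itself is a single setting entry. A sketch of the edited fragment, assuming the parameter lives under appSettings in Repository.exe.config (back up the file first, and restart the Qlik Sense Repository Service afterwards):

```xml
<appSettings>
  <!-- Assumption: setting HideCustomPropertyDefinition to false resolves SUPPORT-6127 -->
  <add key="HideCustomPropertyDefinition" value="false" />
</appSettings>
```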
Issue related to the default configuration setting of the parameter HideCustomPropertyDefinition in the Repository.exe.config file.
SUPPORT-6127
Beginning with Qlik Sense Enterprise on Windows 2024, Qlik has extended CSRF protection to WebSockets. For reference, see the Release Notes.
In the case of mashups, extensions, or other cross-site domain setups, the following two steps are necessary:
The additional response headers are:
Access-Control-Allow-Credentials: true
Access-Control-Expose-Headers: qlik-csrf-token
Localhost and port 8080 are examples. Replace them with the appropriate hostname. Defining the port is optional.
If you have multiple origins, add each to the Host allow list.
Example:
For more information about adding response headers to the Qlik Sense Virtual proxy, see Creating a virtual proxy. Expand the Advanced section to access Additional response headers.
In certain scenarios, the additional headers on the virtual proxy will not be enough and a code change is required. In these cases, you need to request the CSRF token and then send it forward when opening the session on the WebSocket. See Workflow for a visualisation of the process.
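One way to sketch the second half of that flow: read the token from the qlik-csrf-token response header mentioned above, then append it to the WebSocket URL before opening the session. The query parameter name used here is an assumption; refer to the Enigma.js example and the Workflow link for the exact mechanism.

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

# Sketch: append a CSRF token to a Qlik Engine WebSocket URL.
# The "qlik-csrf-token" query parameter name is an assumption.
def ws_url_with_csrf(ws_url: str, token: str) -> str:
    parts = urlparse(ws_url)
    query = dict(parse_qsl(parts.query))
    query["qlik-csrf-token"] = token  # token read from the response header
    return urlunparse(parts._replace(query=urlencode(query)))

print(ws_url_with_csrf("wss://qlikserver/app/engineData", "abc123"))
```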
An example written in Enigma.js is available here:
The information and example in this article are provided as-is and are not directly supported by Qlik Support. More assistance can be found on the Qlik Integration forum. Professional Services are available to help where needed.
Workflow
To verify if the header information is correctly passed on, capture the web traffic in your browser's debug tool.
Environment
The client secret for a Single Sign-On solution has expired.
After successfully logging in with your unique tenant URL recovery address
https://type_your_tenant_here.eu.qlikcloud.com/login/recover
when using the current tenant Service Account Owner (SAO) account, logging in to the Qlik Cloud Administration Console fails with:
User allocation required
You do not have a valid user allocation. Please contact an administrator for more information
Once you have successfully logged in to the tenant via the recovery link using the SAO (Service Account Owner) credentials, navigate to the Qlik Cloud Administration Console. This will allow you to update the client secret for your IdP settings.
NOTE: If you are logging in with credentials that are not your own (for example, a user who has left your organization) and the above steps fail with a responseCode: 401 error, it may be necessary to ask Qlik Customer Support to change the Service Account Owner (SAO) to a currently active user in your directory.
Once the SAO change is complete, follow the above resolution steps.
The high-level architecture of the integration looks like this:
Qlik Sense Advanced Analytics Integration is essentially an extension to Qlik Sense's expression syntax, so it can be used both in chart expressions and in load script expressions.
With this capability, we can add syntax to a chart expression telling Qlik Sense that a particular expression should not be evaluated on the Qlik Sense server; instead, all the information and data needed to calculate that expression is sent via the server-side extension to the backend R system for calculation.
After the advanced analytic calculations are completed, the data is sent back to the Qlik Sense Server and to the client for visualization.
This video shows an example of how Qlik Sense connects to an R server for extending expression capabilities in Qlik Sense Apps while offloading the calculations to the R server engine.
Click here for Video Transcript
To display a simple "Hello World" in Qlik Sense using an R script, we will do the following:
1. Have R and RStudio installed on your system (the RGui included with R for Windows can also be used). R can be downloaded at https://cloud.r-project.org/
2. We need an R package that exposes R functionality to applications via TCP/IP. The package name is "Rserve".
Install the package by running the following command in the RStudio console:
install.packages('Rserve')
3. Now we need to load that library and start Rserve. To do so, execute the following:
library(Rserve)
Rserve()
4. The communication from Sense to R is handled using gRPC. R is not a supported language in gRPC by default, so a possible solution is to develop a connector in one of gRPC's supported languages. Qlik provides an open-source connector developed in C#, which in turn accesses Rserve to run R scripts.
qlik-oss/sse-r-plugin
Once you have built the connector, start SSEtoRserve.exe (ideally on the Rserve server itself).
Note: Qlik Support does not support this plugin directly. Inquiries should be submitted via GitHub under sse-r-plugin - Issues
5. Now we will have to configure the plugin:
Add the following line in the settings.ini file:
SSEPlugin=R,localhost:50051
The settings.ini is located in this location:
a. In the QMC, add a new Analytic Connection.
b. Restart the Qlik Sense Engine service.
Please refer to the screenshot below for creating a new connection.
Note: If the R plugin (SSEtoRserve.exe) was installed on the R server (where Rserve runs) or another machine, point to that machine's name instead of 'localhost'. Likewise, in multi-node environments with multiple Qlik Sense Engines, even if the plugin was installed on the central node, use the central node's hostname instead of 'localhost', as the Engine services on the rim nodes need the correct DNS/NetBIOS name to reach the plugin.
6. Now open a Qlik Sense app and add a KPI object on the sheet. This can be one of the apps included with the plugin itself under <storage path>\sse-r-plugin-master\sense_apps
Note that the example apps also need data connections created to the data files included alongside these apps in the above location.
7. Otherwise, a new app can be created and any data may be loaded for the SSE example below.
8. For the measure, add the following expression which contains an R-script:
R.ScriptEvalStr('paste(q$firstWord, q$secondWord);', 'Hello' as firstWord, 'World' as secondWord)
9. If everything is configured properly, the R script in the expression above should execute and display a "Hello World" message.
To read the values from data model fields instead of string literals, the equivalent expression is:
R.ScriptEvalStr('paste(q$firstWord, q$secondWord);', Only([First Word]) as firstWord, Only([Second Word]) as secondWord)
Eight script functions are automatically added to the functionality of the plugin. On the plugin side, all that needs to be implemented to fulfill this functionality is the EvaluateScript RPC function.
The syntax of these functions is:
<EngineSSEName>.<FunctionName>(Script [,Parameter...])
where Script is an R script to be evaluated and Parameter is the data sent from Qlik's end.
Here, we use the ScriptEvalStr function, which accepts arguments of type String and returns a String. The paste function in R concatenates vectors after converting them to character. We pass two string fields from Qlik (First Word and Second Word); the R script references these data fields through the q data frame (a structure the plugin prepares on the R side) as q$firstWord and q$secondWord. The script finally returns a String back to Qlik Sense.
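To make the data handoff concrete, here is a small Python sketch (an illustrative stand-in, not the plugin's actual code) of how the parameter values sent from Qlik become addressable as columns of the q data frame, and what paste does with them:

```python
# Illustrative stand-in for the SSE data handoff (assumption: this is a
# simplification, not the C# plugin's real marshalling code). Qlik
# evaluates the parameter expressions and sends the values as rows; on
# the R side the script sees them as columns of a data frame named q.
rows = [("Hello", "World")]

# q$firstWord / q$secondWord in R correspond to these columns:
q = {
    "firstWord": [first for first, _ in rows],
    "secondWord": [second for _, second in rows],
}

# R's paste() joins its arguments element-wise with a space by default,
# so paste(q$firstWord, q$secondWord) yields "Hello World".
result = [f"{a} {b}" for a, b in zip(q["firstWord"], q["secondWord"])]
print(result[0])  # Hello World
```

The same column-wise view explains the field-based variant: Only([First Word]) as firstWord simply labels the Qlik value so it lands in the firstWord column of q.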