Discussion Board, Documents and Information related to Qlik Support.
Support Chat
Have a problem with your Qlik Account? We’re here to help!
Start a conversation with our trained Customer Experience experts for Account and Login issues!
Knowledge
Search or browse our knowledge base to find answers to your questions. The content is curated and updated by our global Support team.
Support Case Portal
Request secure assistance for your complex and critical issues. Our Support team is here to help you effectively troubleshoot your organization's specific needs.
Support Updates Blog
Read and subscribe to the Support Updates blog. Here you will learn about new service releases, end of product support and general support topics.
Q&A with Qlik
Formerly known as Talk to Experts Tuesday, Q&A with Qlik is our newly rebranded live Q&A chat! Meet the Qlik Team and have your questions answered in real time.
Techspert Talks
Hear directly from Qlik Techsperts! Formerly known as Support Techspert Thursday, Techspert Talks is a free webinar held on the third Thursday of each month to facilitate knowledge sharing.
Qlik Fix
Qlik Fix is a series of short videos for Qlik customers and partners. It is intended to provide information so that you can solve problems quickly.
The chart below lists the release date and end of support (EOS) date for all QlikView product releases. For more information, please see Qlik's On-Premise Products Release Management Policy.
Release | Release Date | EOS Date |
12.70 (May 2022) | May 10, 2022 | May 10, 2024 |
12.60 (May 2021) | May 25, 2021 | May 25, 2023 |
12.50 (April 2020) | April 27, 2020 | April 27, 2022 |
12.40 (April 2019) | April 29, 2019 | April 27, 2022 |
12.30 (November 2018) | November 6, 2018 | April 30, 2021 |
12.20 (November 2017) | November 14, 2017 | November 30, 2020 |
12.1 | November 17, 2016 | November 14, 2019 |
12.00 | December 8, 2015 | November 16, 2018 |
11.2 | December 12, 2012 | December 31, 2020 |
The chart below lists the release date and end of support (EOS) date for all Qlik Sense Enterprise on Windows product releases. For more information, please see Qlik's On-Premise Products Release Management Policy.
Release | Release Date | EOS Date |
February 2022 | February 15, 2022 | February 15, 2024 |
November 2021 | November 8, 2021 | November 8, 2023 |
August 2021 | August 23, 2021 | August 23, 2023 |
May 2021 | May 10, 2021 | May 10, 2023 |
February 2021 | February 9, 2021 | February 9, 2023 |
November 2020 | November 10, 2020 | November 10, 2022 |
September 2020 | September 9, 2020 | September 9, 2022 |
June 2020 | June 10, 2020 | June 10, 2022 |
April 2020 | April 14, 2020 | April 14, 2022 |
February 2020 | February 25, 2020 | February 25, 2022 |
November 2019 | November 11, 2019 | November 11, 2021 |
September 2019 | September 30, 2019 | September 30, 2021 |
June 2019 | June 28, 2019 | June 28, 2021 |
April 2019 | April 25, 2019 | April 25, 2021 |
February 2019 | February 12, 2019 | February 12, 2021 |
November 2018 | November 13, 2018 | November 13, 2020 |
September 2018 | September 11, 2018 | September 11, 2020 |
June 2018 | June 26, 2018 | June 26, 2020 |
April 2018 | April 19, 2018 | April 19, 2020 |
February 2018 | February 13, 2018 | February 13, 2020 |
November 2017 | November 14, 2017 | November 13, 2019 |
September 2017 | September 19, 2017 | September 19, 2019 |
June 2017 | June 29, 2017 | June 29, 2019 |
3.0/3.1/3.2 | June 28, 2016 / September 20, 2016 / February 9, 2017 | June 28, 2018 / September 20, 2018 / February 9, 2019 |
Qlik has been diligently reviewing and testing our product suite since becoming aware of the Apache Log4j vulnerability in mid-December. We want to assure Qlik users that your security is our utmost priority. We have addressed multiple vulnerabilities through a series of product patches for supported affected versions, and we recommend you update to the most recent releases available, shown in the chart below.
Log4j versions before v2.16 presented the highest threat. All exposed Qlik products have received patches containing at least v2.16 and will be updated to v2.17.1 or later under the regular release schedule, as Qlik is not vulnerable to the CVEs related to v2.17.0.
Should you have any further questions, please review our FAQ document, and we encourage you to comment with any additional questions.
The following products are not affected:
The following products are under review:
The following products are affected. Qlik has provided patches linked here; customers are advised to install the patches at their earliest convenience.
Downloads can be accessed by visiting our new Downloads page on Qlik Community while signed in with your Qlik ID, then selecting the product and the latest release.
Affected Product Version | CVE-2021-44228 | CVE-2021-45046 | CVE-2021-45105 | CVE-2021-44832 | Recommended Action | Log4j Version included in patch |
Compose 2021.8 | Vulnerable, solved by patch | Vulnerable, solved by patch | Not vulnerable | Not vulnerable | Install 2021.8 SR01 | Up to 2.16.0 |
Compose 2021.5 | Vulnerable, solved by patch | Vulnerable, solved by patch | Not vulnerable | Not vulnerable | Install 2021.5 SR01 | Up to 2.16.0 |
Compose 2021.2 | Vulnerable, solved by patch | Vulnerable, solved by patch | Not vulnerable | Not vulnerable | Install 2021.2 SR01 | Up to 2.16.0 |
C4DW 7.0 | Vulnerable, solved by patch | Vulnerable, solved by patch | Not vulnerable | Not vulnerable | Install 7.0 2021 SR04 | Up to 2.16.0 |
C4DW 6.6.1 | Vulnerable, solved by patch | Vulnerable, solved by patch | Not vulnerable | Not vulnerable | Install 6.6.1 SR03 | Up to 2.16.0 |
C4DW 6.6 | Vulnerable, solved by patch | Vulnerable, solved by patch | Not vulnerable | Not vulnerable | Install 6.6.0 SR06 | Up to 2.16.0 |
C4DL 6.6 | Vulnerable, solved by patch | Vulnerable, solved by patch | Not vulnerable | Not vulnerable | Install 6.6.0 SR09 | Up to 2.16.0 |
Replicate 2021.11 | Vulnerable, solved by patch | Vulnerable, solved by patch | Not vulnerable | Not vulnerable | Install version published 22 Dec 2021 | Up to 2.16.0 |
Replicate 2021.5 | Vulnerable, solved by patch | Vulnerable, solved by patch | Not vulnerable | Not vulnerable | Install 2021.5 SR05 | Up to 2.16.0 |
Replicate 7.0 | Vulnerable, solved by patch | Vulnerable, solved by patch | Not vulnerable | Not vulnerable | Install 7.0.0 SR05 | Up to 2.16.0 |
Replicate 6.6 | Vulnerable, solved by patch | Vulnerable, solved by patch | Not vulnerable | Not vulnerable | Install 6.6.0 SR06 | Up to 2.16.0 |
QEM 2021.11 | Vulnerable, solved by patch | Vulnerable, solved by patch | Not vulnerable | Not vulnerable | Install version published 22 Dec 2021 | Up to 2.16.0 |
QEM 2021.5 | Vulnerable, solved by patch | Vulnerable, solved by patch | Not vulnerable | Not vulnerable | Install 2021.5 SR05 | Up to 2.16.0 |
QEM 7.0 | Vulnerable, solved by patch | Vulnerable, solved by patch | Not vulnerable | Not vulnerable | Install 7.0.0 SR05 | Up to 2.16.0 |
QEM 6.6 | Vulnerable, solved by patch | Vulnerable, solved by patch | Not vulnerable | Not vulnerable | Install 6.6.0 SR03 | Up to 2.16.0 |
Catalog 4.12.0, 4.12.1 | Vulnerable, solved by patch | Vulnerable, solved by patch | Vulnerable, solved by patch | Not vulnerable, JDBC Appender not configured | Install 4.12.2 | Up to 2.17.0 |
Catalog 4.11.0, 4.11.1 | Vulnerable, solved by patch | Vulnerable, solved by patch | Vulnerable, solved by patch | Not vulnerable, JDBC Appender not configured | Install 4.11.2 | Up to 2.17.0 |
Catalog 4.10.0, 4.10.1, 4.10.2 | Vulnerable, solved by patch | Vulnerable, solved by patch | Vulnerable, solved by patch | Not vulnerable, JDBC Appender not configured | Install 4.10.3 | Up to 2.17.0 |
GeoAnalytics Server - 4.32.3 and 4.23.4 | Vulnerable, solved by patch | Vulnerable, solved by patch | Vulnerable, solved by patch | Vulnerable, solved by patch | Install 4.32.5 | Up to 2.17.1 |
GeoAnalytics Server - 4.27.3 - 4.19.1 | Vulnerable, solved by patch | Vulnerable, solved by patch | Vulnerable, solved by patch | Vulnerable, solved by patch | Install 4.27.4 – 4.19.2 | Up to 2.17.1 |
GeoAnalytics Plus - 5.31.1 and 5.31.2 | Vulnerable, solved by patch | Vulnerable, solved by patch | Vulnerable, solved by patch | Vulnerable, solved by patch | Install 5.31.3 | Up to 2.17.1 |
GeoAnalytics Plus - 5.30.1-5.29.4 | Vulnerable, solved by patch | Vulnerable, solved by patch | Vulnerable, solved by patch | Vulnerable, solved by patch | Install 5.30.2 – 5.29.5 | Up to 2.17.1 |
GeoAnalytics Plus - 5.28.2-5.27.5 | Vulnerable, solved by patch | Vulnerable, solved by patch | Vulnerable, solved by patch | Vulnerable, solved by patch | Install 5.28.3 – 5.27.6 | Up to 2.17.1 |
GeoAnalytics Plus - 5.26.5 | Vulnerable, solved by patch | Vulnerable, solved by patch | Vulnerable, solved by patch | Vulnerable, solved by patch | Install 5.26.6 | Up to 2.17.1 |
Please keep in mind that Qlik's on-premise (or client-managed) data integration products are intended to be accessed only on an internal network; therefore, any potential impact of CVE-2021-44228 should be mitigated by your internal network and access controls.
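As a quick sanity check on client-managed hosts, one way to locate bundled Log4j core libraries is a filesystem scan along these lines. The scan root is an example, not a Qlik-documented path, and any hits should be assessed against vendor guidance rather than modified by hand; the official patches above remain the recommended fix.

```shell
#!/bin/sh
# Illustrative scan for bundled Log4j core jars.
# SCAN_ROOT is an example install location -- adjust for your environment.
SCAN_ROOT="${SCAN_ROOT:-/opt/qlik}"
find "$SCAN_ROOT" -type f -name 'log4j-core-*.jar' 2>/dev/null
```

The version number embedded in each file name (e.g. `log4j-core-2.16.0.jar`) can then be compared against the "Log4j Version included in patch" column above.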
For information on supported versions, please visit the Product Support Lifecycle
Please subscribe to our Support Updates blog for continued updates.
Thank you for choosing Qlik,
Qlik Global Support
Change Log:
Dec. 13, 2021 12:15pm EST: Updated to specify which versions applied to not affected products; added changelog.
Dec. 13, 2021 3:15pm EST: Updated to specify which versions are affected with steps to mitigate and which products we are still evaluating.
Dec. 14, 2021 2:10pm EST: Added Qlik Catalog, Blendr, and Qlik Data Transfer to reviewed list. Added mitigation steps for Qlik Catalog.
Dec. 16, 2021 1:15pm EST: Updated Catalog version details in Patch schedule.
Dec. 20, 2021 1:15pm EST: Updated top post for status of CVE-2021-45105 and changed language around Catalog to 'Hotfix', with full version patches to be published in early Jan. 2022.
Dec. 21, 2021 3:45pm EST: Updated Catalog to be 'Service Releases' with full version 2.17 published to downloads page.
This Techspert Talks session addresses:
00:00 - Intro
01:22 - Multi-Node Architecture Overview
04:10 - Common Performance Bottlenecks
05:38 - Using iPerf to measure connectivity
09:58 - Performance Monitor Article
10:30 - Setting up Performance Monitor
12:17 - Using Relog to visualize Performance
13:33 - Quick look at Grafana
14:45 - Qlik Scalability Tools
15:23 - Setting up a new scenario
18:26 - A look at the QSST Analyzer App
19:21 - Optimizing the Repository Service
21:38 - Adjusting the Page File
22:08 - The Sense Admin Playbook
23:10 - Optimizing PostgreSQL
24:29 - Log File Analyzer
27:06 - Summary
27:40 - Q&A: How to evaluate an application?
28:30 - Q&A: How to fix engine performance?
29:25 - Q&A: What about PostgreSQL 9.6 EOL?
30:07 - Q&A: Troubleshooting performance on Azure
31:22 - Q&A: Which nodes consume the most resources?
31:57 - Q&A: How to avoid working set breaches on engine nodes?
34:03 - Q&A: What do QRS log messages mean?
35:45 - Q&A: What about QlikView performance?
36:22 - Closing
Resources:
Qlik Help – Deployment examples
Using Windows Performance Monitor
PostgreSQL Fine Tuning starting point
Qlik Sense Shared Storage – Options and Requirements
Qlik Help – Performance and Scalability
Q&A:
Q: Recently I'm facing Qlik Sense proxy server RAM overload, although there are 4 nodes and each node has 16 CPUs and 256 GB. We have done app optimization, like deleting duplicate apps, removing old data, removing unused fields... but RAM status is still not good. What is next to fix the performance issue? Add more nodes?
A: It depends on what you mean by “RAM status still not good”. Qlik Data Analytics software will allocate and use memory within the limits established and does not release this memory unless the Low Memory Limit has been reached and the cache needs cleaning. If RAM consumption remains high but there are no other effects, your system is working as expected.
Q: Similar to other databases, do you think we need to perform fine-tuning and clean up bad records within PostgreSQL, e.g. once per year?
A: Periodic cleanup, especially in a rapidly changing environment, is certainly recommended. A good starting point: set your Deleted Entity Log table cleanup settings to appropriate values, and avoid cleanup tasks kicking in before the morning user ramp-up.
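As a rough illustration of such periodic maintenance, a scheduled vacuum/analyze pass against the repository database could look like the following. The database name `QSR`, port, and credentials are assumptions about a typical Qlik Sense bundled PostgreSQL setup; verify them against your own installation, and run this only in a maintenance window.

```shell
#!/bin/sh
# Hypothetical maintenance pass against the Qlik Sense repository database.
# Host, port, user, and database name are examples -- adapt to your site.
PGPASSWORD="$REPO_DB_PASSWORD" psql -h localhost -p 4432 -U postgres -d QSR \
  -c "VACUUM (VERBOSE, ANALYZE);"
```

Scheduling this overnight (e.g. via Task Scheduler or cron) keeps it clear of the morning ramp-up the answer above warns about.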
Q: Does QlikView Server perform similarly to Qlik Sense?
A: It uses the same QIX Engine for data processing. There may be performance differences to the extent that QVW Documents and QVF Apps are completely different concepts.
Q: Is there a simple way (better than restarting QS services) to clean the cache, because cache around 90% slows down QS?
A: It’s not quite that simple. Qlik Data Analytics software (and by extension, your users) benefits from keeping data cached as long as possible. This way, users consume pre-calculated results from memory instead of computing the same results over and over. Active cache clearing is detrimental to performance. High RAM usage is entirely normal, based on the Memory Limits defined in the QMC. You should not expect Qlik Sense (or QlikView) to manage memory like regular software. If work stops, this does not mean memory consumption will go down; we expect to receive and serve more requests, so we keep as much cached as possible. Long-winded, but I hope this sets better expectations when considering “bad performance” without the full technical context.
Q: When CPU hits 100%, how do we know what the culprit is, for example too many concurrent users loading apps/datasets, or multiple apps' QVDs reloading? Can we see that anywhere?
A: We will provide links to the Log Analysis app I demoed during the webinar, this is a great place to start. Set Repository Performance logs to DEBUG for the QRS performance part, start analysing service resource usage trends and get to know your user patterns.
Q: Can there be repository connectivity issues with too many nodes?
A: You can only grow an environment so far before hitting physical limits to communication. As a best practice, with every new node added, QRS Connection Pools and DB connectivity should be reviewed and increased where necessary. The most usual problem here is that you have added more nodes than the connections allowed to the DB or Repository Services. This will almost guarantee communication issues.
Q: Do the Qlik Scalability Tools measure browser rendering time as well, or do they just work at the API layer?
A: Excellent question, it only evaluates at the API call/response level. For results that include browser-side rendering, other tools are required (LoadRunner, complex to set up, expert help needed).
Transcript:
Hello everyone and welcome to the November edition of Techspert Talks. I’m Troy Raney and I’ll be your host for today's session. Today's presentation is Optimizing Performance for Qlik Sense Enterprise with Mario Petre. Mario why don't you tell us a little bit about yourself?
Hi everyone; good to be here with everybody once again. My name is Mario Petre. I’m a Principal Technical Engineer in the Signature Support Team. I’ve been with Qlik over six years now and since the beginning, I’ve focused on Qlik Sense Enterprise backend services, architecture and performance from the very inception of the product. So, there's a lot of historical knowledge that I want to share with you and hopefully it's an interesting springboard to talk about performance.
Great! Today we're going to be talking about how a Qlik Sense site looks from an architectural perspective; what are things that should be measured when talking about performance; what to monitor after going live; how to troubleshoot and we'll certainly highlight plenty of resources and where to find more details at the end of the session. So Mario, we're talking about performance for Qlik Sense Enterprise on Windows; but ultimately, it's software on a machine.
That's right.
So, first we need to understand what Qlik Sense services are and what type of resources they use. Can you show us an overview from what a multi-node deployment looks like?
Sure. We can take a look at how a large Enterprise environment should be set up.
And I see all the services have been split out onto different nodes. Would you run through the acronyms quickly for us?
Yep. On a consumer node this is where your users come into the Hub. They will come in via the Qlik Proxy Service and consume applications via the Qlik Engine Service, that ultimately connects to the central node and everything else via the Qlik Repository Service.
Okay.
The green box is your front-end services. This is what end users tap into to consume data, but what facilitates that in the background is always the Repository Service.
And what's the difference between the consumer nodes on the top and the bottom?
These two nodes have a Proxy Service that balances against their own engines as well as other engines; while the consumer nodes at the bottom are only there for crunching data.
Okay.
And then we can take a look at the backend side of things. Resources are used to the extent that you're doing reloads; you will have an engine there as well as the primary role for the central node, active and failover, which is the Repository Service, to coordinate communication between all the rest of the services. You can also have a separate node for development work. And ultimately, we expect an environment of this size to have a dedicated storage solution and a dedicated central Repository Database host, either locally managed or in one of the cloud providers, like AWS RDS for example.
Between the front-end and back-end services where's the majority of resource consumption, and what resources do they consume?
Most of the resource allocation here is going to go to the Engine Service; and that will consume CPU and RAM to the extent that it's allocated to the machine. And that is done at the QMC level where you set your Working Set Limits. But in the case of the top nodes, the Proxy Service also has a compute cost as it is managing session connectivity between the end user's browser and the Engine Service on that particular server. And the Repository Service is constantly checking the authorization and permissions. So, ultimately front-end servers make use of both front-end and back-end resources. But you also need to think about connectivity. There is the data streaming from storage to the node where it will be consumed and then loading from that into memory. And these are three different groups of resources: you have compute; you have memory, and you have network connectivity. And all three have to be well suited for the task for this environment to work well.
And we're talking about speed and performance like, how fast is a fast network? How can we even measure that?
So, for any Enterprise environment, we would start at a 10 Gb network speed, and ultimately we expect a response time of 4 ms between any node and the storage back end.
Okay. So, what are some common bottlenecks and issues that might arise?
All right. So, let's take a look at some examples. The Repository Service failing to communicate with rim nodes or with local services: I would immediately try to verify that the Repository Service connection pool and network connectivity are stable and connected. Let's say apps load very, very slowly for the first time. This is where network speed really comes into play. Another example: the QMC or the Hub takes a very long time to load. And for that, we would have to look into the communication between the Repository Service and the Database, because that's where we store all of the metadata that we calculate your permissions from.
And could that also be related to the rules that people have set up and the number of users accessing?
Absolutely. You can hurt user experience by writing complex rules.
What about lag in the app itself?
This is now being consumed by the Engine Service on the consumer node. So, I would immediately try to evaluate resource consumption on that node, primarily CPU. Another great example is high Page File usage. We prefer memory for working with applications. So, as soon as we try to cache and pull those results from disk instead, performance will suffer. And ultimately, the direct connectivity: how good and stable is the network between the end user's machine and the Qlik Sense infrastructure? The symptom will be on the end user side, but the root cause almost always (I mean 99.9% of the time) will be down to some effect in the environment.
So, to get an understanding of how well the machine works and establish that baseline, what can we use?
One simple way to measure this (CPU, RAM, disk network) is this neat little tool called iPerf.
Okay. And what are we looking at here?
This is my central node.
Okay. And iPerf will measure what exactly?
How fast data transfer is between this central node and a client machine or another server.
And where can people find iPerf?
Great question. iPerf.fr
And it's a free utility, right?
Absolutely.
So, I see you've already got it downloaded there.
Right. You will have to download this package, both on the server and the client machine that you want to test between. We'll run this “As Admin.” We call out the command; we specify that we want it to start in “server mode.” This will be listening for connection attempts.
Okay.
We can define the port. I will use the default one. Those ports can be found in Qlik Help.
Okay.
The format for the output in megabyte; and the interval for refresh 5 seconds is perfectly fine. And then, we want as much output as possible.
Okay.
First, we need to run this. There we go. It started listening. Now, I’m going to switch to my client machine.
So, iPerf is now listening on the server machine and you're moving over to the client machine to run iPerf from there?
Right. Now, we've opened a PowerShell window into iPerf on the client machine. Then we call the iPerf command. This time, we're going to tell it to launch in “Client Mode.” We need to specify an IP address for it to connect to.
And that's the IP address of the server machine?
Right. Again, the port; the format so that every output is exactly the same. And here, we want to update every second.
Okay.
And this is a super cool option: if we use the bytes flag, we can specify the size of the data payload. I’m going to go with a 1 Gb file (1024 Mb). You can also define parallel connections. I want 5 for now.
So, that's like 5 different users or parallel streams of activity of 1 Gb each between the server machine and this client machine?
Right. So, we actually want to measure how fast can we acquire data from the Qlik Sense server onto this client machine. We need to reverse the test. So, we can just run this now and see how fast it performs.
Okay. And did the server machine react the same way?
You can see that it produced output on the listening screen. This is where we started. And then it received and it's displaying its own statistics. And if you want to automate this, so that you have a spot check of throughput capacity between these servers, we need to use the log file option. And then we give it a path. So, I’m gonna say call this “iperf_serverside…” And launch it. And now, no output is produced.
Okay.
So, we can switch back to the client machine.
Okay. So, you're performing the exact same test again, just storing everything in a log file.
The test finished.
Okay. So, that can help you compare between what's being sent to what's being received, and see?
Absolutely. You can definitely have results presented in a way that is easy to compare across machines and across time. And initial results gave us a throughput per file of around 43.6, 46, thereabouts megabytes per second.
So, what about for an end user who's experiencing issues? Can you use iPerf to test the connectivity from a user machine on a different network?
Yep. So, in the background we will have our server; it's running and waiting for connections. And let's run this connection now from the client machine. We will make sure that the IP address is correct; default port; the output format in megabytes; we want it refreshed every second; and we are transferring 1 Gb; and 5 parallel streams in reverse order. Meaning: we are copying from the server to the client machine. And let's run it.
Just seeing those numbers, they seem to be smaller than what we're seeing from the other machine.
Right. Indeed. I have some stuff in between to force it to talk a little slower. But this is one quick way to identify a spotty connection. This is where a baseline becomes gold; being able to demonstrate that your platform is experiencing a problem. And to quantify and to specify what that problem is going to reduce the time that you spend on outages and make you more effective as an admin.
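The iPerf test walked through in this segment can be summarized roughly as follows, using iperf3 syntax; the port, payload size, stream count, and log file name mirror the demo but are examples you can adjust:

```shell
# On the Qlik Sense server: listen for test connections
# (default port, output in MBytes, 5-second refresh, verbose).
iperf3 -s -p 5201 -f M -i 5 -V

# On the client: 5 parallel streams of 1 GB each, reverse mode
# (server sends to client), results appended to a log file.
iperf3 -c <server-ip> -p 5201 -f M -i 1 -n 1024M -P 5 -R \
  --logfile iperf_clientside.log
```

Running the same invocation on a schedule and keeping the log files gives you the throughput baseline Mario describes, so a "spotty connection" shows up as a deviation rather than a guess.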
Okay. That was network. How can admins monitor all the other performance aspects of a deployment? What tools are available and what metrics should they be measuring?
Right. That's a great question. The very basic is just Performance Monitor from Windows.
Okay.
The great thing about that is that we provide templates that also include metrics from our services.
Can you walk us through how to set up the Performance Monitor using one of those templates?
Sure thing. So, we're going to switch over first to the central node. So, the first thing that I want to do is create a folder where all of these logs will be stored.
Okay. So, that's a shared folder, good.
And this article is a great place to start. So, we can just download this attachment
So, now it's time to set up a Performance Monitor proper. We need to set up a new Data Collector Set.
Giving it a name.
And create from template. Browse for it, and finish.
Okay. So it’s got the template. That's our new one Qlik Sense Node Monitor, right?
Yep. You'll have multiple servers all writing to the same location. The first thing is to define the name of each individual collector; and you do that here. You can also provide a subdirectory for these collectors, and I suggest having one per node name. I will call this Central Node.
Everything that comes from this node, yeah.
Correct. You can also select a schedule for when to start these. We have an article on how to make sure that Data Collectors are started when Windows starts. And then a stop condition.
Now, setting up monitors like this; could this actually impact performance negatively?
There is always an overhead to collecting and saving these metrics to a file. But the overhead is negligible.
Okay.
I am happy with how this is defined. Now, this static collector on one of the nodes is already set up. There is an option here that's called Data Manager. What's important here to define is to set a Minimum Free Disk. We could go with 10 Gb, for example; and you can also define a Resource Policy. The important bit is Minimum Free Disk. We want to Delete the Oldest (not the largest) in the Data Collector itself. We should change that directory and make sure that it points to our central location instead of locally; and we'll have to do this for every single node where we set this up.
Okay. So, that's that shared location?
Yep.
And you run the Data Collector there. And it creates a CSV file with all those performance counters. Cool.
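For admins who prefer scripting this over the Performance Monitor GUI, Windows' built-in logman tool can create an equivalent counter collector from an elevated prompt. The collector name, counter file, sample interval, and output path below are illustrative, not taken from the demo:

```shell
# Create a counter collector that samples every 5 seconds and writes CSV
# to a shared location (names and paths are examples).
logman create counter QlikNodeMonitor -cf counters.txt -si 00:00:05 -f csv -o "\\storage\PerfLogs\CentralNode"

# Start collecting.
logman start QlikNodeMonitor
```

The counters.txt file would hold one performance counter path per line, e.g. the Qlik service and system counters from the template attached to the article mentioned above.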
So, here we have it now. If we just take a very quick look inside, we'll see a whole bunch of metrics. And if you want to visualize these really really quick, I can show you a quick tip that wasn't on the agenda but since we're here: on Windows, there is a built-in tool called Relog that is specifically designed for reformatting Performance Monitor counters. So, we can use Relog; we'll give it the name of this file; the format will be Binary; the output will be the same, but we'll rename it to BLG; and let's run it.
And now it created a copy in Binary format. Cool thing about this Troy is that: you can just double click on it.
It's already formatted to be a little more readable. Wow! Check that out.
There we go. Another quick tip: since we're here, first thing to do is: select everything and Scale; just to make sure that you're not missing any of the metrics. And this is also a great way to illustrate which service counters and system counters we collect. As you can see, there's quite a few here.
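The Relog conversion demonstrated here boils down to a single command; the file names are the ones used in the demo and are, of course, examples:

```shell
# Convert the Performance Monitor CSV log into binary .blg format,
# which opens directly in the Performance Monitor viewer on double-click.
relog QlikSenseNodeMonitor.csv -f bin -o QlikSenseNodeMonitor.blg
```

Relog also accepts `-f csv` and `-f tsv` for the reverse direction, which is handy if you later want to load the counters into a Qlik app for analysis.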
Okay. So, that Performance Monitor is, it's set up; it's running; we can see how it looks; and that is going to run all the time or just when we manually trigger it?
You can definitely configure it to run all the time, and that would be my advice. Its value is really realized as a baseline.
Yeah. Exactly. That was pretty cool seeing how that worked, using all the built-in utilities. And that Relog formatting for the Process Monitor was new to me. Are there any other tools you like to highlight?
Yeah. So, Performance Monitor is built-in. For larger Enterprises that may already be monitoring resources in a centralized way, there's no reason why you shouldn't expect to include the Sense resources into that live monitoring. And this could be done via different solutions out there. A few come to mind like: Grafana, Datadog, Butler SOS, for example from one of our own Qlik luminaries.
Can we take a quick look at Grafana? I’ve heard of that but never seen it.
Sure thing. This is my host monitor sheet. It's nowhere built to a corporate standard, but you can see here I’m looking at resources for the physical host where these VMs are running as well as the domain controller, and the main server where we've been running our CPU tests. And the great part about this is I have historical data as far back I believe as 90 days.
So, this is a cool tool that lets you like take a look at the performance and zoom-in and find the processes that might be causing some peaks or anything you want to investigate?
Right. Exactly. At least come up with a narrow time frame for you to look into with the other tools, and again narrow down the window of your investigation.
Yeah, that could be really helpful. Now I wanted to move on to the Qlik Sense Scalability Tools. Are those available on Qlik community?
That's right. Let me show you where to find them. You can see that we support all current versions including some of the older ones. You will have to go through and download the package and the applications used for analysis afterwards. There is a link over here. So, once the package is downloaded, you will get an installer. And the other cool thing about Scalability Tools is that you can use it to pre-warm the cache on certain applications since Qlik Sense Enterprise doesn't support application pre-loading.
Oh, cool. So, you can throttle up applications into memory like in QlikView. Can we take a look at it?
Yes, absolutely. This is the first thing that you'll see. We'll have to create a new connection. So, I’ll open a simple one that I’ve defined here and we can take a look at what's required just to establish a quick connection to your Qlik Sense site.
Okay, but basically the scenario that you're setting up will simulate activity on a Qlik Sense site to test its performance?
Exactly. You'll need to define your server hostname. This can be any of your proxy nodes in the environment. The virtual proxy prefix. I’ve defined it as Header and authentication method is going to be WebSocket.
Okay.
And then, if we want to look at how virtual users are going to be injected into the system, scroll over here to the user section. Just for this simple test, I’ve set it up for User List where you can define a static list of users like so: User Directory and UserName.
Okay. So, it's going to be taking a look at those 2 users you already predefined and their activity?
Exactly. We need to test the connection to make sure that we can connect to the system. Connection Successful. And then we can proceed with the scenario. This is very simple but let me show you how I got this far. So, the very first thing that we should do is to Open an App.
So, you're dragging away items?
Yep. I’m removing actions from this list. Let's try to change the sheet. A very simple action. And now we have four sheets, and we'll go ahead and select one of them.
Okay, so far, we have Opening the App and immediately changing to a sheet?
Yep. That's right. This will trigger actions in sequence exactly how you define them. It will not take into consideration things like Think Time. I will just define a static wait of 15 seconds, and then you can make selections.
But this is an amazing tool for being able to kind of stress test your system.
It's very, very useful, and it also provides a huge amount of detail in the results that it produces. One other quick tip: while defining your scenario, use easy-to-read labels so that you can identify them in the Results Application. Let's assume that the scenario is defined. We will go ahead and add one last action, and that is to Disconnect the app. We'll call this one “OpenApp” and this one “SheetChange.” Make sure you Save. We've tested the connection and defined our list of users. Before we can run the scenario, there is one more step: configure an Executor that will use this scenario file to launch a workload against our system. Create a New Sequence.
This is just where all these settings you're defining here are saved?
Correct. This is simply a mapping between the execution job that you're defining and which script scenario should be used. We'll go ahead and grab that. Save it again; and now we can start it. And now in the background if we were to monitor the Qlik Sense environment, we would see some amount of load coming in. We see that we had some kind of issue here: empty ObjectID. Apparently I left something in the script editor; but yeah, you kind of get the idea.
So, all this performance information would then be loaded into an app that is part of the package downloaded from Qlik community. How does that look?
So, here you will see each individual result set, and you can look at multiple exerciser runs in a single application. Unfortunately, we don't have more than one here to showcase that, but you would see multiple colored lines. There are metrics for a little bit of everything: your session ramp, your throughput by minute, and you can change these.
CPU, RAM. This is great.
Exactly. CPU and RAM. These are not connected; we don't have those logs, but you would have them for a run on your own system. These come from Performance Monitor as well, so you could just use those logs, provided that the right template is in place. We see Response Time Distribution by Action, and these are the ones that I've asked you to rename so that they're easy to understand.
Once your deployment is large enough to need to be multi-node and the default settings are no longer the best ones for you, what needs to be adjusted in the Repository Service to keep it from choking, or to improve its performance?
That's a great question Troy. So, the first thing that we should take a look at is how the Repository communicates with the backend Database and vice versa. The connection pool for the Repository is always based on core count on the machine. And the best rule of thumb that we have to date is to take your core count on that machine, multiply it by 5, and that will be the max connection pool for the Repository Service for that node.
Can you show us where that connection pool setting can be changed?
Yes. So, we will go ahead and take a look. Here we are on the central node of my environment. You'll have to find your Qlik installation folder. We'll navigate to the Repository folder, Util, QlikSenseUtil, and we'll have to launch this “As Admin.”
Okay.
We'll have to come to the Connection String Editor. Make sure that the path matches. We just have to click on Read so that we get the contents of these files. And the setting that we are about to change is this one.
Okay. So, the maximum number of connections that the Repository can make?
Yes. And this is (again) for each node going towards the Repository Database.
Okay.
Again, this should be your CPU core count multiplied by 5. If 90 is higher than that result, leave 90 in place. Never decrease it.
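The rule of thumb above can be sketched as a quick calculation (a hypothetical helper, not part of any Qlik tooling):

```python
def repository_max_pool(cpu_cores: int, default: int = 90) -> int:
    """Max connection pool for the Repository Service on one node.

    Rule of thumb from the session: cores x 5, but never below the
    shipped default of 90 -- never decrease the value.
    """
    return max(cpu_cores * 5, default)

# A 24-core node gets 120; a small 8-core node keeps the default 90.
print(repository_max_pool(24), repository_max_pool(8))  # -> 120 90
```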
Okay, that's a good tip.
Right. I'll change this to 120 and Save. What I like to do here is clear the screen and hit Read again, just to make sure that the changes have been persisted to the file.
Okay.
Once that's done, we can close this. We can restart the environment. We can get out of here.
So, there you adjusted the setting of how many connections this node can make to the QSR. Then assuming we do the same on all nodes, where do we adjust the total number of connections the Repository itself can receive?
That should be the sum of all of the connection pools from all of your nodes, plus 110 extra for the central node. By default, here is where you can find that config file: Repository, PostgreSQL; and we'll have to open postgresql.conf. Towards the end of the file…
Just going all the way to the bottom.
Here we have my Max Connections is 300.
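As a hypothetical illustration of the sizing rule (three nodes, each with a Repository pool of 120, plus the 110 extra for the central node), the setting in postgresql.conf would look like:

```ini
# postgresql.conf -- illustrative values, not defaults
# 3 nodes x 120 connections each + 110 extra for the central node = 470
max_connections = 470
```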
Okay. One other setting you mentioned was the Page File as something to be considered. How would we make changes or adjust that setting?
Right. So, this is a Windows level setting that's found in Advanced System Settings; Advanced tab; Performance; and then again Advanced; and here we have Virtual Memory.
Okay.
We have to hit Change. We'll have to leave it at System Managed or understand exactly which values we are choosing and why. If you're not sure, the default should always be System Managed.
Now, I want to know what resources are available for Qlik Sense admins; specifically, what is the Admin Playbook?
It's a great starting place for understanding what duties and responsibilities one should be thinking about when administering a Qlik Sense site.
So, these are a bunch of tools built by Qlik to help analyze your deployment in different ways. I see weekly, monthly, quarterly, yearly, and a lot of different things are available there.
Yeah. So, we can take a look at Task Analysis, for example. The first time you run it, it's going to take about 20 minutes; thereafter about 10. The benefits: it shows you really in depth how to get to the data and then how to tweak the system to work better based on what you have.
Yeah, that's great.
Right? So, not only do we put the tools in your hands, but we also show you how to build these tools. See here, we have instructions on how to create these objects from scratch. An absolute must-read for every system admin out there.
Mario, we've talked about optimizing the Qlik Sense Repository Service, but not about Postgres. Do larger Enterprise-level deployments affect its performance?
Sure. The thing about Postgres is, again: by default we have to configure it for compatibility, not performance. So, it's another component that has to be targeted for optimization.
The detail there that anything over 1 Gb from Postgres might get paged - that sounds like it could certainly impact performance.
Right, because the buffer setting that we have by default is set to 1 Gb, and that means only 1 Gb of physical memory will be allocated to Postgres work. Now, we're talking about large environments: 500 to maybe 5,000 apps, thousands of users, with about 1,000 of them concurrent at peak per hour.
So, can we increase that Shared Buffer setting?
Absolutely. And in fact, I want to direct you to a really good article on performance optimization for PostgreSQL. When we talk about fine-tuning, this article is where I'd like to start. It covers certain important factors like Shared Buffers. This is what we define as 1 Gb by default. Their recommendation is to start with 1/4 of the physical memory in your system, and 1 Gb is definitely not one quarter of the memory on most machines out there. So, it needs tweaking.
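For example (machine size is hypothetical), on a server with 64 Gb of RAM the quarter-of-memory starting point from that article would be:

```ini
# postgresql.conf -- start shared_buffers at ~1/4 of physical RAM
# (64 Gb machine in this example; the shipped default is 1 Gb)
shared_buffers = 16GB
```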
And again these are settings to be changed on the machine that's hosting the Repository Database, right?
That's correct. That's correct.
Now, is there an app that you're aware of that would be good to kind of look at all these logs and analyze what's going on with the performance?
Absolutely. This is an application that was developed to better understand all of the transactions happening in a particular environment. It reads the log files collected with the Log Collector either via the tool or the QMC itself.
Okay.
It's not built for active monitoring, but rather to enhance troubleshooting.
Sure. So, basically it's good for looking at a short period of time to help troubleshooting?
Right. The Repository itself communicates over APIs between all the nodes and keeps track of all of the activities in the system; and these translate to API calls. If we want to focus on Repository API calls, we can start by looking at transactions.
Okay.
So, this will give us detail about cost. For example, per REST call or API call, we can see which endpoints take the most time, and duration per user; and this gives you an opportunity to start at a very high level and slowly drill in, both in message types and timeframe. Another sheet is Threads, Endpoints and Users; here you have performance information about how many worker threads the Repository Service is able to start and what the Repository CPU consumption is, so you can easily identify an outlier. For example, here, just by this count, we can see that the preview privileges call for objects is called…
Yeah, a lot.
Over half a million times, right? And it represents 73% of the CPU compute cost.
Wow, nice insights.
And then if we look here at the bottom, we can start evaluating time-based patterns and select specific time frames and go into greater detail.
So, I’m assuming this can also show resource consumption as well?
Right. CPU, memory in gigabytes, and memory in percent. One neat trick is to go to the QMC, look at how you've defined your Working Set Limits, and then pre-define reference lines in this chart, so that it's easier to visualize when those thresholds are close to being reached or breached. You do that via the chart's add-ons, reference lines, and you can define them like this.
That's just to sort of set that to match what's in the QMC?
Exactly.
Makes a powerful visualization. So, you can really map it.
Absolutely. And you can always drill down into specific points in time. We can go and check the Log Details Engine Focus sheet; this will allow us to browse over time, select things like errors and warnings alone, and then we will have all of the messages that are coming from the log files and what their sources are.
Yeah. That's great to have it all kind of collected here in one app, that's great.
Indeed.
To summarize: we've talked about how, to understand system performance, a baseline needs to be established. That involves setting up some monitoring. There are lots of options and tools available to do that; and it's really about understanding how the system performs, so that measurement and comparisons are possible if things don't perform as expected.
And to begin to optimize as well.
Okay, great. Well now, it's time for Q&A. Please submit your questions through the Q&A panel on the left side of your On24 console. Mario, which question would you like to address first?
We have some great questions already. So, let's see - first one is: how can we evaluate our existing Qlik Sense applications?
This is not something that I've covered today, but it's a great question. We have an application on Community called App Metadata Analyzer. You can import it into your system and use it to understand the memory footprint of applications and of the objects within those applications, and how they scale inside your system. It will very quickly illustrate if you are shipping applications with extremely large data files, for example, that are almost never used. You can use that as a baseline both for optimizing local applications and in your efforts to migrate to SaaS. And if you feel like you don't want to bother with all of this performance monitoring and optimization, you can always choose to use our services and we'll take care of that for you.
Okay, next question.
So, the next question: worker scheduler errors and engine performance. How do we fix them?
I think I would definitely point you back to this Log Analysis application. Load the time frame where you think something bad happened, and see what kind of insights you can get by playing with the data, by exploring the data. Then narrow that search down if you find a specific pattern that suggests the product is misbehaving, and talk to Qlik Support. We'll evaluate it with you and determine whether it's a defect or just a quirk of how your system is set up. But that Sense Log Analysis app is a great place to start. And going back to the sheet that I showed: Repository and Engine metrics are all collected there, and these come from the performance logs that Qlik Sense already produces. You don't need to load any additional performance counters to get those details.
Okay.
All right. So, there is a question here about Postgres 9.6 and the fact that it's soon coming to end of life, and I think this is a great moment to talk about it. Qlik Sense client-managed (or Qlik Sense Enterprise on Windows) supports Postgres 12.5 for new installations since the May release. If you have an existing installation, 9.6 will continue to be used; but there is an article on Community on how to upgrade that in place to 12.5 as a standalone component. So, you don't have to continue using 9.6 if your IT policy is complaining about the fact that it's soon reaching end of life. As we say, we are aware of this fact; and in fact, we are shipping a new version as of the May 2021 release.
Oh, great.
So, here's an interesting question. If we have Qlik Sense in Azure on a virtual machine, why is the performance so sluggish? How do you fine-tune it? I guess first we need to understand what you mean by sluggish. But the first thing that I want to point to is different instance types. Virtual machines from cloud providers are optimized for different workloads, and the same is true for AWS, Azure and Google Cloud Platform. You will have virtual machines that are optimized for storage, ones that are optimized for compute tasks or application analytics, and some that are optimized for memory. Make sure that you've chosen the right instance type and the right level of provisioned IOPS for this application. If you feel that your performance is sluggish, start increasing those resources: go one tier up and re-evaluate until you find an instance type that works for you. If you wish to have these results beforehand, consider using the Scalability Tools together with some of your applications against different instance types in Azure to determine which ones work best.
Just to kind of follow up on that question, if we're looking at that multi-node example from Qlik help, what nodes would you consider would require more resources?
Worker nodes in general. And those would be front and back-end.
So, a worker node is something with an engine, right?
Exactly. Something with an engine. It can either be front-facing, together with a proxy, to serve content, or back-end, together with a scheduler service, to perform reload tasks. These will consume all the resources available on a given machine.
Okay.
And this is how the Qlik Sense engine is developed to work. And these resources are almost never released unless there is a reason for it, because us keeping those results cached is what makes the product fast.
Okay.
Oh, here's a great one about avoiding working set breaches on engine nodes. The question says: do you have any tips for avoiding the max memory threshold of the QIX engine? We didn't really cover this aspect, but as you know, the engine allows you to configure both a lower and a higher memory limit. To understand how these work, I want to point you back to that QIX engine white paper; the system will perform certain actions when these thresholds are reached. The first prompt that I have for you in this situation is: understand whether these limits are far away from your physical memory limit. By default, Qlik Sense (I believe) uses 70 and 90 as the low and high working sets on a machine. With a lot of RAM, let's say 256 Gb to half a terabyte, if you leave that low working set limit at 70 percent, that means that by default 30 percent of your physical RAM will not be used by Qlik Sense. So, always keep in mind that these percentages are based on the physical amount of RAM available on the machine; and as soon as you deploy large machines (large meaning 128 Gb and up), you have to redefine these parameters. Raise them so that you utilize almost all of the resources available on the machine. You should be able to visualize that very easily in the Log Analysis app by going to the Engine Load sheet and inserting reference lines based on where your current working sets are. Of course, the only real way to avoid a working set limit issue is to make sure that you have enough resources and that the system is configured to utilize them. Allow the product to use as much RAM as it can without interfering with Windows operations, which is why you should never set these to something like 98 or 99: Windows needs RAM to operate by itself, and if we let Qlik Sense take all of it, things will break.
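To make the arithmetic concrete, here is a small sketch (a hypothetical helper, not Qlik tooling) of how much physical RAM stays unused when the low working set limit is left at its default on a large machine:

```python
def unused_ram_gb(total_ram_gb: float, low_working_set_pct: float) -> float:
    """RAM (in Gb) below the low working set limit that the engine
    will not use, since the limits are percentages of physical RAM."""
    return total_ram_gb * (1 - low_working_set_pct / 100)

# Default low limit of 70% on a 512 Gb machine leaves ~153.6 Gb idle.
print(round(unused_ram_gb(512, 70), 1))  # -> 153.6
```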
If you've done that and you're still having performance issues, that means you need more resources.
Yeah. It makes sense.
Oh, so here is another interesting question about understanding what certain Qlik Repository Service (QRS) log messages say. The question says: we try to meet the recommendation that network latency to the persistence layer should be less than 4 ms, but we consistently see in our logs that QRS security management retrieved privileges in so many milliseconds. Could this be a Repository Service issue, or where would you suggest we investigate first? This is an info-level message that you are reporting, and it's simply telling you how long it took the Repository Service to compute the result for that request. It doesn't mean that this is how long it took to talk to the Database and back, or how long it took for the request to travel from client to server; only how long it took the Repository Service to look up the metadata and the security rules and then return a result based on them. And I would say this coming back in 384 milliseconds is rather quick. It depends on how you've defined these security rules. If these security rules are super simple and you are still getting slow responses, we would definitely have to look at resource consumption. But if you want to know how these calls affect resource consumption on the Repository and Postgres side, go back to that Log Analysis app: raise your Repository performance logs in the QMC to Debug level so that you get all of the performance information about how long each call took to execute, and try to establish some patterns. See if you have calls that take longer to execute than others, and where those are coming from: any specific apps, any specific users? All of these answers come from drilling down into the data via the app that I demoed.
Okay Mario, we have time for one last question.
Right. And I think this is an excellent one to end on. We've talked a whole bunch here about Qlik Sense, but all of this also applies to QlikView environments. We are always looking at taking a step back and considering all of the resources at play in the ecosystem, not just the product itself. The question asks: is QlikView Server similar to Qlik Sense in how it handles resources? The answer is yes. The engine is exactly the same in both products. If you read that white paper, you will understand how it works in both QlikView and Qlik Sense; and the things that you should do to prepare for performance and optimization are exactly the same in both products. Excellent question.
Great. Well, thank you very much Mario!
Oh, it's been my pleasure Troy. That was it for me today. Thank you all for participating. Thank you all for showing up. Thank you Troy for helping me through this very very complicated topic. It's been a blast as always. And to our customers and partners, looking forward to seeing your questions and deeper dives into logs and performance on community.
Okay, great! Thank you everyone! We hope you enjoyed this session. Thank you to Mario for presenting. We appreciate getting experts like Mario to share with us. Here's our legal disclaimer and thank you once again. Have a great rest of your day.
The below chart is the release date and end of support (EOS) date for all Qlik Alerting product releases. For more information, please see Qlik's On-Premise Products Release Management Policy.
Release | Release Date | EOS Date |
October 2021 | September 30, 2021 | September 20, 2023 |
May 2021 | May 11, 2021 | May 11, 2023 |
February 2021 | February 9, 2021 | February 9, 2023 |
November 2020 | November 10, 2020 | November 10, 2022 |
September 2020 | September 9, 2020 | September 9, 2022 |
Information about app and storage size for Qlik Cloud for Qlik Sense Enterprise SaaS and Qlik Sense Business can be found in Qlik Sense capacity.
Note: An app on disk is typically 4-6 times bigger in memory, although this is a rule of thumb and exceptions can and do occur, and an app within the "on-disk" size limit may balloon past the "in memory" limit.
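As an illustration of the rule of thumb only (real apps vary, and exceptions occur):

```python
def estimated_memory_range_mb(on_disk_mb: int) -> tuple:
    """Rough in-memory footprint: typically 4-6x the on-disk app size."""
    return (on_disk_mb * 4, on_disk_mb * 6)

# A 200 MB app on disk may need roughly 800-1200 MB in memory.
print(estimated_memory_range_mb(200))  # -> (800, 1200)
```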
If you are looking to compare other features between Qlik Sense Business and Qlik Sense Enterprise SaaS, see Compare product features and also Help> SaaS editions of Qlik Sense>Specifications.
Environment:
Reference: Pricing
During a Replicate Full Load, incoming changes occur while the Full load is in progress. What happens to those changes?
Qlik Replicate starts the CDC thread before starting the Full Load, and all the changes show as cached events while the Full Load is running. For example, if 2 billion records are applied in the Full Load state, Replicate checks again at the end of the Full Load for committed changes. If 100k changes were also committed before it finished, they will be applied as part of the Full Load; otherwise, the 100k changes will go to CDC.
Another example: if a Full Load takes 12 hours, then updates that occur during that 12-hour window are also applied. Note: these two examples assume you have CDC enabled in the task.
If CDC is turned off during a Full Load task, then Replicate will not capture the cached events.
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
The refresh token has expired due to inactivity
or
Information provided on this defect is given as is at the time of documenting. For up to date information, please review the most recent Release Notes, or contact support with the ID QCWPI-2306 for reference.
This article gives an overview of the Qlik Cloud Catalog API blocks in the Qlik Cloud Services connector in Qlik Application Automation. Please check this article for a basic introduction to the Qlik Cloud Services connector.
Using these blocks, you can explore the Catalog capabilities in the Qlik Cloud connector.
To select the Catalog API blocks, click on the Qlik Cloud Services connector from the left side menu, then click on the Catalog filter.
As you can see, the connector consists of CRUD support for the following Catalog entities:
There is also metadata support for these entities, which can be accessed using the Patch data store, Patch data asset, and Patch data set blocks.
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
Getting data out of Qlik Sense Enterprise SaaS, and distributing it to different users in formatted Excel, has been a manual task until now.
Thanks to the release of Qlik Application Automation, it is now possible to automate this workflow by leveraging the connectors for Office 365 - specifically Microsoft SharePoint and Microsoft Excel.
Here I share two example QAA workspaces that you can use and modify to suit your requirements.
Video:
Considerations
Example 1 - Scheduled Reports
Example 2 - On-Demand Reports
Note - These instructions assume you have already created connections as required in Example 1.
This On-Demand Report Automation can be used across multiple apps and tables. Simply copy the extension object between apps & sheets, and update the Object ID (Measure 3) for each instance.
Environment
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
For detailed release notes, please visit this post: QlikView - May 2022 IR
Key highlights from QlikView's May 2022 Initial Release:
All Release Notes can be found on the Release Notes page here in the Qlik Community.
Be sure to subscribe to the Qlik Support Updates Blog by clicking the green Subscribe button to stay up-to-date with the latest releases. Please give this post a like if you found it helpful and let us know if you have any questions using the comments below.
Thank you for choosing Qlik!
Kind Regards,
Qlik Support
NPrinting has connections to both QlikView and Qlik Sense.
All connections to Qlik Sense are working fine.
Any publish tasks or metadata reloads with QlikView are failing.
Windows Event Logs revealed that qv.exe was crashing every single time that NPrinting tried running it, but gave no indication as to why. It ran fine when launched manually.
Anti-virus software running on the server was stopping qv.exe from running. Verify that these folders are excluded from anti-virus scanning:
C:\ProgramData\QlikTech
C:\Program Files (x86)\QlikView
C:\Program Files\QlikView
After excluding these, QlikView-related tasks worked correctly.
Qlik NPrinting 17+
QlikView 12+
The below chart is the release date and end of support (EOS) date for all Qlik Replicate product releases. For more information, please see Qlik's On-Premise Products Release Management Policy.
Version | Release Date | End of Support Date |
Qlik Replicate November 2021 | November 8, 2021 | November 8, 2023 |
Qlik Replicate May 2021 | May 11, 2021 | May 11, 2023 |
Qlik Replicate November 2020 | November 10, 2020 | November 30, 2022 |
Qlik Replicate April 2020 (6.6) | April 16, 2020 | April 30, 2022 |
Qlik Replicate 6.5 | November 14, 2019 | November 30, 2021 |
Qlik Replicate 6.4 | April 1, 2019 | April 14, 2021 |
Qlik Replicate 5.5 | August 1, 2017 | November 30, 2020 |
With the Salesforce Jobs API, you can insert, update, upsert, or delete large data sets. Prepare a comma-separated values (CSV) file representation of the data you want to upload, create a job, upload the job data, and let Qlik Application Automation handle these steps with the Salesforce API.
Here are the steps to use the Upload Jobs APIs:
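Under the hood, these steps map to Salesforce's Bulk API 2.0 ingest endpoints, which the automation calls for you. The sketch below only builds the three requests (the instance URL, API version, and object name are hypothetical placeholders) without sending anything:

```python
API_VERSION = "v57.0"  # hypothetical API version

def build_ingest_requests(instance_url, object_name, csv_data, job_id="{jobId}"):
    """Describe the Bulk API 2.0 sequence: create job, upload CSV, close job."""
    base = f"{instance_url}/services/data/{API_VERSION}/jobs/ingest"
    return [
        # 1. Create the ingest job
        ("POST", base, {"object": object_name, "operation": "insert",
                        "contentType": "CSV"}),
        # 2. Upload the CSV data to the job
        ("PUT", f"{base}/{job_id}/batches", csv_data),
        # 3. Mark the upload complete so Salesforce starts processing
        ("PATCH", f"{base}/{job_id}", {"state": "UploadComplete"}),
    ]

for method, url, _ in build_ingest_requests(
        "https://example.my.salesforce.com", "Account", "Name\nAcme"):
    print(method, url)
```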
How do you filter records by the unique identifier "_id" in a MongoDB source?
In MongoDB, each document has a unique identifier field named "_id", which is generated automatically by default. Replicate replicates this field to the target side, and it can be used to filter records.
The "_id" field is composed of 12 bytes. The sample below shows how to use the first 4 bytes to filter records.
position | detailed explanation |
626509d4582db56b600232fb | (example "_id" value) |
first 4 bytes: 626509d4 | seconds since the Unix epoch |
next 5 bytes: 582db56b60 | random value |
last 3 bytes: 0232fb | counter, starting with a random value |
The first 4 bytes are the hexadecimal value of the Unix epoch seconds; this can be converted to a meaningful time using the page below:
Hex | Dec | https://www.epochconverter.com/ |
626509d4 | 1650788820 | Sunday, April 24, 2022 4:27:00 PM GMT+08:00 |
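The same conversion can be scripted; a small sketch (output shown in UTC rather than GMT+08:00):

```python
from datetime import datetime, timezone

def objectid_time(object_id: str) -> datetime:
    """Decode the first 4 bytes (8 hex characters) of a MongoDB
    ObjectId as seconds since the Unix epoch."""
    seconds = int(object_id[:8], 16)
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

print(objectid_time("626509d4582db56b600232fb"))
# -> 2022-04-24 08:27:00+00:00 (i.e. 16:27 GMT+08:00)
```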
This article gives an overview of the available blocks in the Qlik Catalog connector in Qlik Application Automation. The Qlik Catalog connector is built to enable Qlik SaaS users to utilize entity, fields, and lineage capabilities in the automation.
The Qlik Catalog connector uses a Cross-Site Request Forgery (CSRF) token and a session ID to authenticate to your on-premises Catalog instance. Ensure the Base URL is correctly specified when connecting to Qlik Catalog.
Working with Qlik Catalog blocks
All the blocks for Qlik Catalog use the REST APIs. The following objects have easy-to-use blocks:
The primary use case of this connector is to leverage the Entity, Fields, and Lineage blocks. Users can list newly updated entities and fields using the "List changed entities incrementally" and "List changed fields" blocks, respectively. This helps to retrieve the updated entities and fields on a regular basis. Lineage blocks, on the other hand, are used to retrieve lineage graphs and nodes based on specified search criteria.
Furthermore, it is possible to work with other objects like Import, One-click-publish, Profile statistics, and so forth using the Raw API blocks.
Below is a basic example of how to use the Raw API blocks. This example demonstrates how to retrieve available field metrics using a Raw API list request. As per the API documentation, the action path of this API call is '/profile/v1/available'. Specify this in the input parameter of the block and run the automation for results.
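For illustration, the request that the Raw API list block issues can be composed like this (the base URL, cookie name, and CSRF header name here are assumptions; the block itself takes care of authentication):

```python
BASE_URL = "https://catalog.example.com"  # hypothetical Qlik Catalog base URL

def build_raw_list_request(action_path: str, session_id: str, csrf_token: str):
    """Compose the GET request for a Raw API 'list' call."""
    url = f"{BASE_URL}/{action_path.lstrip('/')}"
    headers = {
        "Cookie": f"JSESSIONID={session_id}",  # session cookie name is an assumption
        "X-CSRF-TOKEN": csrf_token,            # CSRF header name is an assumption
    }
    return url, headers

url, headers = build_raw_list_request("/profile/v1/available", "abc123", "token")
print(url)  # -> https://catalog.example.com/profile/v1/available
```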
This document explains the steps to configure the Qlik Sense Monitoring Applications (License Monitor, Operations Monitor, Etc) to use Certificate Authentication instead of default Windows Authentication.
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
1. Export Qlik Certificates via the QMC
2. Navigate to the path listed to obtain the exported certificates:
3. Copy the folder that was created and paste it into the Engine folder of ALL nodes that will be used to reload the Monitoring Applications
4. Create a security rule that allows a user to access all Data Connections within the HUB
5. Modify all REST Data Connections that are used by the Monitoring Apps (e.g. monitor_apps_REST_app, monitor_apps_REST_appobject, monitor_apps_REST_xxxxxx, etc.)
6. Another way to update the rest of the data connections is to modify them via the QMC
7. Once all of the Data Connections have been modified, you can attempt a Reload via the QMC of one of the Monitoring Applications (e.g. License Monitor)
Attached below is a zip file that includes a PowerShell script that can perform all of the steps above. You can download and extract the script to your Central node.
(Nothing is deleted by running this script, only renamed. If you would like to revert to the state prior to running the script, just swap the Data Connections back in the QMC; they have -old appended to them.)
1. Run the script as your Qlik Sense Service Account on the Central Node
2. Old Data connections used by the Monitoring Apps will be renamed: Example - monitor_apps_REST_app --> monitor_apps_REST_app-old
3. The Operations Monitor and License Monitor will be imported to recreate the data connections named Operations Monitor-New & License Monitor-New
4. The Data Connections will be modified to use certificate authorization instead of Windows Authentication (This will create a password protected Certificate at [ProgramData]\Qlik\Sense\Engine\Certificates using the FQDN of the Central Node)
5. Additional considerations: In multi-node environments where the central node does not perform reloads, the certificate generated will have to be moved to the corresponding folders on the other nodes: By Default, [ProgramData]\Qlik\Sense\Engine\Certificates\Central Node Name (keep the folder name the same)
While connecting to a Microsoft OneDrive account is made easy by the OAuth2 connection flow, this article demonstrates how to create a simple file in your drive and write some information to it.
First, we'll go over the connection. After dragging and dropping a block from the Cloud storage menu on the left side of the UI, you will need to select a connection for the block to connect to OneDrive/SharePoint:
After selecting the desired connection and going through the simple steps of logging in to your account on the platform, the rest should be straightforward. Qlik Application Automation never handles your username/password for these accounts, since you provide your credentials directly to the platform in question.
Now that we are set up, it's worth reviewing the file overwrite flow, in case you want to create a new file. Before actually creating a new file at the desired path, we need to check whether it exists with the following blocks:
After the file existence check and file creation, we are free to write whatever inputs we desire to that file. Once finished, we need to save and close the file:
As a nice exercise, we can verify that the contents of the file are exactly as expected by introducing the following blocks:
If you are interested in the input parameters you can find them in the attached JSON file of the quick example provided.
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
This article will cover how to connect a data source to your S/FTP server. While both connection types offer largely the same input parameters, you can observe a few differences:
While FTP connections just ask for a username and password to connect to the indicated host, SFTP connections give you finer control of the input parameters, depending on which type of connection the server administrator has set up for you.
One of the SFTP options is to declare the port you want to connect to. The default is port 22, in which case you do not need to fill in the parameter, but some connections might need other values.
For both connection types, when filling in the host parameter, please make sure to remove any extra characters in front of the name, such as the "https://" you get from copy-pasting a link, as well as any trailing "/" characters.
As for the private_key parameter, if your connection requires one, make sure to copy the whole string, including the "-----BEGIN OPENSSH PRIVATE KEY-----" and "-----END OPENSSH PRIVATE KEY-----" markers found in the key file.
WARNING: at the time of writing, the generated-key passphrase functionality is not yet supported. As a workaround, you can ask your server administrator to set up the SFTP server using generated keys without passphrases.
Also, one quirk you might encounter when using the native cloud storage blocks in conjunction with FTP/SFTP servers: when manipulating files, make sure the user you connected with has the required rights (read/write) to manipulate the file accordingly.
One other important issue you may encounter when connecting to such a server is the need to whitelist certain IP addresses so that Qlik Application Automation can access the server.
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
If you are here, then you need help using the file transfer blocks provided natively in Qlik Application Automation. This article will cover the usage of the blocks as well as some design flows to apply when using them. At the end of the article, you will also find a list of links to more in-depth platform connections and atypical usages.
I will cover the basic blocks you can find under the Cloud storage option on the left-hand side, in order:
At the time of writing, these are the options that can be found:
Warning: this block cannot be used if a file with the same filename exists at that location. In that case, you will need to use the overwrite flow rules at the end of this article.
Now that we have gone over the functionality of the blocks, I will present an issue most of you will encounter. You set up a Qlik Application Automation, and at the end of the automation you want to save your result in a target file that may or may not already exist at the suggested location. Since you cannot create a new file if a previous one exists, you will need to apply the following to your automation:
First check whether the file exists; based on the response from that block, either simply create a new file, or delete the old file and create a new one to write data to.
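The overwrite flow above can be sketched in code, simulated here against an in-memory store. The comments name the automation blocks each step stands in for; the function and variable names are illustrative, not the real block API:

```python
# Sketch of the overwrite flow, using a dict as a stand-in for the remote
# cloud storage (names are illustrative, not Qlik Application Automation API).
def overwrite_file(storage: dict, path: str, content: str) -> None:
    # "Check file exists" block: branch on whether the target is present.
    if path in storage:
        # "Delete file" block: required, since a file cannot be created
        # on top of an existing one.
        del storage[path]
    # "Create file" block, followed by write and save/close.
    storage[path] = content


drive = {"report.csv": "old data"}
overwrite_file(drive, "report.csv", "new data")   # existing file: delete + recreate
overwrite_file(drive, "fresh.csv", "hello")       # new file: create directly
```

The key design point is that the existence check always comes first, so the same flow handles both the first run (file absent) and every later run (file present).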
And last but not least, here is a list of supported platforms for the native blocks, with links on how to connect to them and details on the quirks each one has:
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
If you are using Qlik Sense SaaS, you might notice that there is no Qlik IBM DB2 connector (within the Qlik ODBC connector package).
This is working as designed, because this connector is no longer supported. Please check the following link for further information: Create an IBM DB2 connection
Alternatively, you can still upload data from IBM DB2 via Qlik DataTransfer (https://www.qlik.com/us/products/qlik-data-transfer) if you install the IBM DB2 ODBC driver on premises.
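With the on-premise ODBC driver installed, the connection is typically configured through an ODBC connection string. The sketch below builds one; the driver name `IBM DB2 ODBC DRIVER` and port 50000 are common defaults, but verify both against your installed driver, and treat the `pyodbc` usage in the comment as an assumption about your tooling:

```python
# Sketch: assembling an IBM DB2 ODBC connection string for an on-premise
# driver (driver name and default port are typical values -- verify locally).
def db2_conn_str(host: str, database: str, uid: str, pwd: str,
                 port: int = 50000) -> str:
    return (
        "DRIVER={IBM DB2 ODBC DRIVER};"
        f"DATABASE={database};HOSTNAME={host};PORT={port};"
        f"PROTOCOL=TCPIP;UID={uid};PWD={pwd};"
    )


# Example usage with a DB2 client library (not executed here):
#   pyodbc.connect(db2_conn_str("db2host", "SAMPLE", "user", "secret"))
```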
Table of Contents
The following release notes cover the versions of [Product Name] released in [version]. Resolved defects and limitations for Qlik Sense Enterprise on Cloud Services are also listed.
Please refer to the What’s new sections of the online help for information about the new and updated features of the Qlik Sense Enterprise on Windows May 2022 release:
Distribute only changes of an app to cloud
This feature allows distributing only the changes of an app: its metadata (such as name and app objects) and/or actual data. By comparing the data-last-loaded and last-distribution timestamps, the Application Distribution service (ADS) decides whether the app should be distributed with or without data. When the data has not changed, this significantly reduces network traffic between Qlik Sense and Qlik Cloud Services.
This feature is enabled by default and controlled via the DistributeWithoutData feature flag set in C:\Program Files\Qlik\Sense\AppDistributionService\appsettings.json. To disable, set the flag to false and then restart the Qlik Sense Service Dispatcher service. Make sure it is done on all nodes where ADS is configured to distribute apps to the cloud:
(...)
  "FeatureFlags": {
    "ChunkedUploads": true,
    "DistributeWithoutData": true,
    "ExportScope": "published"
  }
}
See also the complementary feature "Distribution task for distributing apps from Qlik Sense to Qlik Cloud Services".
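Flipping such a flag can be scripted rather than hand-edited. The sketch below assumes, as in the fragment above, that appsettings.json holds a top-level "FeatureFlags" object; the file path and the helper itself are illustrative, and the Qlik Sense Service Dispatcher must still be restarted afterwards:

```python
# Sketch: toggle a feature flag in an appsettings.json-style file
# (assumes a top-level "FeatureFlags" object, as shown above).
import json


def set_feature_flag(path: str, flag: str, value: bool) -> None:
    with open(path, encoding="utf-8") as f:
        cfg = json.load(f)
    # Create the FeatureFlags section if missing, then set the flag.
    cfg.setdefault("FeatureFlags", {})[flag] = value
    with open(path, "w", encoding="utf-8") as f:
        json.dump(cfg, f, indent=2)
```

For example, `set_feature_flag(r"C:\Program Files\Qlik\Sense\AppDistributionService\appsettings.json", "DistributeWithoutData", False)` would disable the feature, after which the service dispatcher needs a restart on every ADS node.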
Favorites in the Hub
Hub users can now mark their private apps or any published app as a favorite. Marked apps appear in a new app section in the hub: Favorites. The Favorites section is visible when at least one app is marked as a favorite. An application is marked or unmarked as a favorite via an icon shown when hovering over the app thumbnail, or in the Actions column, depending on the view.
A new endpoint is introduced with the following methods, affecting the user making the call, identified via the X-Qlik-User header:
GET /qrs/user/favorites
PUT /qrs/user/favorites/<AppID>
DELETE /qrs/user/favorites/<AppID>
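A call to the PUT method above can be sketched as follows. The X-Qlik-User header format and QRS port 4242 are standard; the host, directory, user, and app ID values are placeholders:

```python
# Sketch: marking an app as a favorite via the new QRS endpoint.
# Host, user directory, user ID, and app ID below are placeholders.
import urllib.request


def qlik_user_header(directory: str, user_id: str) -> str:
    """QRS identifies the acting user through this header format."""
    return f"UserDirectory={directory}; UserId={user_id}"


def favorite_request(host: str, app_id: str, xrfkey: str,
                     method: str = "PUT") -> urllib.request.Request:
    """Build the PUT/DELETE request for /qrs/user/favorites/<AppID>."""
    url = f"https://{host}:4242/qrs/user/favorites/{app_id}?xrfkey={xrfkey}"
    return urllib.request.Request(url, method=method, headers={
        "X-Qlik-Xrfkey": xrfkey,
        "X-Qlik-User": qlik_user_header("DOMAIN", "jdoe"),
    })
```

Swapping `method` to `"DELETE"` produces the unmark call, and a GET against `/qrs/user/favorites` lists the calling user's favorites.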
This feature is enabled by default and controlled via the HUB_FAVORITES feature flag set in C:\Program Files\Qlik\Sense\CapabilityService\capabilities.json. To disable, set the flag to false and then restart the Qlik Sense Service Dispatcher service:
"contentHash":"2ae4a99c9f17ab76e1eeb27bc4211874","originalClassName":"FeatureToggle","flag":"HUB_FAVORITES","enabled":false}
Update of extensions in Qlik Management Console
Qlik Sense now offers the ability to update extensions without having to delete the existing one. In the Qlik Management Console, upon an attempt to upload an extension with a name that already exists in the system, the user is prompted to confirm replacing it with the new one. Once confirmed, the existing files are overwritten with the new ones, while the GUID of the extension is kept intact. As a result, associated items, for example custom security rules, are not affected. At the API level this is accomplished by adding the replace=true parameter to the request, as in the following example:
/qrs/extension/upload?privileges=true&pwd=&replace=true
This feature is enabled by default and controlled via the QmcReplaceExtensions feature flag set in C:\Program Files\Qlik\Sense\CapabilityService\capabilities.json. To disable, set the flag to false and then restart the Qlik Sense Service Dispatcher service:
{"contentHash":"2ae4a99c9f17ab76e1eeb27bc4211874","originalClassName":"FeatureToggle","flag":"QmcReplaceExtensions","enabled":false}
Empty streams not displayed in the hub
In the hub, streams that are empty or that, based on the existing security rules, evaluate to show no apps for the current user are no longer displayed. Upon publishing an app, moving an app from another stream, or deleting an app, the list of streams is dynamically updated, and streams appear or are hidden accordingly. Changes outside of the hub, for example in the Qlik Management Console, do not trigger updates to the list of streams in the hub.
This feature is enabled by default and controlled via the HUB_HIDE_EMPTY_STREAMS feature flag set in C:\Program Files\Qlik\Sense\CapabilityService\capabilities.json. To disable, set the flag to false and then restart the Qlik Sense Service Dispatcher service:
{"contentHash":"2ae4a99c9f17ab76e1eeb27bc4211874","originalClassName":"FeatureToggle","flag":"HUB_HIDE_EMPTY_STREAMS","enabled":false}
Distribution task for distributing apps from Qlik Sense to Qlik Cloud Services
A new type of task is now available in the Qlik Management Console: Distribution task. The distribution task allows scheduling the distributions of apps in the same way as reloading apps via Reload tasks.
This feature is enabled by default and controlled via the QMC_DISTRIBUTION_TASK feature flag set in C:\Program Files\Qlik\Sense\CapabilityService\capabilities.json. To disable, set the flag to false and then restart the Qlik Sense Service Dispatcher service:
"contentHash":"2ae4a99c9f17ab76e1eeb27bc4211874","originalClassName":"FeatureToggle","flag":"QMC_DISTRIBUTION_TASK","enabled":false}
In addition, this feature comes with the ability to disable the legacy behavior of automatically triggering distribution of apps when metadata or actual data changes, based on the existing distribution policies. This is done by setting the following flag in C:\Program Files\Qlik\Sense\Repository\Repository.exe.config to true, and then restarting the Qlik Sense Repository service:
<add key="DisableAppChangeDistribution" value="true" />
See also the complementary feature "Distribute only changes of an app to cloud".
Qlik Sense Repository service scans for script tags in the XML files uploaded to AppContent or ContentLibrary library types
For customers who consciously allow-list XML files and allow their upload as data sources or part of the content libraries, an additional layer of security has been introduced. The Repository service scans through the uploaded XML file for potential Cross-Site Scripting (XSS) vulnerabilities and if any are found, blocks further upload. System administrators in the Qlik Management Console can force the upload despite the presented warning. App users will no longer be able to complete the upload and will be instructed to contact their system administrator to perform it for them.
This feature is enabled by default and controlled via the ScanXmlFileForScripts feature flag set in C:\Program Files\Qlik\Sense\Repository\Repository.exe.config. To disable, set the flag to false and then restart the Qlik Sense Repository service:
<add key="ScanXmlFileForScripts" value="false" />
May 2022 Patch 1
Key | Title | Description |
QB-10174 | Issue with notification setup when several websockets have opened the same app | When several websockets open the same app, event registration of notifications such as publish and unpublish is now set up and torn down appropriately. |
QB-10171 | Reload script is executed successfully but app save fails | Added a retry mechanism for cases where a locked transaction file would cause the Engine to fail when saving the app. Affected areas: - Autosave - API: DoSave - API: DoSaveEx Note that when saving sometimes takes a bit longer to complete, this could be the retry mechanism waiting for the file to get unlocked for writing (10 ms for each retry). |
IM-131 | Add retry of CopyFileCollection when performing DoSave | Improvement for environmental issue when DoSave might fail. The failure could be seen in the Engine System log as '*Could not copy collection* <fileshare path to app> (genericException)' AppSerializer: SaveApp_internal caught extended exception 9010: Unknown error. Added a retry mechanism that can be controlled through the settings.ini file: CopyCollectionRetry=5 The default value is currently set to five retries. This setting can be turned off by setting it to 0. |
May 2022
Key | Title | Description |
QB-5766 | Qlik Cloud: Sheet with dynamic view is "published" when trying to edit the sheet | Fixed an issue with editing a sheet with dynamic views. You would get a message that the sheet was published and that you had to duplicate it to edit. |
QB-5820 | Qlik Sense could stop responding during upgrade after decoupling bundled PostgreSQL database instance | After reconfiguring a default Qlik Sense installation to use a dedicated PostgreSQL database instance, the upgrade could fail if the default instance was not uninstalled. Qlik Sense installer will no longer allow an upgrade unless that condition is met. |
QB-6274 | Qlik Sense: Incorrect sorting when a chart is rendered the first time | A chart will be correctly sorted when it has a sorting expression that contains a set expression with an auto-field the first time it is rendered. |
QB-6534 | Allow user to use letter 'A' in format expressions | The letter 'A' used to be reserved syntax that forced an SI abbreviation. That behavior is now disabled, so 'A' can be used in format expressions. |
QB-7663 | Qlik Sense app stops responding when importing thousands of columns | Fixed an issue with imports of more than 16,000 columns in a table. On a Windows server, the app would stop responding, and on Linux the whole engine process would terminate. This could happen during reload, script editing dialogs, or when opening the app. |
QB-7766 | ApplyPatches() changes are not persisted on disconnect from app | Qlik Sense: ApplyPatches() changes are now persisted on disconnect from an app after save. |
QB-7782 | Host header not validated when Qlik Sense hostname is added in 'Host allow list' in virtual proxy settings | Improved the HTTP Host header validation method for permitted domains as per 'Host allow list' performed by Qlik Virtual Proxy. |
QB-7903 | ODAG indication green light not consistent | In case of having multiple ODAG links in an app there could be times when the constraints check (green light indicator) would use an expression from a different ODAG link. It will now use the correct expression. |
QB-7907 | AMI instance BYOL broken on installation level - qliksenserepository password not updated for some microservices | The error "The system cannot find the file specified" was thrown when trying to configure microservices for communication with postgres. The Configure-Service.ps1 scripts have been updated for every microservice to point to the "...\PostgreSQL\12.5" location. |
QB-8085 | Qlik Sense: The whole group isn't hidden when the parent is suppressed in a pivot table | In a Qlik Sense pivot table, if a dimension value is suppressed for being zero, the whole subgroup is now also suppressed. |
QB-8102 | The contrast ratio in app description in the Hub not compliant with WCAG 2 guidelines | When displaying apps in list view, the description section had a contrast ratio below WCAG 2 guidelines. The subtext color has been changed, increasing the contrast ratio to 5.17:1. |
QB-8144 | Prepare statement failing in Qlik Sense | The Prepare and Execute statements are no longer case sensitive. |
QB-8218 | Smart search doesn't work when used in session apps | Resolved an issue where smart search did not work in session apps or in a mashup. |
QB-8381 | Fix 'Apply changes' message on Container object | The 'Apply changes' message that appears on the property panel when switching between tabs in the Container object has been removed. |
QB-8513 | Qlik Sense: Error when exporting data to Excel due to date format | The engine can have an arbitrary number of decimal digits of whole seconds. When exporting to Excel this is limited to three decimals, the upper limit that Excel can handle, and caused errors. This has been fixed by updating the millisecond format for Excel export. |
QB-8527 | Qlik Sense: Third-party software shows jQuery UI | Components for jQuery UI have been removed from the Third-party software section in the About dialog in the hub because it is no longer shipped with the product. |
QB-8541 | Multi KPI chart style is changed unintentionally | Multi KPI chart is now reverted to the original style. |
QB-8638 | Unauthorised access to QMC sections possible by changing response | By intercepting and reusing the response to a /qrs/SystemRule/security/evaluatetransientresources request, it was possible for an unprivileged user to unlock sections in the Qlik Management Console (QMC), while still being restricted to information within them. The issue has been fixed by hashing the values returned from the evaluatetransientresources endpoint for the QmcSection access check, which in turn prevents unauthorized access to the QMC. |
QB-8689 | Section Access on key field can cause a crash | When a key field is reduced by Section Access, the data traverse cannot correctly handle NULL values if the reduced field had NULL values before reduction, but not after. In rare cases, it could cause the engine to crash. |
QB-8713 | Enable setting for usage of measure names in expression | The setting that enables usage of measure names in expression was previously disabled by default, but it will now be enabled by default. |
QB-8742 | 500 Internal Server Error when accessing hub via URI with special characters | When an unauthenticated user would access hub via URI containing special characters, for example Japanese, the proxy service would throw a 500 error as a result of incorrect string parsing. This issue has been fixed by changing the way the proxy service handles the target URI. |
QB-8826 | Qlik Sense Repository Service allows more than one ServiceCluster | Fixed an issue where the Qlik Sense Repository Service would allow, via API call, to create more than one ServiceCluster entity. This might have caused issues with ServiceCluster settings. Now, the error "400 Bad Request - Only one service cluster per deployment allowed" is thrown. |
QB-8851 | Can't use dashes (-) in the database name using the MySQL connector wizard | The issue has been fixed and dashes can now be used in database names and table names. |
QB-8859 | Logging level issue | Fixed an error in a logging mechanism where a connector produced log strings of log level DEBUG when the actual setting was log level INFO. |
QB-8873 | Forms Authentication with FQDN creates new user | When a user authenticated to Qlik Sense with Forms Authentication and FQDN as user directory, it was recognized as a different user, compared to when using a simple domain name. This has been fixed by using user impersonation, which prevents the creation of a new user. |
QB-8878 | Y-axis of line chart used in container disappears | Fixed an issue in Qlik Sense where the y-axis of line charts sometimes disappeared when used in a container. |
QB-8893 | Issue with nine items in filter pane | A filter pane with exactly nine items now has a scrollbar. This makes the last item readable and reachable. |
QB-8909 | Filter pane is hidden when another chart is full screen | Qlik Sense now applies the correct cascading style sheet to the filter pane when you make another chart full screen. |
QB-8917 | Dimension labels cut off in combo chart | Fixed an issue where the dimension labels were cut off incorrectly for certain chart sizes in combo charts. |
QB-8992 | Use safe ciphers by default | Fixed an issue with unsafe ciphers. The unsafe ciphers have been removed and a list of supported predefined ciphers is used. If you want to use the unsafe ciphers, you need to provide a list of them under the `--cipher-suites` parameter, as described in this knowledge article: https://help.qlik.com/en-US/nprinting/May2021/Content/NPrinting/DeployingQVNprinting/TLS-cipher-suites.htm |
QB-9029 | Fix log messages for QvRestConnector | Fixed logging of REST connector. The following message is no longer logged: "REST The requested service 'Qlik.Connectors.SDK.Common.Encryption.IEncryptionService' has not been registered." |
QB-9041 | Bars, lines, and markers in combo chart not correctly aligned | Fixed an issue where bars, lines, and markers in combo charts were left-aligned instead of center-aligned in certain scenarios. |
QB-9073 | Line marker in combo chart displayed incorrectly | Fixed an issue where line markers in combo charts were sometimes not filled. |
QB-9080 | Make sure table doesn't scroll horizontally when confirming selections | In touch mode, the focus was incorrectly set to a cell at the start of the table. Now the focus is not set at all in touch mode, because focus is not applicable on touch devices. |
QB-9083 | Filter pane selection is automatically selected when expanded | The filter pane will no longer automatically apply selections and clear the search box when double-clicking. |
QB-9137 | GeoAnalytics map chart not working as expected | Fixed an issue with map legend when using drilldown dimensions. Map legend visibility will now reflect if the drilldown level is visible or not. |
QB-9172 | The exportPDF method doesn't render all table columns | To include all columns in exportPDF, the objectSize parameter can be used to specify the size at which the object should be rendered. |
QB-9228 | App is not showing in Insight Advisor chat | For apps with large data models, scraping calls might take a long time to complete. Fixed the issue by making the scraping timeout of the nl-app-search HTTP request configurable. The default value for the timeout (two minutes) can now be increased by setting the scraping-request-timeout parameter in the service configuration. |
QB-9309 | URL link doesn't work in formatted export | Qlik Sense: Fixed issue with URL link when exporting straight tables to Excel. |
QB-9401 | .NET API Field.Select and Field.ToggleSelect don't work with minus sign | Fixed the sync logic for Field::Select and GenericObject::SearchListObjectFor. |
QB-9407 | PDF downloaded as blank page | Fixed printing render of third-party extension. |
QB-9454 | Qlik Sense: Issue with information disclosure of internal FQDN and ports | Before, in case a URL was not found, a 404 HTTP error was returned along with the details of the internal URL to be used. This issue has now been fixed by removing the internal URL details from the returned payload and providing a generic "Content not found." message. |
QB-9484 | Qlik Sense Desktop error at login | Fixed an issue with Qlik Sense Desktop 2022 authentication against Qlik Sense Enterprise on Windows (QSEoW). The authentication would fail with the error "Failed to open selected hub" when QSEoW had invalid certificates and the user decided to accept the invalid certificate. |
QB-9485 | Incorrect layout of Multi KPI objects in Qlik Sense | Fixed the layout of the objects in the KPI. |
QB-9533 | Qlik Sense: Export problem when Totals label is undefined | Fixed problem with export in cases where the Totals label was undefined by falling back to a default label. |
QB-9570 | Qlik Sense: Improve error messages for script save failures | The following improvements have been done: - The timestamp for saved changes is more visible. - A dialog makes it clear if the script saving failed. |
QB-9589 | Can't use dashes (-) in the database or table name using the MySQL connector wizard | The issue has been fixed and dashes can now be used in database names and table names. |
QB-9650 | Update Node.js | Updated Node.js to address third-party issue CVE-2022-0778. For more information, see https://nodejs.org/en/blog/vulnerability/mar-2022-security-releases/. Details: May 2022 IR - updated to v14.19.1 February 2022 Patch 3 - updated to v14.19.1 November 2021 Patch 8 - updated to v12.22.11 August 2021 Patch 10 - updated to v12.22.11 May 2021 Patch 15 - updated to v12.22.11 February 2021 Patch X - updated to v12.22.11 |
QB-9717 | Qlik Sense: Bar chart showing incorrect numerical abbreviation | Fixed an issue where bar charts sometimes showed incorrect numerical abbreviation. |
QB-9789 | Qlik Sense: Large number of user groups coming from SAML\OIDC authentication caused slow performance in the hub and QMC | Some requests to the Qlik Sense Repository service would unnecessarily persist user attributes twice. This would impact performance across the Qlik Sense product. The issue has been fixed and the X-Qlik-ExtendedUserInfo header is now only included in an initial request, when the repository service compares the existing attributes with the ones coming from the Identity Provider. |
QB-9795 | Information disclosure of internal FQDN and ports | In cases where a URL was not found, a 404 HTTP error was returned along with the details of the internal URL to be used. This issue has been fixed by removing the internal URL details from the returned payload and providing a generic "Content not found." message. |
QB-9827 | Qlik MySQL connector data type conversion issue | Fixed the reading of bit(1) data type for MySQL. It is now converted the same way as for bit columns with size larger than 1. |
The following issues and limitations were identified at release time. The list is not comprehensive; it does however list all known major issues and limitations.
Workaround: turn 'touch screen mode' off from the navigation menu.
Exporting a story to PowerPoint limitations
ODBC connector: If the user name on the Microsoft Windows system running Qlik Sense Desktop contains letters that are not English alphanumeric characters, database connectors in the ODBC Connector Package do not work properly. Workaround: Change the Windows system locale to match the character set that contains the characters used in the user name. For example, if the System locale on the system running Qlik Sense Desktop is set to English and a user name contains Swedish characters, the System locale setting must be changed to Swedish for the ODBC connector to work properly.
The Qlik Salesforce Connector does not support PK chunking on sharing objects. PK chunking is supported only on parent objects.
Please refer to the online help for information about the requirements for Qlik Sense:
System requirements for Qlik Sense
https://community.qlik.com/t5/Downloads/tkb-p/Downloads
About Qlik
Qlik’s vision is a data-literate world, where everyone can use data and analytics to improve decision-making and solve their most challenging problems. A private SaaS company, Qlik offers an Active Intelligence platform, delivering end-to-end, real-time data integration and analytics cloud solutions to close the gaps between data, insights and action. By transforming data into Active Intelligence, businesses can drive better decisions, improve revenue and profitability, and optimize customer relationships. Qlik does business in more than 100 countries and serves over 50,000 customers around the world.
The WebSocket origin allow list grants access to the hub from aliases and redirected addresses. This article explains how to configure the list and includes general DOs and DONT's.
"An error occurred Connection lost" or "Bad Request the http header is incorrect on Qlik Sense Hub"
When replicating an XML datatype, the task fails with the error:
Line 7866: 00006188: 2022-04-15T13:41:25 [TARGET_APPLY ]E: RetCode: SQL_ERROR SqlState: 42000 NativeError: 9412 Message: [Microsoft][SQL Server Native Client 11.0][SQL Server]XML parsing: line 1, character 524288, '>' expected Line: 1 Column: -1 [1022502] (ar_odbc_stmt.c:2794)
If you cannot use unlimited LOB support, please increase the limited LOB size to accommodate the complete XML value.
See LOB handling options.
Replicate truncates the data in one of the XML columns. This invalidates the XML.