A scheduled Qlik Replicate task does not show up in the Executed Jobs list.
This is working as intended. The Executed Jobs tab will only show executed jobs that were scheduled to run once only. In other words, jobs scheduled to run periodically (e.g. Daily, Weekly, Monthly) will not be shown.
See Scheduling jobs.
There may be several different symptoms associated with a need to regenerate and redistribute certificates.
This article does not cover the use of a third-party certificate for end-user Hub access; it covers the certificates used for communication between the Qlik Sense services. For recommendations on how to use a third-party certificate for end-user access, see How to: Change the certificate used by the Qlik Sense Proxy to a custom third party certificate.
Do not perform the steps below in a production environment without first backing up the existing certificates. Certificates are used to encrypt information in the QRS database, such as connection strings. By recreating certificates, you may lose information in your current setup.
By removing the old or faulty certificates and restarting the Qlik Sense Repository Service (QRS), the service can recreate the correct certificates. If you only want to remove certificates, follow only the removal steps.
The instructions are to be carried out on the Qlik Sense Central Node. In the case of a multi-node deployment, verify which node is the central node before continuing.
If the failover node currently holds the central node role, fail the role back to the original central node by shutting down all the nodes (this implies downtime). Then start the original central node, reissue the certificates on it using this article, and once the central node is working, apply the article Rim node not communicating with central node - certificates not installed correctly on each rim node.
Test all data connections after the certificates are regenerated. It is likely that data connections with passwords will fail, because passwords are saved in the repository database with encryption based on a hash derived from the certificates. When the Qlik Sense signed certificates are regenerated, this hash is no longer valid and the saved data connection passwords cannot be decrypted. The customer must re-enter the password in each data connection and save. See article: Repository System Log Shows Error "Not possible to decrypt encrypted string in database"
There is no need to perform a full reinstall to propagate new certificates. Certificates are created by the QRS automatically if not found during the service startup process.
The steps in this section must be performed after recreating certificates as described above.
Execute the following query against the SenseServices database:
DROP TABLE IF EXISTS hybrid_deployment_service.mt_doc_asymmetrickeysencrypt CASCADE;
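The query above can be run with psql. A sketch only: the host, port, and user below are assumptions (the bundled Qlik Sense repository database typically listens on port 4432); adjust them for your deployment.

```shell
# Sketch: run the cleanup query against the SenseServices database.
# Host, port, and user are assumptions -- adjust for your deployment.
psql -h localhost -p 4432 -U postgres -d SenseServices \
  -c "DROP TABLE IF EXISTS hybrid_deployment_service.mt_doc_asymmetrickeysencrypt CASCADE;"
```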
Navigate to the Deployments page of the Multi-cloud Setup Console (MSC).
Delete and re-add any existing deployments by following the steps mentioned in Distributing apps from Qlik Sense Enterprise on Windows to Qlik Sense Enterprise SaaS and Distributing apps to Qlik Sense Enterprise on Kubernetes.
After the certificates have been recreated and redistributed to all of the rim nodes, the node.js certificates stored locally on the central node and all rim nodes also need to be recreated. Follow the steps below:
Test all data connections after the certificates are rebuilt. It is likely that data connections with passwords will fail, because passwords are saved in the repository database with encryption based on a hash derived from the certificates. When the Qlik Sense self-signed certificates are rebuilt, this hash is no longer valid, and the saved data connection passwords cannot be decrypted. The customer must re-enter the password in each data connection and save. See article: Repository System Log Shows Error "Not possible to decrypt encrypted string in database"
Note: if you are using an official signed server certificate from a trusted Certificate Authority
The certificate information will also appear in the QMC, under Proxies, with the certificate thumbprint listed. If you are trying to remove all traces of the certificates, this entry needs to be removed as well.
If the above does not work, see Qlik Sense Enterprise Hub and Qlik Management Console (QMC) down - bootstrap fails with "Newly created client certificate not valid; root certificate can't sign new certificates"
This Techspert Talks session addresses:
Tip: Download the LogAnalyzer app here: LogAnalysis App: The Qlik Sense app for troubleshooting Qlik Sense Enterprise on Windows logs.
00:00 - Intro
01:22 - Multi-Node Architecture Overview
04:10 - Common Performance Bottlenecks
05:38 - Using iPerf to measure connectivity
09:58 - Performance Monitor Article
10:30 - Setting up Performance Monitor
12:17 - Using Relog to visualize Performance
13:33 - Quick look at Grafana
14:45 - Qlik Scalability Tools
15:23 - Setting up a new scenario
18:26 - A Look at the QSST Analyzer App
19:21 - Optimizing the Repository Service
21:38 - Adjusting the Page File
22:08 - The Sense Admin Playbook
23:10 - Optimizing PostgreSQL
24:29 - Log File Analyzer
27:06 - Summary
27:40 - Q&A: How to evaluate an application?
28:30 - Q&A: How to fix engine performance?
29:25 - Q&A: What about PostgreSQL 9.6 EOL?
30:07 - Q&A: Troubleshooting performance on Azure
31:22 - Q&A: Which nodes consume the most resources?
31:57 - Q&A: How to avoid working set breaches on engine nodes?
34:03 - Q&A: What do QRS log messages mean?
35:45 - Q&A: What about QlikView performance?
36:22 - Closing
Resources:
LogAnalysis App: The Qlik Sense app for troubleshooting Qlik Sense Enterprise on Windows logs
Qlik Help – Deployment examples
Using Windows Performance Monitor
PostgreSQL Fine Tuning starting point
Qlik Sense Shared Storage – Options and Requirements
Qlik Help – Performance and Scalability
Q&A:
Q: Recently I'm facing Qlik Sense proxy server RAM overload, although there are 4 nodes and each node has 16 CPUs and 256 GB. We have done app optimization, like deleting duplicate apps, removing old data, removing unused fields... but RAM status is still not good. What is next to fix the performance issue? Add more nodes?
A: It depends on what you mean by “RAM status still not good”. Qlik Data Analytics software will allocate and use memory within the limits established and does not release this memory unless the Low Memory Limit has been reached and the cache needs cleaning. If RAM consumption remains high but there are no other effects, your system is working as expected.
Q: Similar to other databases, do you think we need to perform fine-tuning and clean up bad records within PostgreSQL, e.g. once per year?
A: Periodic cleanup, especially in a rapidly changing environment, is certainly recommended. A good starting point: set your Deleted Entity Log table cleanup settings to appropriate values, and avoid cleanup tasks kicking in before the user morning ramp-up.
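One common form of periodic PostgreSQL cleanup is a scheduled VACUUM/ANALYZE pass. A sketch under stated assumptions, not official Qlik guidance: the port (4432, typical for the bundled repository database), user, and database name (QSR) are examples; adjust for your setup and schedule it outside user ramp-up hours.

```shell
# Sketch: maintenance pass on the repository database.
# Port, user, and database name are assumptions -- adjust for your setup.
psql -h localhost -p 4432 -U postgres -d QSR -c "VACUUM (ANALYZE, VERBOSE);"
```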
Q: Does QlikView Server perform similarly to Qlik Sense?
A: It uses the same QIX Engine for data processing. There may be performance differences to the extent that QVW Documents and QVF Apps are completely different concepts.
Q: Is there a simple way (better than restarting QS services) to clean the cache, because cache around 90% slows down QS?
A: It’s not quite that simple. Qlik Data Analytics software (and by extension, your users) benefits from keeping data cached as long as possible. This way, users consume pre-calculated results from memory instead of computing the same results over and over. Active cache clearing is detrimental to performance. High RAM usage is entirely normal, based on the Memory Limits defined in the QMC. You should not expect Qlik Sense (or QlikView) to manage memory like regular software. If work stops, this does not mean memory consumption will go down; we expect to receive and serve more requests, so we keep as much cached as possible. Long-winded, but I hope this sets better expectations when considering “bad performance” without the full technical context.
Q: When CPU hits 100%, how do we know what the culprit is? For example, too many concurrent users loading apps/datasets, or multiple apps/QVDs reloading? Can we see that anywhere?
A: We will provide links to the Log Analysis app I demoed during the webinar, this is a great place to start. Set Repository Performance logs to DEBUG for the QRS performance part, start analysing service resource usage trends and get to know your user patterns.
Q: Can there be repository connectivity issues with too many nodes?
A: You can only grow an environment so far before hitting physical limits to communication. As a best practice, with every new node added, the QRS connection pools and DB connectivity should be reviewed and increased where necessary. The most common problem here is that you have added more nodes than the number of connections allowed to the database or Repository Services. This will almost guarantee communication issues.
Q: Do the Qlik Scalability Tools measure browser rendering time as well, or do they only work at the API layer?
A: Excellent question. They only evaluate at the API call/response level. For results that include browser-side rendering, other tools are required (e.g. LoadRunner, which is complex to set up and may need expert help).
Transcript:
Hello everyone and welcome to the November edition of Techspert Talks. I’m Troy Raney and I’ll be your host for today's session. Today's presentation is Optimizing Performance for Qlik Sense Enterprise with Mario Petre. Mario why don't you tell us a little bit about yourself?
Hi everyone; good to be here with everybody once again. My name is Mario Petre. I’m a Principal Technical Engineer in the Signature Support Team. I’ve been with Qlik over six years now and since the beginning, I’ve focused on Qlik Sense Enterprise backend services, architecture and performance from the very inception of the product. So, there's a lot of historical knowledge that I want to share with you and hopefully it's an interesting springboard to talk about performance.
Great! Today we're going to be talking about how a Qlik Sense site looks from an architectural perspective; what are things that should be measured when talking about performance; what to monitor after going live; how to troubleshoot and we'll certainly highlight plenty of resources and where to find more details at the end of the session. So Mario, we're talking about performance for Qlik Sense Enterprise on Windows; but ultimately, it's software on a machine.
That's right.
So, first we need to understand what Qlik Sense services are and what type of resources they use. Can you show us an overview from what a multi-node deployment looks like?
Sure. We can take a look at how a large Enterprise environment should be set up.
And I see all the services have been split out onto different nodes. Would you run through the acronyms quickly for us?
Yep. On a consumer node this is where your users come into the Hub. They will come in via the Qlik Proxy Service and consume applications via the Qlik Engine Service, that ultimately connects to the central node and everything else via the Qlik Repository Service.
Okay.
The green box is your front-end services. This is what end users tap into to consume data, but what facilitates that in the background is always the Repository Service.
And what's the difference between the consumer nodes on the top and the bottom?
These two nodes have a Proxy Service that balances against their own engines as well as other engines; while the consumer nodes at the bottom are only there for crunching data.
Okay.
And then we can take a look at the backend side of things. Resources are used to the extent that you're doing reloads, so you will have an engine there, as well as the primary role for the central node (active and failover), which is the Repository Service coordinating communication between all the rest of the services. You can also have a separate node for development work. And for an environment of this size, we also expect a dedicated storage solution and a dedicated central repository database host, either locally managed or in one of the cloud providers, like AWS RDS for example.
Between the front-end and back-end services where's the majority of resource consumption, and what resources do they consume?
Most of the resource allocation here is going to go to the Engine Service; and that will consume CPU and RAM to the extent that it's allocated to the machine. And that is done at the QMC level where you set your Working Set Limits. But in the case of the top nodes, the Proxy Service also has a compute cost as it is managing session connectivity between the end user's browser and the Engine Service on that particular server. And the Repository Service is constantly checking the authorization and permissions. So, ultimately front-end servers make use of both front-end and back-end resources. But you also need to think about connectivity. There is the data streaming from storage to the node where it will be consumed and then loading from that into memory. And these are three different groups of resources: you have compute; you have memory, and you have network connectivity. And all three have to be well suited for the task for this environment to work well.
And we're talking about speed and performance like, how fast is a fast network? How can we even measure that?
For any Enterprise environment, we would start at a 10 Gb network speed, and ultimately we expect a response time of 4 ms between any node and the storage backend.
Okay. So, what are some common bottlenecks and issues that might arise?
All right. So, let's take a look at some examples. The Repository Service failing to communicate with rim nodes or with local services: I would immediately try to verify that the Repository Service connection pool and network connectivity are stable and connected. Let's say apps load very, very slowly for the first time. This is where network speed really comes into play. Another example: the QMC or the Hub takes a very long time to load. For that, we would have to look into the communication between the Repository Service and the database, because that's where we store all of the metadata used to calculate your permissions.
And could that also be related to the rules that people have set up and the number of users accessing?
Absolutely. You can hurt user experience by writing complex rules.
What about lag in the app itself?
This is now being consumed by the Engine Service on the consumer node. So, I would immediately try to evaluate resource consumption on that node, primarily CPU. Another great example is high Page File usage. We prefer memory for working with applications. So, as soon as we have to cache and pull those results back from disk, performance will suffer. And ultimately, the direct connectivity: how good and stable is the network between the end user's machine and the Qlik Sense infrastructure? The symptom will appear on the end-user side, but the root cause will almost always (99.9% of the time) come down to something in the environment.
So, to get an understanding of how well the machine works and establish that baseline, what can we use?
One simple way to measure this (CPU, RAM, disk network) is this neat little tool called iPerf.
Okay. And what are we looking at here?
This is my central node.
Okay. And iPerf will measure what exactly?
How fast data transfer is between this central node and a client machine or another server.
And where can people find iPerf?
Great question. iPerf.fr
And it's a free utility, right?
Absolutely.
So, I see you've already got it downloaded there.
Right. You will have to download this package, both on the server and the client machine that you want to test between. We'll run this “As Admin.” We call out the command; we specify that we want it to start in “server mode.” This will be listening for connection attempts.
Okay.
We can define the port. I will use the default one. Those ports can be found in Qlik Help.
Okay.
The format for the output in megabyte; and the interval for refresh 5 seconds is perfectly fine. And then, we want as much output as possible.
Okay.
First, we need to run this. There we go. It started listening. Now, I’m going to switch to my client machine.
So, iPerf is now listening on the server machine and you're moving over to the client machine to run iPerf from there?
Right. Now, we've opened a PowerShell window into iPerf on the client machine. Then we call the iPerf command. This time, we're going to tell it to launch in “Client Mode.” We need to specify an IP address for it to connect to.
And that's the IP address of the server machine?
Right. Again, the port; the format so that every output is exactly the same. And here, we want to update every second.
Okay.
And this is a super cool option: if we use the bytes flag, we can specify the size of the data payload. I’m going to go with a 1 Gb file (1024 Mb). You can also define parallel connections. I want 5 for now.
So, that's like 5 different users or parallel streams of activity of 1 Gb each between the server machine and this client machine?
Right. So, we actually want to measure how fast can we acquire data from the Qlik Sense server onto this client machine. We need to reverse the test. So, we can just run this now and see how fast it performs.
Okay. And did the server machine react the same way?
You can see that it produced output on the listening screen. This is where we started. And then it received and it's displaying its own statistics. And if you want to automate this, so that you have a spot check of throughput capacity between these servers, we need to use the log file option. And then we give it a path. So, I’m gonna say call this “iperf_serverside…” And launch it. And now, no output is produced.
Okay.
So, we can switch back to the client machine.
Okay. So, you're performing the exact same test again, just storing everything in a log file.
The test finished.
Okay. So, that can help you compare between what's being sent to what's being received, and see?
Absolutely. You can definitely have results presented in a way that is easy to compare across machines and across time. And initial results gave us a throughput per file of around 43.6, 46, thereabouts megabytes per second.
So, what about for an end user who's experiencing issues? Can you use iPerf to test the connectivity from a user machine on a different network?
Yep. So, in the background we will have our server; it's running and waiting for connections. And let's run this connection now from the client machine. We will make sure that the IP address is correct; default port; the output format in megabytes; we want it refreshed every second; and we are transferring 1 Gb in 5 parallel streams in reverse order. Meaning: we are copying from the server to the client machine. And let's run it.
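The iPerf run walked through above can be summarized as two commands. A sketch using iperf3 syntax; the server IP address, port, payload size, and log path are example values, not the ones from the demo environment.

```shell
# On the Qlik Sense server: listen for connections (server mode),
# output in megabytes, refresh every 5 s, verbose, results to a log file.
iperf3 -s -p 5201 -f M -i 5 -V --logfile C:\iperf\iperf_serverside.log

# On the client machine: connect to the server (IP is an example),
# transfer 1 GB over 5 parallel streams, reversed (-R) so data flows
# server -> client, i.e. measuring how fast the client can pull data.
iperf3 -c 192.0.2.10 -p 5201 -f M -i 1 -n 1024M -P 5 -R
```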
Just seeing those numbers, they seem to be smaller than what we're seeing from the other machine.
Right. Indeed. I have some stuff in between to force it to talk a little slower. But this is one quick way to identify a spotty connection. This is where a baseline becomes gold; being able to demonstrate that your platform is experiencing a problem. And to quantify and to specify what that problem is going to reduce the time that you spend on outages and make you more effective as an admin.
Okay. That was network. How can admins monitor all the other performance aspects of a deployment? What tools are available and what metrics should they be measuring?
Right. That's a great question. The very basic is just Performance Monitor from Windows.
Okay.
The great thing about that is that we provide templates that also include metrics from our services.
Can you walk us through how to set up the Performance Monitor using one of those templates?
Sure thing. So, we're going to switch over first to the central node. So, the first thing that I want to do is create a folder where all of these logs will be stored.
Okay. So, that's a shared folder, good.
And this article is a great place to start. So, we can just download this attachment
So, now it's time to set up a Performance Monitor proper. We need to set up a new Data Collector Set.
Giving it a name.
And create from template. Browse for it, and finish.
Okay. So it’s got the template. That's our new one Qlik Sense Node Monitor, right?
Yep. You'll have multiple servers all writing to the same location. The first thing is to define the name of each individual collector, and you do that here. You can also provide a subdirectory for these collectors, and I suggest having one per node name. I will call this Central Node.
Everything that comes from this node, yeah.
Correct. You can also select a schedule for when to start these. We have an article on how to make sure that Data Collectors are started when Windows starts. And then a stop condition.
Now, setting up monitors like this; could this actually impact performance negatively?
There is always an overhead to collecting and saving these metrics to a file. But the overhead is negligible.
Okay.
I am happy with how this is defined. Now, this static collector on one of the nodes is already set up. There is an option here that's called Data Manager. What's important here to define is to set a Minimum Free Disk. We could go with 10 Gb, for example; and you can also define a Resource Policy. The important bit is Minimum Free Disk. We want to Delete the Oldest (not the largest) in the Data Collector itself. We should change that directory and make sure that it points to our central location instead of locally; and we'll have to do this for every single node where we set this up.
Okay. So, that's that shared location?
Yep.
And you run the Data Collector there. And it creates a CSV file with all those performance counters. Cool.
So, here we have it now. If we just take a very quick look inside, we'll see a whole bunch of metrics. And if you want to visualize these really really quick, I can show you a quick tip that wasn't on the agenda but since we're here: on Windows, there is a built-in tool called Relog that is specifically designed for reformatting Performance Monitor counters. So, we can use Relog; we'll give it the name of this file; the format will be Binary; the output will be the same, but we'll rename it to BLG; and let's run it.
And now it created a copy in Binary format. Cool thing about this Troy is that: you can just double click on it.
It's already formatted to be a little more readable. Wow! Check that out.
There we go. Another quick tip: since we're here, first thing to do is: select everything and Scale; just to make sure that you're not missing any of the metrics. And this is also a great way to illustrate which service counters and system counters we collect. As you can see, there's quite a few here.
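The Relog conversion shown above can be reproduced from a command prompt on any Windows machine. A sketch with example file names:

```shell
# Convert a Performance Monitor CSV log to binary (.blg) format;
# the resulting file can be opened in Performance Monitor by double-clicking.
relog "Central Node.csv" -f BIN -o "Central Node.blg"
```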
Okay. So, that Performance Monitor is, it's set up; it's running; we can see how it looks; and that is going to run all the time or just when we manually trigger it?
You can definitely configure it to run all the time, and that would be my advice. Its value is really realized as a baseline.
Yeah. Exactly. That was pretty cool seeing how that worked, using all the built-in utilities. And that Relog formatting for the Process Monitor was new to me. Are there any other tools you like to highlight?
Yeah. So, Performance Monitor is built-in. For larger Enterprises that may already be monitoring resources in a centralized way, there's no reason why you shouldn't include the Sense resources in that live monitoring. This could be done via different solutions out there; a few come to mind, like Grafana, Datadog, and Butler SOS, the last one from one of our own Qlik luminaries.
Can we take a quick look at Grafana? I’ve heard of that but never seen it.
Sure thing. This is my host monitor sheet. It's nowhere built to a corporate standard, but you can see here I’m looking at resources for the physical host where these VMs are running as well as the domain controller, and the main server where we've been running our CPU tests. And the great part about this is I have historical data as far back I believe as 90 days.
So, this is a cool tool that lets you like take a look at the performance and zoom-in and find the processes that might be causing some peaks or anything you want to investigate?
Right. Exactly. At least come up with a narrow time frame for you to look into with the other tools and, again, narrow down the window of your investigation.
Yeah, that could be really helpful. Now I wanted to move on to the Qlik Sense Scalability Tools. Are those available on Qlik community?
That's right. Let me show you where to find them. You can see that we support all current versions including some of the older ones. You will have to go through and download the package and the applications used for analysis afterwards. There is a link over here. So, once the package is downloaded, you will get an installer. And the other cool thing about Scalability Tools is that you can use it to pre-warm the cache on certain applications since Qlik Sense Enterprise doesn't support application pre-loading.
Oh, cool. So, you can throttle up applications into memory like in QlikView. Can we take a look at it?
Yes, absolutely. This is the first thing that you'll see. We'll have to create a new connection. So, I’ll open a simple one that I’ve defined here and we can take a look at what's required just to establish a quick connection to your Qlik Sense site.
Okay, but basically the scenario that you're setting up will simulate activity on a Qlik Sense site to test its performance?
Exactly. You'll need to define your server hostname. This can be any of your proxy nodes in the environment. The virtual proxy prefix. I’ve defined it as Header and authentication method is going to be WebSocket.
Okay.
And then, if we want to look at how virtual users are going to be injected into the system, scroll over here to the user section. Just for this simple test, I’ve set it up for User List where you can define a static list of users like so: User Directory and UserName.
Okay. So, it's going to be taking a look at those 2 users you already predefined and their activity?
Exactly. We need to test the connection to make sure that we can connect to the system. Connection Successful. And then we can proceed with the scenario. This is very simple but let me show you how I got this far. So, the very first thing that we should do is to Open an App.
So, you're dragging away items?
Yep. I’m removing actions from this list. Let's try to change the sheet. A very simple action. And now we have four sheets, and we'll go ahead and select one of them.
Okay, so far, we have Opening the App and immediately changing to a sheet?
Yep. That's right. This will trigger actions in sequence exactly how you define them. It will not take into consideration things like Think Time. I will just define a static wait of 15 seconds, and then you can make selections.
But this is an amazing tool for being able to kind of stress test your system.
It's very very useful and it also provides a huge amount of detail within the results that it produces. One other quick tip: while defining your scenario, use easy to read labels, so that you can identify these in the Results Application. Let's assume that the scenario is defined. We will go ahead and add one last action and that is: to close, to Disconnect the app. We'll call this “OpenApp.” We'll call this “SheetChange.” Make sure you Save. The connection we've tested; we've defined our list of users. First, let's run the scenario. There is one more step to define and that is: to configure an Executor that will use this scenario file to launch a workload against our system. Create a New Sequence.
This is just where all these settings you're defining here are saved?
Correct. This is simply a mapping between the execution job that you're defining and which script scenario should be used. We'll go ahead and grab that. Save it again; and now we can start it. And now in the background if we were to monitor the Qlik Sense environment, we would see some amount of load coming in. We see that we had some kind of issue here: empty ObjectID. Apparently I left something in the script editor; but yeah, you kind of get the idea.
So, all this performance information would then be loaded into an app that is part of the package downloaded from Qlik community. How does that look?
So, here you will see each individual result set, and you can look at multiple exerciser runs in a single application. Unfortunately, we don't have more than one here to showcase that, but you would see multiple colored lines. There are metrics for a little bit of everything: your session ramp, your throughput by minute; you can change these.
CPU, RAM. This is great.
Exactly. CPU and RAM. These are not connected; we don't have those logs, but you would have them for a run set up on your system. These come from Performance Monitor as well, so you could just use those logs provided that the right template is in place. We see Response Time Distribution by Action, and these are the ones that I asked you to change and name so that they're easy to understand.
Once your deployment is large enough to need to be multi-node and the default settings are no longer the best ones for you, what needs to be adjusted with the Repository Service to keep it from choking, or to improve its performance?
That's a great question Troy. So, the first thing that we should take a look at is how the Repository communicates with the backend Database and vice versa. The connection pool for the Repository is always based on core count on the machine. And the best rule of thumb that we have to date is to take your core count on that machine, multiply it by 5, and that will be the max connection pool for the Repository Service for that node.
Can you show us where that connection pool setting can be changed?
Yes. So, we will go ahead and take a look. Here we are on the central node of my environment. You'll have to find your Qlik installation folder. We'll navigate to the Repository folder, Util, QlikSenseUtil, and we'll have to launch this “As Admin.”
Okay.
We'll have to come to the Connection String Editor. Make sure that the path matches. We just have to click on Read so that we get the contents of these files. And the setting that we are about to change is this one.
Okay. So, the maximum number of connections that the Repository can make?
Yes. And this is (again) for each node going towards the Repository Database.
Okay.
Again, this should be a factor of CPU cores multiplied by 5. If 90 is higher than that result, leave 90 in place. Never decrease it.
Okay, that's a good tip.
Right. I change this to 120. I have to Save. What I like to do here is: clear the screen and hit Read again; just to make sure that the changes have been persisted in the file.
Okay.
Once that's done, we can close this. We can restart the environment. We can get out of here.
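The rule of thumb described above (CPU cores multiplied by 5, never decreasing below the default of 90) can be sketched as a quick calculation; the core count below is an example, not a recommendation.

```shell
cores=24                                   # example: a 24-core node (use your actual count)
pool=$(( cores * 5 ))                      # rule of thumb: 5 connections per core
if [ "$pool" -lt 90 ]; then pool=90; fi    # never decrease below the default of 90
echo "Repository max connection pool: $pool"
```

On this example 24-core node the result is 120, which is the value entered in the Connection String Editor above; on a 16-core node the formula yields 80, so the default 90 would be kept.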
So, there you adjusted the setting of how many connections this node can make to the QSR. Then assuming we do the same on all nodes, where do we adjust the total number of connections the Repository itself can receive?
That should be the sum of the connection pool values from all of your nodes, plus 110 extra for the central node. By default, here is where you can find that config file: Repository, PostgreSQL, and we'll have to open this one, postgresql.conf. Towards the end of the file…
Just going all the way to the bottom.
Here we have my Max Connections is 300.
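The sizing described above (sum of every node's pool plus roughly 110 extra for the central node) can be sketched with example numbers; the node counts and pool sizes below are illustrative only.

```shell
# Example site: central node pool of 120, three rim nodes at 90 each,
# plus ~110 extra for the central node's own overhead.
central=120; rim=90; rim_nodes=3; overhead=110
max_connections=$(( central + rim * rim_nodes + overhead ))
echo "max_connections = $max_connections"   # value to set in postgresql.conf
```

For this example the result is 500, comfortably above the 300 shown in the demo environment.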
Okay. One other setting you mentioned was the Page File and something to be considered. How would we make changes or adjust that setting?
Right. So, this is a Windows level setting that's found in Advanced System Settings; Advanced tab; Performance; and then again Advanced; and here we have Virtual Memory.
Okay.
We have to hit Change. We'll have to leave it at System Managed or understand exactly which values we are choosing and why. If you're not sure, the default should always be System Managed.
Now, I want to know what resources are available for Qlik Sense admins; specifically, what is the Admin Playbook?
It's a great starting place for understanding what duties and responsibilities one should be thinking about when administering a Qlik Sense site.
So, these are a bunch of tools built by Qlik to help analyze your deployment in different ways. I see weekly, monthly, quarterly, yearly, and a lot of different things are available there.
Yeah. So, we can take a look at Task Analysis, for example. The first time you run it, it's going to take about 20 minutes; thereafter about 10. The benefits: it shows you really in depth how to get to the data and then how to tweak the system to work better based on what you have.
Yeah, that's great.
Right? So, not only do we put the tools in your hands, but we also show you how to build them. See here: we have instructions on how to come up with these objects from scratch. An absolute must-read for every system admin out there.
Mario, we've talked about optimizing the Qlik Sense Repository Service, but not about Postgres. Does a larger Enterprise-level deployment affect its performance?
Sure. The thing about Postgres is, again: by default it's configured for compatibility, not performance. So, it's another component that has to be targeted for optimization.
The detail there that anything over 1 Gb from Postgres might get paged - that sounds like it could certainly impact performance.
Right, because the buffer setting we have by default is set to 1 GB; and that means only 1 GB of physical memory will be allocated to Postgres work. Now, we're talking about a large environment: 500 to maybe 5,000 apps, thousands of users, with about 1,000 of them at peak concurrency per hour.
So, can we increase that Shared Buffer setting?
Absolutely. And in fact, I want to direct you to a really good article on performance optimization for PostgreSQL. When we talk about fine-tuning, this article is where I’d like to get started. It covers certain important factors like Shared Buffers. This is what we set to 1 GB by default. Their recommendation is to start with 1/4 of the physical memory in your system, and 1 GB is definitely not a quarter of the memory on most machines out there. So, it needs tweaking.
And again these are settings to be changed on the machine that's hosting the Repository Database, right?
That's correct. That's correct.
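The "start at 1/4 of physical RAM" guidance translates directly into a `shared_buffers` value. A minimal sketch, assuming a 64 GB host for the repository database (adjust `ram_gb` to your machine):

```shell
# shared_buffers starting point = physical RAM / 4 (PostgreSQL guidance
# cited above; 64 GB is an assumed example).
ram_gb=64
shared_buffers_gb=$((ram_gb / 4))
# In postgresql.conf this would read: shared_buffers = 16GB
echo "shared_buffers = ${shared_buffers_gb}GB"
```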
Now, is there an app that you're aware of that would be good to kind of look at all these logs and analyze what's going on with the performance?
Absolutely. This is an application that was developed to better understand all of the transactions happening in a particular environment. It reads the log files collected with the Log Collector either via the tool or the QMC itself.
Okay.
It's not built for active monitoring, but rather to enhance troubleshooting.
Sure. So, basically it's good for looking at a short period of time to help troubleshooting?
Right. The Repository itself communicates over APIs between all the nodes and keeps track of all of the activities in the system; and these translate to API calls. If we want to focus on Repository API calls, we can start by looking at transactions.
Okay.
So, this will give us detail about cost. For example, per REST or API call, we can see which endpoints take the longest and the duration per user, and this gives you an opportunity to start at a very high level and slowly drill in, both by message type and timeframe. Another sheet is Threads, Endpoints and Users; here you have performance information about how many worker threads the Repository Service is able to start and what the Repository CPU consumption is, so you can easily identify an outlier. For example, here just by the count, we can see that the preview privileges call for objects is called…
Yeah, a lot.
Over half a million times, right? And represents 73% of the CPU compute cost.
Wow, nice insights.
And then if we look here at the bottom, we can start evaluating time-based patterns and select specific time frames and go into greater detail.
So, I’m assuming this can also show resource consumption as well?
Right. CPU, memory in gigabytes, and memory in percent. One neat trick is to go to the QMC, look at how you've defined your Working Set Limits, and then pre-define reference lines in this chart, so that it's easier to visualize when those thresholds are close to being reached or breached. You do that via Add-ons > Reference lines, and you can define them like this.
That's just to sort of set that to match what's in the QMC?
Exactly.
Makes a powerful visualization. So, you can really map it.
Absolutely. And you can always drill down into specific points in time. We can go and check the Log Details Engine Focus sheet; this will allow us to browse over time, select things like errors and warnings alone, and then we will have all of the messages that are coming from the log files and what their sources are.
Yeah. That's great to have it all kind of collected here in one app, that's great.
Indeed.
To summarize: to understand system performance, a baseline needs to be established. That involves setting up some monitoring. There are lots of options and tools available to do that, and it's really about understanding how the system performs so that measurement and comparison are possible if things don't perform as expected.
And to begin to optimize as well.
Okay, great. Well now, it's time for Q&A. Please submit your questions through the Q&A panel on the left side of your On24 console. Mario, which question would you like to address first?
We have some great questions already. So, let's see - first one is: how can we evaluate our existing Qlik Sense applications?
This is not something that I’ve covered today, but it's a great question. We have an application on Community called App Metadata Analyzer. You can import this into your system and use it to understand the memory footprint of applications and objects within those applications and how they scale inside your system. It will very quickly illustrate if you are shipping applications with extremely large data files (for example) that are almost never used. You can use that as a baseline both for optimizing local applications and in your efforts to migrate to SaaS. And if you feel like you don't want to bother with all of this performance monitoring and optimization, you can always choose to use our services and we'll take care of that for you.
Okay, next question.
So, the next question: worker scheduler errors and engine performance. How to fix?
I think I would definitely point you back to this Log Analysis application. Load the time frame where you think something bad happened, and see what kind of insights you can get by playing with the data, by exploring the data. Then narrow that search down. If you find a specific pattern that seems like the product is misbehaving, talk to Qlik Support. We'll evaluate that with you and determine whether this is a defect or just a quirk of how your system is set up. But that Sense Log Analysis app is a great place to start. And going back to the sheet that I showed: Repository and Engine metrics are all collected there. These come from the performance logs that Qlik Sense already produces. You don't need to load any additional performance counters to get those details.
Okay.
All right. So, there is a question here about Postgres 9.6 and the fact that it's soon coming to end of life. And I think this is a great moment to talk about this. Qlik Sense client-managed, or Qlik Sense Enterprise for Windows, supports Postgres 12.5 for new installations since the May release. If you have an existing installation, 9.6 will continue to be used; but there is an article on Community on how to in-place upgrade that to 12.5 as a standalone component. So, you don't have to continue using 9.6 if your IT policy is complaining about the fact that it's soon coming to end of life. As we said, we are aware of this fact; and in fact, we are shipping a new version as of the May 2021 release.
Oh, great.
So, here's an interesting question. If we have Qlik Sense in Azure on a virtual machine, why is the performance so sluggish? How do you fine-tune it? I guess first we need to understand what you mean by sluggish. But the first thing that I want to point to is: different instance types. Virtual machines in public cloud providers are optimized for different workloads, and the same is true for AWS, Azure, and Google Cloud Platform. You will have virtual machines that are optimized for storage, ones that are optimized for compute tasks or application analytics, and some that are optimized for memory. Make sure that you've chosen the right instance type and the right level of provisioned IOPS for this application. If you feel that your performance is sluggish, start increasing those resources. Go one tier up and re-evaluate until you find an instance type that works for you. If you wish to have these results beforehand, consider using the Scalability Tools together with some of your applications against different instance types in Azure to determine which ones work best.
Just to kind of follow up on that question, if we're looking at that multi-node example from Qlik help, what nodes would you consider would require more resources?
Worker nodes in general. And those would be front and back-end.
So, a worker node is something with an engine, right?
Exactly. Something with an engine. It can either be front-facing, together with a proxy, to serve content, or back-end, together with a scheduler service, to perform reload tasks. These will consume all the resources available on a given machine.
Okay.
And this is how the Qlik Sense engine is developed to work. These resources are almost never released unless there is a reason for it, because keeping those results cached is what makes the product fast.
Okay.
Oh, here's a great one about avoiding working set breaches on engine nodes. The question says: do you have any tips for avoiding the max memory threshold in the QIX engine? We didn't really cover this aspect, but as you know the engine allows you to configure both a lower and a higher memory limit. To understand how these work, I want to point you back to that QIX engine white paper. The system will perform certain actions when these thresholds are reached. The first prompt that I have for you in this situation is: understand whether these limits are far away from your physical memory limit. By default, Qlik Sense (I believe) uses 70 / 90 as the low and high working sets on a machine. With a lot of RAM, let's say 256 GB to half a terabyte, if you leave that low working set limit at 70 percent, that means that by default 30 percent of your physical RAM will not be used by Qlik Sense. So, always keep in mind that these percentages are based on the physical amount of RAM available on the machine, and as soon as you deploy large machines (large meaning 128 GB and up) you have to redefine these parameters. Raise them so that you utilize almost all of the resources available on the machine, and you should be able to visualize that very easily in the Log Analysis app by going to the Engine Load sheet and inserting reference lines based on where your current working sets are. Of course, the only real way to avoid a working set limit issue is to make sure that you have enough resources and that the system is configured to utilize them: raise the limits and allow the product to use as much RAM as it can without interfering with Windows operations (which is why you should never set these to something like 98 or 99). Windows needs RAM to operate by itself, and if we let Qlik Sense take all of it, it will break things.
If you've done that and you're still having performance issues, that means you need more resources.
Yeah. It makes sense.
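The arithmetic behind that point can be sketched quickly. This is illustration only; the 512 GB machine size is an assumption:

```shell
# How much RAM the default 70% low working set limit leaves unused
# on a large machine (512 GB is an assumed example).
ram_gb=512
low_working_set_pct=70
unused_gb=$(( ram_gb * (100 - low_working_set_pct) / 100 ))
echo "~${unused_gb} GB of physical RAM left unused at the default limit"
```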
Oh, so here is another interesting question about understanding what certain Qlik Repository Service (QRS) log messages say. The question says: we try to meet the recommendation that network latency to the persistence layer should be less than 4 ms, but we consistently see "QRS security management retrieved privileges in so many milliseconds" in our logs. Could this be a Repository Service issue, or where would you suggest we investigate first? This is an info-level message that you are reporting, and it's simply telling you how long it took the Repository Service to compute the result for that request. That doesn't mean that this is how long it took to talk to the database and back, or how long it took for the request to travel from the client to the server; only how long it took for the Repository Service to look up the metadata, look up the security rules, and then return a result based on that. And I would say this coming back in 384 milliseconds is rather quick. It depends on how you've defined these security rules. If these security rules are super simple and you are still getting slow responses, we would definitely have to look at resource consumption. But if you want to know how these calls affect resource consumption on the Repository and Postgres side, go back to that Log Analysis app. Raise your Repository performance logs in the QMC to Debug level so that you get all of the performance information about how long each call took to execute, and try to establish some patterns. See if you have calls that take longer to execute than others, and where those are coming from: any specific apps, any specific users? All of these answers come from drilling down into the data via the app that I demoed.
Okay Mario, we have time for one last question.
Right. And I think this is an excellent one to end on. We talked a whole bunch here about Qlik Sense, but all of this also applies to QlikView environments. We are always looking at taking a step back and considering all of the resources that are playing in the ecosystem, not just the product itself. And the question asks: is QlikView Server performance similar to Qlik Sense in how it handles resources? The answer is: yes. The engine is exactly the same in both products. If you read that white paper, you will understand how it works in both QlikView and Qlik Sense. And the things that you should do to prepare for performance and optimization are exactly the same in both products. Excellent question.
Great. Well, thank you very much Mario!
Oh, it's been my pleasure Troy. That was it for me today. Thank you all for participating. Thank you all for showing up. Thank you Troy for helping me through this very very complicated topic. It's been a blast as always. And to our customers and partners, looking forward to seeing your questions and deeper dives into logs and performance on community.
Okay, great! Thank you everyone! We hope you enjoyed this session. Thank you to Mario for presenting. We appreciate getting experts like Mario to share with us. Here's our legal disclaimer and thank you once again. Have a great rest of your day.
How are sessions counted in Qlik Sense?
The following are examples of how sessions are counted within Qlik Sense.
Sessions will be terminated after the currently configured Session timeout in the Qlik Sense Proxy.
If the Qlik Sense Engine or Proxy is terminated or crashes, sessions are ended right away.
Once the maximum number of parallel user connections (5) is reached, this will be documented in the AuditSecurity_Repository log. To identify if this is the issue, review the relevant log and review how the user is interacting with the system.
The log is stored in:
C:\Programdata\Qlik\Sense\Log\Repository\Audit\AuditSecurity_Repository.txt
The related message reads:
Access was denied for User: 'Domain\USER', with AccessID '264ff070-6306-4f1b-85db-21a8468939b5', SessionID: 'e3cd957b-a501-4bec-a3f8-d35170a73efa', SessionCount: '5', Hostname: '::1', OperationType: 'UsageDenied'
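To spot these denials quickly, one option (assuming a POSIX shell such as Git Bash on the server, where `C:\` is mounted as `/c/`) is to count the denial marker in the audit log:

```shell
# Count session-denial events in the audit log; the path is the
# default from this article.
AUDIT_LOG="/c/ProgramData/Qlik/Sense/Log/Repository/Audit/AuditSecurity_Repository.txt"
grep -c "OperationType: 'UsageDenied'" "$AUDIT_LOG"
```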
Troubleshoot too many sessions active in parallel
Qlik Sense April 2018 and later - Service account getting "You cannot access Qlik Sense because you have no access pass"
The Qlik Sense log files can be easily collected using the Log Collector.
The Log Collector is embedded in the Qlik Sense Management Console. It is the last item listed in the Configure Systems section.
For instructions on how to use the Log Collector, see Log collector (help.qlik.com).
Content
The user must be a root admin and have administrative permissions.
The best way to gather these logs is to use the Qlik Sense Log Collector. If the tool is not included in your install, it can be downloaded from this article.
This list provides an overview of what system information the Qlik Sense Log collector accesses and collects.
C:\Windows\System32\whoami.exe
C:\Windows\System32\netstat.exe -anob
C:\Windows\System32\tasklist.exe /v
C:\Windows\System32\netsh.exe advfirewall show allprofiles
C:\Windows\System32\ipconfig.exe /all
C:\Windows\System32\iisreset.exe /status
C:\Windows\System32\reg.exe query "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings" | find /i "ProxyEnable"
C:\Windows\System32\reg.exe query "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings" | find /i "proxyserver"
C:\Windows\System32\reg.exe query "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings" | find /i "AutoConfigURL"
C:\Windows\System32\reg.exe query "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings" | find /i "ProxyOverride"
C:\Windows\System32\ping.exe google.com
C:\Windows\System32\net.exe use
C:\Windows\System32\wbem\wmic.exe /OUTPUT:STDOUT logicaldisk get size
C:\Windows\System32\net.exe localgroup "Administrators"
C:\Windows\System32\net.exe localgroup "Qlik Sense Service Users"
C:\Windows\System32\net.exe localgroup "Performance Monitor users"
C:\Windows\System32\net.exe localgroup "QlikView Administrators"
C:\Windows\System32\net.exe localgroup "QlikView Management API"
C:\Windows\System32\systeminfo.exe
C:\Windows\System32\wbem\wmic.exe /OUTPUT:STDOUT product get name
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -command "gwmi win32_service | select Started
C:\Windows\System32\gpresult.exe /z
C:\Windows\System32\secedit.exe /export /areas USER_RIGHTS /cfg
C:\Windows\System32\secedit.exe /export /areas
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -command "Get-ChildItem -Recurse Cert:\currentuser\my | Format-list"
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -command "Get-ChildItem -Recurse Cert:\currentuser\Root | Format-list"
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -command "Get-ChildItem -Recurse Cert:\localmachine\my | Format-list"
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -command "Get-ChildItem -Recurse Cert:\localmachine\Root | Format-list"
$FormatEnumerationLimit=-1;$Session = New-Object -ComObject Microsoft.Update.Session;$Searcher = $Session.CreateUpdateSearcher();$historyCount = $Searcher.GetTotalHistoryCount();$Searcher.QueryHistory(0, $historyCount) | Select-Object Title, Description, Date, @{name="Operation"; expression={switch($_.operation){1 {"Installation"}; 2 {"Uninstallation"}; 3 {"Other"}}}} | out-string -Width 1024
C:\Windows\System32\netsh.exe http show urlacl
C:\Windows\System32\netsh.exe http show sslcert
Versions of Qlik Sense Enterprise on Windows prior to May 2021 do not include the Log Collector.
If the Qlik Sense Log Collector does not work, you can gather the logs manually.
For information on when logs are archived, see How logging works in Qlik Sense Enterprise on Windows.
| Persistence Mechanism | Current Logs (Active Logs) | Archived Logs |
| Shared (Sense 3.1 and newer) | C:\ProgramData\Qlik\Sense\Log | Defined in the QMC under CONFIGURE SYSTEM > Service Cluster > Archived logs root folder (e.g. \\QLIKSERVER\QlikShare\ArchivedLogs) |
| Synchronized (Sense 3.1 and older) | C:\ProgramData\Qlik\Sense\Log | C:\ProgramData\Qlik\Sense\Repository\Archived Logs |
Note: Depending on how long the system has been running, this folder can be very large so you will want to include only logs from the time frame relevant to your particular issue; preferably a day before the issue began occurring.
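One way to keep the collection small is to copy only files modified around the issue. A minimal sketch, assuming a POSIX shell (such as Git Bash) on the server; the source path is the default from the table above, and the destination folder is an arbitrary example:

```shell
# Copy only log files modified in the last 2 days into a folder to
# send to support (adjust SRC, DEST and -mtime for your time frame).
SRC="/c/ProgramData/Qlik/Sense/Log"
DEST="/tmp/qlik-logs-for-support"
mkdir -p "$DEST"
find "$SRC" -type f -mtime -2 -exec cp {} "$DEST" \;
```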
Qlik Sense Enterprise Client-Managed offers a range of Monitoring Applications that come pre-installed with the product.
Qlik Cloud offers the Data Capacity Reporting App for customers on a capacity subscription, and additionally customers can opt to leverage the Qlik Cloud Monitoring apps.
This article provides information on available apps for each platform.
The Data Capacity Reporting App is a Qlik Sense application built for Qlik Cloud, which helps you to monitor the capacity consumption for your license at both a consolidated and a detailed level. It is available for deployment via the administration activity center in a tenant with a capacity subscription.
The Data Capacity Reporting App is a fully supported app distributed within the product. For more information, see Qlik Help.
The Access Evaluator is a Qlik Sense application built for Qlik Cloud, which helps you to analyze user roles, access, and permissions across a tenant.
The app provides:
For more information, see Qlik Cloud Access Evaluator.
The Answers Analyzer provides a comprehensive Qlik Sense dashboard to analyze Qlik Answers metadata across a Qlik Cloud tenant.
It provides the ability to:
For more information, see Qlik Cloud Answers Analyzer.
The App Analyzer is a Qlik Sense application built for Qlik Cloud, which helps you to analyze and monitor Qlik Sense applications in your tenant.
The app provides:
For more information, see Qlik Cloud App Analyzer.
The Automation Analyzer is a Qlik Sense application built for Qlik Cloud, which helps you to analyze and monitor Qlik Application Automation runs in your tenant.
Some of the benefits of this application are as follows:
For more information, see Qlik Cloud Automation Analyzer.
The Entitlement Analyzer is a Qlik Sense application built for Qlik Cloud, which provides Entitlement usage overview for your Qlik Cloud tenant for user-based subscriptions.
The app provides:
For more information, see The Entitlement Analyzer.
The Reload Analyzer is a Qlik Sense application built for Qlik Cloud, which provides an overview of data refreshes for your Qlik Cloud tenant.
The app provides:
For more information, see Qlik Cloud Reload Analyzer.
The Report Analyzer provides a comprehensive dashboard to analyze metered report metadata across a Qlik Cloud tenant.
The app provides:
For more information, see Qlik Cloud Report Analyzer.
Do you want to automate the installation, upgrade, and management of your Qlik Cloud Monitoring apps? With the Qlik Cloud Monitoring Apps Workflow, made possible through Qlik's Application Automation, you can:
For more information and usage instructions, see Qlik Cloud Monitoring Apps Workflow Guide.
The OEM Dashboard is a Qlik Sense application for Qlik Cloud designed for OEM partners to centrally monitor usage data across their customers’ tenants. It provides a single pane to review numerous dimensions and measures, compare trends, and quickly spot issues across many different areas.
Although this dashboard is designed for OEMs, it can also be used by partners and customers who manage more than one tenant in Qlik Cloud.
For more information and to download the app and usage instructions, see Qlik Cloud OEM Dashboard & Console Settings Collector.
With the exception of the Data Capacity Reporting App, all Qlik Cloud monitoring applications are provided as-is and are not supported by Qlik. Over time, the APIs and metrics used by the apps may change, so it is advised to monitor each repository for updates and to update the apps promptly when new versions are available.
If you have issues while using these apps, support is provided on a best-efforts basis by contributors to the repositories on GitHub.
The Operations Monitor loads service logs to populate charts covering performance history of hardware utilization, active users, app sessions, results of reload tasks, and errors and warnings. It also tracks changes made in the QMC that affect the Operations Monitor.
The License Monitor loads service logs to populate charts and tables covering token allocation, usage of login and user passes, and errors and warnings.
For a more detailed description of the sheets and visualizations in both apps, visit the story About the License Monitor or About the Operations Monitor that is available from the app overview page, under Stories.
Basic information can be found here:
The License Monitor
The Operations Monitor
Both apps come pre-installed with Qlik Sense.
If a direct download is required: Sense License Monitor | Sense Operations Monitor. Note that Support can only be provided for Apps pre-installed with your latest version of Qlik Sense Enterprise on Windows.
The App Metadata Analyzer app provides a dashboard to analyze Qlik Sense application metadata across your Qlik Sense Enterprise deployment. It gives you a holistic view of all your Qlik Sense apps, including granular level detail of an app's data model and its resource utilization.
Basic information can be found here:
App Metadata Analyzer (help.qlik.com)
For more details and best practices, see:
App Metadata Analyzer (Admin Playbook)
The app comes pre-installed with Qlik Sense.
Looking to discuss the Monitoring Applications? Here we share key versions of the Sense Monitor Apps and the latest QV Governance Dashboard as well as discuss best practices, post video tutorials, and ask questions.
LogAnalysis App: The Qlik Sense app for troubleshooting Qlik Sense Enterprise on Windows logs
Sessions Monitor, Reloads-Monitor, Log-Monitor
Connectors Log Analyzer
All Other Apps are provided as-is and no ongoing support will be provided by Qlik Support.
Content
Qlik Cloud is designed to support a single interactive Identity Provider (IdP) per tenant.
This approach enhances security, governance, and operational control while simplifying authentication management. Organizations that require multiple identity sources can achieve this by using a federated IdP (such as Azure Entra, Auth0, Keycloak, or Okta) to consolidate authentication and seamlessly connect it to Qlik Cloud.
Qlik Cloud allows organizations to configure an interactive IdP to manage user authentication. Options include:
Any unauthenticated user attempting to access the tenant is redirected to the configured interactive IdP for authentication, ensuring a streamlined and secure login experience.
Using a single interactive IdP is a best practice for identity management and ensures consistency, security, and simplified administration.
Key reasons include:
User Identity Consistency: Qlik Cloud relies on a user's subject and email as unique identifiers. Managing a single interactive IdP helps prevent duplicate identities and ensures seamless user access, reducing risk of users gaining unauthorized access to sensitive data or permissions.
Streamlined Identity & Access Management: Since Qlik Cloud does not transform incoming claims beyond remapping keys, keeping authentication centralized prevents unintended variances in usernames, email formats, or group names. This improves security and reduces maintenance of licenses and entitlements.
Optimized Group Management: A single interactive IdP provides a consistent structure for groups, ensuring they align with an organization’s access policies. By managing group filtering in one place, organizations can maintain clear and structured permissions. Managing groups across multiple IdPs can quickly become unmanageable, leading to inconsistencies in user access.
Simplified Access Control: Groups in Qlik Cloud are referenced by name, making it more efficient to manage access through a single federated IdP rather than multiple sources.
Efficient Token Management: A unified IdP helps maintain consistency in authentication tokens, reducing administrative overhead and ensuring a smooth user experience.
Enhanced Security & Auditability: By centralizing authentication through a single IdP, organizations can apply security controls, enforce device policies, and monitor user access through audit logs.
A federated IdP ensures that organizations retain full control over authentication policies, while providing a seamless experience for users accessing Qlik Cloud.
Many organizations choose to use a federated identity provider to streamline identity management, enhance security, and improve user experience across multiple applications. Benefits include:
Centralized User Lifecycle Management: Users from different sources can be managed in a single system, reducing duplication and inconsistencies.
Improved Security Policies: Organizations can enforce multi-factor authentication (MFA), conditional access policies, and device trust settings at the IdP level.
Single Sign-On (SSO) Across Applications: Users authenticate once and gain seamless access to multiple platforms, including Qlik Cloud.
Comprehensive Logging & Compliance: A federated IdP provides consolidated audit trails and governance controls for user authentication.
By implementing a federated identity provider, organizations can maintain flexibility in their authentication strategy while ensuring compatibility with Qlik Cloud.
The recommended approach for organizations that need to authenticate users across multiple identity sources is to configure a federated IdP that consolidates authentication. Solutions like Azure Entra ID or Okta can be used to unify identity management and connect to Qlik Cloud via OIDC or SAML.
Set Up a Federated IdP (Azure Entra ID, Okta, or another identity management solution).
Sync Identity Sources within the federated IdP to ensure unique identities across different user groups.
Configure OIDC/SAML Authentication in Qlik Cloud with the federated IdP.
This approach ensures a secure, efficient, and scalable authentication strategy that aligns with best practices for enterprise identity management.
Qlik Cloud is designed to integrate seamlessly with a single interactive IdP, providing a robust and secure authentication framework. Organizations that need to consolidate multiple identity sources can achieve this through a federated IdP, ensuring centralized management, improved security, and a streamlined user experience. By leveraging enterprise-grade IdPs like Azure Entra ID or Okta, organizations can enhance their identity management strategy while maintaining full control over authentication policies and governance.
Environment
When using SAML or ticket authentication in Qlik Sense, some users belonging to a large number of groups see the error 431 Request Header Fields Too Large on the hub and cannot proceed further.
The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
The default setting will still be a header size of 8192 bytes. The fix adds support for a configurable MaxHttpHeaderSize.
Steps:
[globals]
LogPath="${ALLUSERSPROFILE}\Qlik\Sense\Log"
(...)
MaxHttpHeaderSize=16384
Note: The above value (16384) is an example. You may need a higher value depending on the total number of characters across all the AD groups to which the user belongs. The maximum value is 65534.
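As a rough way to reason about the value (the group count and average name length below are purely illustrative assumptions):

```shell
# If a user belongs to 300 AD groups averaging 40 characters each, the
# group names alone approach ~12 KB, well over the 8192-byte default,
# so a MaxHttpHeaderSize of 16384 would be a reasonable starting point.
groups=300
avg_name_len=40
approx_bytes=$((groups * avg_name_len))
echo "approximate group payload: ${approx_bytes} bytes (default limit: 8192)"
```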
Qlik Sense Enterprise on Windows
Can deleted Qlik Cloud apps or sheets be recovered? Can apps, sheets, or automations owned by a deleted user be recovered?
Once an app or sheet has been deleted it cannot be recovered.
What about deleted users and their orphaned objects?
Orphaned apps (apps whose owner has been deleted) can be reassigned. See Qlik Cloud Analytics: Can the owner of orphaned Apps be changed after the owner is deleted? for instructions.
For more information about what to consider before deleting a user, see Deleting users.
The following cannot be recovered:
Qlik suggests the following techniques to ensure no work is lost:
Qlik has previously received feedback requesting a recovery feature: Recycle bin for SaaS editions of Qlik Sense (#279017) (Log in to Qlik Ideation to leave comments and vote.)
Content
The environment demonstrated in this article consists of one Central Node and two Worker Nodes. Worker 1 is a Consumption node where both Development and Production apps are allowed. Worker 2 is a dedicated Scheduler Worker node to which all reloads will be directed. The Central Node acts as a Scheduler Manager.
The Zabbix Monitoring appliance can be downloaded and configured in a number of ways, including direct install on a Linux server, OVF templates and self-hosting via Docker or Kubernetes. In this example we will be using Docker. We assume you have a working docker engine running on a server or your local machine. Docker Desktop is a great way to experiment with these images and evaluate whether Zabbix fits in your organisation.
The cloned repository includes all necessary files to get started, including Docker Compose stack definitions supporting different base images, features, and databases, such as MySQL or PostgreSQL. In our example, we will invoke one of the existing Docker Compose files, which uses PostgreSQL as the database engine.
Source: https://www.zabbix.com/documentation/current/en/manual/installation/containers#docker-compose
git clone https://github.com/zabbix/zabbix-docker.git
Here you can modify environment variables as needed, to change things like the Stack / Composition name, default ports and many other settings supported by Zabbix.
cd ./zabbix-docker/env_vars
ls -la #to list all hidden files (.dotfiles)
nano .env_web
In this file, we will change the value for ZBX_SERVER_NAME to something else, like "Qlik STT - Monitoring". Save the changes and we are ready to start up Zabbix Server.
The ./zabbix-docker folder contains many different Docker Compose templates, either using public images or locally built ones (latest and local tags).
You can run your chosen base image and database version with:
docker compose -f compose-file.yaml up -d && docker compose logs -f --since 1m
Or unlink and re-create the symbolic link to compose.yaml, which enables managing the stack without specifying a compose file. Run the following commands inside the zabbix-docker folder to use the latest Ubuntu-based image with a PostgreSQL database:
unlink compose.yaml
ln -s ./docker-compose_v3_ubuntu_pgsql_latest.yaml compose.yaml
docker compose up -d
If you skip the -d flag, the Docker stack will start and your command line will be connected to the log output for all containers. The stack will stop if you exit this mode with CTRL+C or by closing the terminal session. Detached mode runs the stack in the background. You can still connect to the live log output, pull logs from history, manage the stack state, or tear it down using docker compose down.
Pro tip: you will be using docker compose commands often when working with Docker. In most shells you can create a short-hand alias, such as alias dc="docker compose". The alias still accepts all following verbs, such as start|stop|restart|up|down|logs, and all flags: docker compose up -d && docker compose logs -f --since 1m would become dc up -d && dc logs -f --since 1m.
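The alias tip above can also be written as a shell function, which (unlike a plain alias) also works inside scripts:

```shell
# Short-hand wrapper for docker compose; forwards all verbs and flags.
dc() { docker compose "$@"; }

# Example usage (requires a running Docker engine):
# dc up -d && dc logs -f --since 1m
```

A function is the safer choice for automation, since most shells do not expand aliases in non-interactive mode.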
Use the IP address of your Docker host: http://IPADDRESS or https://IPADDRESS.
The Zabbix server stack can be hosted behind a Reverse Proxy.
The default username is Admin and the default password is zabbix. Both are case sensitive.
Download link: https://www.zabbix.com/download_agents, in this case download the Windows installer MSI.
After the Agent is installed, go to Data Collection > Hosts in Zabbix and click Create host in the top right-hand corner. Provide details like the hostname and port to connect to the Agent, a display name, and adjust any other parameters. You can group clusters with Host groups, which makes navigating Zabbix easier.
Note: Remember to configure how Zabbix Server will connect to the Agent on this node, either by IP address or DNS name; the default IP address points to the Zabbix Server itself.
In the Zabbix Web GUI, navigate to Data Collection > Templates and click on the Import button in the top right-hand corner. You can find the templates file at the following download link:
LINK to zabbix templates
Once you have added all your hosts to the Data Collection section, you can link all Qlik Sense servers in a cluster using the same templates. Zabbix will automatically populate metrics where these performance counters are found. From Data Collection > Hosts, select all your Qlik Sense servers and click "Mass update". In the dialog that comes up, select the "Link templates" checkbox. Here you can link/replace/unlink templates across many servers in bulk.
Select "Link" and click on the "Select" button. This new panel will let us search for Template groups and make linking a bit easier. The Template Group we provided contains 4 individual templates.
Fig 2: Mass update panel
Fig 3: Search for Template Group
Once you click Select and then Update on the main panel, all selected Hosts will receive all items contained in the templates and populate all graphs and Dashboards automatically.
To review your data, navigate to Monitoring > Hosts and click the "Dashboards" or "Graphs" link for any node. Here is the default view when all Qlik Sense templates are linked to a node:
Fig 5: Repository Service metrics - Example
We will query the Engine Healthcheck endpoint on QlikServer3 (our consumer node) and extract usage metrics by parsing the JSON output.
We will be using a new Anonymous Access Virtual Proxy set up on each node. This Virtual Proxy will only load balance to the node it represents, ensuring we extract meaningful metrics from that node's Engine and are not load balanced by the Proxy service across multiple nodes. Otherwise, there would be no way to determine which node is responding without looking at DevTools in your browser. You can also use Header or Certificate authentication in the HTTP Agent configuration.
Once the Virtual Proxy is configured with Anonymous Only access, we can use this new prefix to configure our HTTP Agent in Zabbix.
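The healthcheck endpoint returns a single JSON document per request. Here is a minimal sketch of the parsing step, using a trimmed sample payload; the field names follow the Engine healthcheck schema, but the values are made up and the URL prefix is an assumption from this setup:

```python
import json

# Trimmed sample of what https://<server>/<vp-prefix>/engine/healthcheck
# returns; values here are made up for illustration.
sample = """{
  "version": "12.2050.0",
  "mem": {"committed": 1024.5, "allocated": 900.0, "free": 7000.0},
  "cpu": {"total": 12},
  "session": {"active": 3, "total": 5},
  "saturated": false
}"""

def extract_metrics(payload):
    """Pull the counters that the Zabbix JSONPath preprocessing steps read."""
    data = json.loads(payload)
    return {
        "committed_mem": data["mem"]["committed"],
        "cpu_total": data["cpu"]["total"],
        "active_sessions": data["session"]["active"],
        "saturated": data["saturated"],
    }

print(extract_metrics(sample))
```

The HTTP Agent item in Zabbix performs the equivalent extraction with JSONPath preprocessing steps (e.g. `$.session.active`), so no scripting is needed on the monitored host.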
In the Zabbix web GUI, go to Data collection > Hosts and click on any of your hosts. On the tabs at the top of the pop-up, click Macros, then click the "Inherited and host macros" button. Once the list has loaded, search for the following macro: {$VP_PREFIX}. This is set by default to "anon". Click "Change", set the macro value to your custom Virtual Proxy prefix for Engine diagnostics, and click Update. The Virtual Proxy prefix will have to be changed on each node for the "Engine Performance via HTTP Agent" item to work. Alternatively, you can modify the macro value on the Template, which replicates the change across all nodes associated with that Template.
Fig 6: Changing Host Macros from Inherited values
To make this change at the Template level, go to Data collection > Templates. Search for the "Engine Performance via HTTP Agent" and click on the Template. Navigate to the Macros tab in the pop-up and add your Virtual Proxy Prefix here to make this the new default for your environment. No further changes to Node configuration are required at this point.
Fig 7: Changing Macros at the Template level
The Zabbix templates provided in this article contain the following Engine metric JSONParsers:
These are the same performance counters that you can see in the Engine Health section in QMC.
Stay tuned for new releases of the Monitoring Templates. Feel free to customise these to your needs and share with the Community.
Environment
Some script logs are not transferred to the Archived logs folder.
Storage Locations:
Qlik Sense log folder:
C:\ProgramData\Qlik\Sense\Log\Script
Log names consist of the App ID and a year, date, and timestamp.
Archived logs folder:
\\SERVICECLUSTER-SHARE\ArchivedLogs\NodeName\Script
Log names consist of the App ID and a year, date, and timestamp.
By design, not all script logs are transferred to the Archived logs folder. Reloads executed from the hub are not transferred.
1. When reloading data in an app via the hub, the script log remains in the Script log folder C:\ProgramData\Qlik\Sense\Log\Script
2. When reloading data via a QMC task, the script log is transferred to the Archived logs folder \\SERVICECLUSTER-SHARE\ArchivedLogs\NodeName\Script
Question
When using the unattended installer to install Talend on-prem products, such as TAC (Talend Administration Center), there is no visible log. Where can you find the log, and how can you determine if the products have been successfully installed?
The installation log is located in /tmp for Linux and %TEMP% for Windows. The file name will be "installbuilderinstaller.log". To locate the correct install log, please refer to the timestamp at the end, as multiple installation logs may exist in the temp directory.
To confirm successful installation, it is best to inspect the installation folder. Upon successful completion, a file named uninstall (for Linux) or uninstall.exe (for Windows) will be generated at the root of the installation folder. With default settings, the file will be located at /opt/Talend-8.0.1/uninstall or C:\Talend\8.0.1\uninstall.exe.
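The two checks above can be sketched as a small shell helper. The function name and paths are illustrative, using the Linux defaults from this article:

```shell
# Hedged helper: show the newest installer log in a temp directory and
# report whether the uninstall file that signals a successful install exists.
check_talend_install() {
  tmpdir="$1"; installdir="$2"
  # Newest installbuilderinstaller log, if any
  ls -t "$tmpdir"/installbuilderinstaller*.log 2>/dev/null | head -n 1
  # uninstall / uninstall.exe is only created on successful completion
  if [ -f "$installdir/uninstall" ] || [ -f "$installdir/uninstall.exe" ]; then
    echo "install-ok"
  else
    echo "install-missing"
  fi
}

# Typical Linux defaults from the article:
check_talend_install /tmp /opt/Talend-8.0.1
```

On Windows, the equivalent check would look in %TEMP% and for uninstall.exe under C:\Talend\8.0.1.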
It is not possible to set up a Microsoft Office 365 email provider with OAuth 2.0 authentication.
The HAR file shows this message in Network:
{connectionFailed: true, message: "Error during email request", success: false}
connectionFailed: true
message: "Error during email request"
success: false
Configure the Mail.Send permission as described in Configuring a Microsoft 365 email provider using OAuth2.
This problem occurs when the Mail.Send permission has not been configured in the app registration.
More information about the Mail.Send permission can be found in Application permission to Microsoft Graph (learn.microsoft.com).
Information about app and storage size for Qlik Cloud for Qlik Sense Enterprise SaaS and Qlik Sense Business can be found in Qlik Sense capacity.
Note: An app on disk is typically 4 to 6 times bigger in memory. This is a rule of thumb; exceptions can and do occur, and an app within the "on-disk" size limit may still balloon past the "in memory" limit.
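Applying the 4-6x rule of thumb, a quick sketch of the arithmetic (the function is illustrative; actual memory use depends on the data model):

```python
# Rough RAM estimate for an app using the 4-6x on-disk rule of thumb.
# This is only a heuristic; actual memory use depends on the data model.
def memory_estimate(on_disk_gb, low=4, high=6):
    return on_disk_gb * low, on_disk_gb * high

lo, hi = memory_estimate(0.5)   # a 0.5 GB app on disk...
print(lo, hi)                   # ...may need roughly 2.0 to 3.0 GB in memory
```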
App Objects have a memory limit of 10 GB, except when run on dedicated capacity.
If you are looking to compare other features between Qlik Sense Business and Qlik Sense Enterprise SaaS, see Compare product features.
For more information about capacity, see:
Qlik Sense specifications and capacity
Pricing | Qlik Sense
This can help during troubleshooting to find out which app/user was active and accessed an application right before an error or issue occurred in the environment.
The entries below are color coded for the name of the app in the Hub/QMC, along with the App ID and Session ID.
Note: The Session ID will change for different users and different browsers; closing and reopening the browser starts a new session (the App ID stays the same). In this test there is only one session and two accesses of the app, but a user could open the app in another browser, which would show as the same user in two different sessions for the same app at the same time.
Environment:
Qlik Sense Enterprise on Windows, version 3.2.1 and newer, single node
Windows 2012 R2
App Name: Load Test APP ID
Published App ID: c3ed4e49-dc98-4ff8-b140-f3d945236197 - Load Test APP ID
Session ID: fe87b122-8979-45d4-9c37-3c39ce704045 - Load Test APP ID
C:\ProgramData\Qlik\Sense\Log\Engine\Trace\QLIKSERVER1_System_Engine.txt
First Open (loaded into memory):
16 20170321T215743.565+0100 INFO QlikServer1 System.Engine.Engine 46 6f40f67b-47aa-406f-bb78-947e90cd9ff3 DOMAIN\qvservice QvSocket: Connected to server as user: 'UserDirectory=DOMAIN; UserId=qvservice' for document: '/app/c3ed4e49-dc98-4ff8-b140-f3d945236197' on socket fe80::743e:6e47:b9f:acb5:4747 <-> fe80::743e:6e47:b9f:acb5:62282 0 Internal Engine 20170321T215743.566+0100 3136 3176 20170321T215516.000+0100 6f40f67b-47aa-406f-bb78-947e90cd9ff3
17 20170321T215743.573+0100 INFO QlikServer1 System.Engine.Engine 46 31b65464-36b3-48c1-8dc8-f7b5c2ecbb9b DOMAIN\qvservice Server: Document Load: Beginning open of document fe87b122-8979-45d4-9c37-3c39ce704045 DOMAIN qvservice 20170321T215743.573+0100 3132 3176 20170321T215516.000+0100 31b65464-36b3-48c1-8dc8-f7b5c2ecbb9b
18 20170321T215743.597+0100 INFO QlikServer1 System.Engine.Engine 46 610de372-ad42-4975-863b-93f5d9fbcbec DOMAIN\qvservice DOC loading: Beginning load of document C3ED4E49-DC98-4FF8-B140-F3D945236197. fe87b122-8979-45d4-9c37-3c39ce704045 DOMAIN qvservice 20170321T215743.597+0100 3756 3176 20170321T215516.000+0100 610de372-ad42-4975-863b-93f5d9fbcbec
19 20170321T215744.640+0100 INFO QlikServer1 System.Engine.Engine 46 f15e84d0-f8d9-44d1-a8e1-3095b41e4a0d DOMAIN\qvservice Document Load: The document c3ed4e49-dc98-4ff8-b140-f3d945236197 was loaded. fe87b122-8979-45d4-9c37-3c39ce704045 DOMAIN qvservice 20170321T215744.641+0100 4468 3176 20170321T215516.000+0100 f15e84d0-f8d9-44d1-a8e1-3095b41e4a0d
Second Open (already loaded in memory):
20 20170321T220052.937+0100 INFO QlikServer1 System.Engine.Engine 47 34b26c68-67e8-4be1-9439-6026c0c59f9e DOMAIN\qvservice QvSocket: Connected to server as user: 'UserDirectory=DOMAIN; UserId=qvservice' for document: '/app/c3ed4e49-dc98-4ff8-b140-f3d945236197' on socket fe80::743e:6e47:b9f:acb5:4747 <-> fe80::743e:6e47:b9f:acb5:62296 0 Internal Engine 20170321T220052.937+0100 3136 3176 20170321T215516.000+0100 34b26c68-67e8-4be1-9439-6026c0c59f9e
21 20170321T220052.960+0100 INFO QlikServer1 System.Engine.Engine 47 cc981161-bbfd-42ec-aa5c-b512e0602283 DOMAIN\qvservice Server: Document Load: Beginning open of document fe87b122-8979-45d4-9c37-3c39ce704045 DOMAIN qvservice 20170321T220052.960+0100 3132 3176 20170321T215516.000+0100 cc981161-bbfd-42ec-aa5c-b512e0602283
Notice that the first load states which Session ID initiated the load into memory. The second attempt only gives the App ID in the first log line; the Session ID appears in a subsequent entry. While these entries should follow each other directly, they may be broken up by other entries depending on what else is happening. This is unlikely, but it does make a heavily trafficked environment more difficult to troubleshoot without access to the Proxy logs.
Note: The First Open is only triggered when the QVF is not in memory, so the initial load in memory might be in another log file depending on when the log files are archived.
C:\ProgramData\Qlik\Sense\Log\Proxy\Audit\QLIKSERVER1_AuditSecurity_Proxy.txt
First Open (loaded into memory):
3 11.11.0.0 20170321T215732.019+0100 QlikServer1 738f0e4c-85bc-44ab-accb-18825ec407a5 Command=Login;Result=0;ResultText=Success fe87b122-8979-45d4-9c37-3c39ce704045 c8d0eb48-fe7a-47da-bbca-0eab11340b92 0 DOMAIN qvservice 0 Not available Security ::ffff:172.16.16.100 Proxy AppAccess /hub/stream/e4f1e040-75cf-4db0-afbd-baaf90cab8f8 Login 0 User authenticated. User 'DOMAIN\qvservice' used authentication method 'ticket' and got session 'fe87b122-8979-45d4-9c37-3c39ce704045' 1dc02c31e1779346144cbcc4b75a81a55f466159
4 11.11.0.0 20170321T215742.970+0100 QlikServer1 97ab24cb-edb8-4930-a343-69e044ec7dd0 Command=Add app privileges;Result=0;ResultText=Success fe87b122-8979-45d4-9c37-3c39ce704045 0e6278ad-ffff-4d6c-9d66-af84a1596d78 0 DOMAIN qvservice c3ed4e49-dc98-4ff8-b140-f3d945236197 Not available Security ::ffff:172.16.16.100 Proxy AppAccess /app/c3ed4e49-dc98-4ff8-b140-f3d945236197?reloaduri=http%3a%2f%2fqlikserver1.domain.local%2fsense%2fapp%2fc3ed4e49-dc98-4ff8-b140-f3d945236197 Add app privileges:ProcessRequestAccessResult 0 Access to app 'c3ed4e49-dc98-4ff8-b140-f3d945236197' allowed with access type 'UserAccessType', result code 'Ok' 94d89f25a8faad2ea59837ded7b6a1603a4607da
Second Open (already loaded in memory):
5 11.11.0.0 20170321T220052.910+0100 QlikServer1 881a8f23-f3a2-4f2b-ae45-969dc9c0c2e3 Command=Add app privileges;Result=0;ResultText=Success fe87b122-8979-45d4-9c37-3c39ce704045 a057e326-6123-406a-8ee2-e82a46efe367 0 DOMAIN qvservice c3ed4e49-dc98-4ff8-b140-f3d945236197 Not available Security ::ffff:172.16.16.100 Proxy AppAccess /app/c3ed4e49-dc98-4ff8-b140-f3d945236197?reloaduri=http%3a%2f%2fqlikserver1.domain.local%2fsense%2fapp%2fc3ed4e49-dc98-4ff8-b140-f3d945236197 Add app privileges:ProcessRequestAccessResult 0 Access to app 'c3ed4e49-dc98-4ff8-b140-f3d945236197' allowed with access type 'UserAccessType', result code 'Ok' 0649a45ad43ccfcf97b82a1cd8d14dd2fa0cfe3b
6 11.11.0.0 20170321T220553.102+0100 QlikServer1 a5462a59-cf10-494e-b021-fc1c26bad82c Command=Add app privileges;Result=0;ResultText=Success fe87b122-8979-45d4-9c37-3c39ce704045 a057e326-6123-406a-8ee2-e82a46efe367 0 DOMAIN qvservice c3ed4e49-dc98-4ff8-b140-f3d945236197 Not available Security ::ffff:172.16.16.100 Proxy AppAccess /app/c3ed4e49-dc98-4ff8-b140-f3d945236197?reloaduri=http%3a%2f%2fqlikserver1.domain.local%2fsense%2fapp%2fc3ed4e49-dc98-4ff8-b140-f3d945236197 Add app privileges:ProcessRequestAccessResult 0 Access to app 'c3ed4e49-dc98-4ff8-b140-f3d945236197' allowed with access type 'UserAccessType', result code 'Ok' 11c238cbc3edf8e921ebf0e0d8f71b199497a880
There are a few logs listed here that give other information along with the Session and App ID, but the Proxy logs let us verify which Session ID links to which App ID. This can point you to a time, user, and app, giving you an idea of which users/apps were active and open at the time.
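To correlate entries programmatically, you can key off the GUID shapes in the log text rather than column positions. A hedged sketch (the regex and function name are illustrative):

```python
import re

# GUID pattern as it appears in the Qlik Sense log entries above.
GUID = r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"

def session_and_app(line):
    """Extract (session_id, app_id) from a proxy 'Open connection' entry."""
    m = re.search(rf"session '({GUID})'\. App id: '({GUID})", line)
    return (m.group(1), m.group(2)) if m else None

# Shortened form of the 'Open connection' message from the proxy audit log:
line = ("Backend web socket connection Opened for session "
        "'fe87b122-8979-45d4-9c37-3c39ce704045'. App id: "
        "'c3ed4e49-dc98-4ff8-b140-f3d945236197'.")
print(session_and_app(line))
```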
C:\ProgramData\Qlik\Sense\Log\Engine\Audit\QLIKSERVER1_AuditActivity_Engine.txt
First Open (loaded into memory):
1 12.16.1.0 20170321T215745.019+0100 QlikServer1 48345a91-0cef-4f54-adce-e1137ab27acd 20170321T215745.007+0100 12.2.2.0 Command=Open app;Result=0;ResultText=Success fe87b122-8979-45d4-9c37-3c39ce704045 0e6278ad-ffff-4d6c-9d66-af84a1596d78 1 DOMAIN qvservice c3ed4e49-dc98-4ff8-b140-f3d945236197 Not available Engine Not available Global::OpenApp Open app 0 Success 48345a91-0cef-4f54-adce-e1137ab27acd
Second Open (already loaded in memory):
2 12.16.1.0 20170321T220053.193+0100 QlikServer1 ea7e3197-3651-4a87-9c56-a4d51872f01f 20170321T220053.193+0100 12.2.2.0 Command=Open app;Result=0;ResultText=Success fe87b122-8979-45d4-9c37-3c39ce704045 a057e326-6123-406a-8ee2-e82a46efe367 1 DOMAIN qvservice c3ed4e49-dc98-4ff8-b140-f3d945236197 Not available Engine Not available Global::OpenApp Open app 0 Success ea7e3197-3651-4a87-9c56-a4d51872f01f
This log gives the information on the app being opened with its Session ID.
C:\ProgramData\Qlik\Sense\Log\Engine\Trace\QLIKSERVER1_Session_Engine.txt
First Open (Closed browser tab):
1 20170321T220016.240+0100 INFO QlikServer1 Session.Engine.Engine 53 6f637615-237c-4716-87d1-4a3f06ed81be DOMAIN\qvservice fe87b122-8979-45d4-9c37-3c39ce704045 DOMAIN qvservice 20170321T220016.238+0100 3132 3176 12.2.50504.0409.10 20170321T215516.000+0100 0e6278ad-ffff-4d6c-9d66-af84a1596d78 c3ed4e49-dc98-4ff8-b140-f3d945236197 Load Test APP ID 20170320T195542.497Z Socket closed by client 20170321T215743.000+0100 0.001771 0.006285 1902 5619 13 0 UserDirectory=DOMAIN; UserId=qvservice Off 6f637615-237c-4716-87d1-4a3f06ed81be
This has one entry for when the tab in the browser was closed (the actual Session ID was maintained since the browser was still active). You can see by the timestamp that 20170321T220016 is before the Second Open at 20170321T220053. If you had closed the Second Open session, you would see a similar entry.
C:\ProgramData\Qlik\Sense\Log\Proxy\Audit\QLIKSERVER1_AuditActivity_Proxy.txt
First Open (No Session established until now):
1 11.11.0.0 20170321T215731.469+0100 QlikServer1 201ee217-3713-44cb-9277-d374799a84fb Command=Get ticket;Result=0;ResultText=Success 0 0 0 INTERNAL sa_proxy 0 Not available Proxy Not available /hub/stream/e4f1e040-75cf-4db0-afbd-baaf90cab8f8 Get ticket:ConsumeIncomingTicket 0 User claimed ticket: 'ewXLiJDk6xM1Q0TH'. Session will be etablished 201ee217-3713-44cb-9277-d374799a84fb
2 11.11.0.0 20170321T215731.512+0100 QlikServer1 7042d186-3f1d-4759-9d87-663b73522766 Command=Start session;Result=0;ResultText=Success fe87b122-8979-45d4-9c37-3c39ce704045 0 0 DOMAIN qvservice 0 DOMAIN\qvservice Proxy Not available /qps/sessionstart/qvservice Start session 0 Start session for user: 'DOMAIN\qvservice' 7042d186-3f1d-4759-9d87-663b73522766
3 11.11.0.0 20170321T215731.892+0100 QlikServer1 2e9f0d03-aabe-403a-b282-ec01a47283a0 Command=Add session;Result=0;ResultText=Success fe87b122-8979-45d4-9c37-3c39ce704045 c8d0eb48-fe7a-47da-bbca-0eab11340b92 0 DOMAIN qvservice 0 Not available Proxy Not available /hub/stream/e4f1e040-75cf-4db0-afbd-baaf90cab8f8 Add session:NotifyOfPostSession 0 Syncing user attributes for user 'DOMAIN\qvservice' 2e9f0d03-aabe-403a-b282-ec01a47283a0
4 11.11.0.0 20170321T215736.774+0100 QlikServer1 4f3ec3ce-604b-4fad-930c-6c5d3c417112 Command=Get session;Result=200;ResultText=Success fe87b122-8979-45d4-9c37-3c39ce704045 bcce5b5b-a2e5-455c-9a36-f3fe076b03e3 0 DOMAIN qvservice 0 Not available Proxy AppAccess /qps/user?targeturi=http:%2f%2fqlikserver1.domain.local%2fhub%2f Get session:HandleRequest 200 Handle connection request 4f3ec3ce-604b-4fad-930c-6c5d3c417112
5 11.11.0.0 20170321T215743.567+0100 QlikServer1 1fcb5c5c-15f5-40a6-bd54-0819643cd197 Command=Open connection;Result=0;ResultText=Success fe87b122-8979-45d4-9c37-3c39ce704045 0e6278ad-ffff-4d6c-9d66-af84a1596d78 0 DOMAIN qvservice c3ed4e49-dc98-4ff8-b140-f3d945236197 Not available Proxy AppAccess /app/c3ed4e49-dc98-4ff8-b140-f3d945236197?reloaduri=http%3a%2f%2fqlikserver1.domain.local%2fsense%2fapp%2fc3ed4e49-dc98-4ff8-b140-f3d945236197 Open connection 0 Backend web socket connection Opened for session 'fe87b122-8979-45d4-9c37-3c39ce704045'. App id: 'c3ed4e49-dc98-4ff8-b140-f3d945236197'. UserAgent: 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36' 1fcb5c5c-15f5-40a6-bd54-0819643cd197
First Open (Closed tab)
6 11.11.0.0 20170321T220016.242+0100 QlikServer1 12a1b6ff-d000-4f56-a58c-8ca1bd36a567 Command=Close connection;Result=0;ResultText=Success fe87b122-8979-45d4-9c37-3c39ce704045 0e6278ad-ffff-4d6c-9d66-af84a1596d78 0 DOMAIN qvservice c3ed4e49-dc98-4ff8-b140-f3d945236197 Not available Proxy AppAccess /app/c3ed4e49-dc98-4ff8-b140-f3d945236197?reloaduri=http%3a%2f%2fqlikserver1.domain.local%2fsense%2fapp%2fc3ed4e49-dc98-4ff8-b140-f3d945236197 Close connection 0 Backend web socket connection Closed for session 'fe87b122-8979-45d4-9c37-3c39ce704045'. App id: 'c3ed4e49-dc98-4ff8-b140-f3d945236197. UserAgent: 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36' 12a1b6ff-d000-4f56-a58c-8ca1bd36a567
Second Open:
7 11.11.0.0 20170321T220052.937+0100 QlikServer1 f1f1350f-0b00-4cf1-b30c-51ddbfecb6bb Command=Open connection;Result=0;ResultText=Success fe87b122-8979-45d4-9c37-3c39ce704045 a057e326-6123-406a-8ee2-e82a46efe367 0 DOMAIN qvservice c3ed4e49-dc98-4ff8-b140-f3d945236197 Not available Proxy AppAccess /app/c3ed4e49-dc98-4ff8-b140-f3d945236197?reloaduri=http%3a%2f%2fqlikserver1.domain.local%2fsense%2fapp%2fc3ed4e49-dc98-4ff8-b140-f3d945236197 Open connection 0 Backend web socket connection Opened for session 'fe87b122-8979-45d4-9c37-3c39ce704045'. App id: 'c3ed4e49-dc98-4ff8-b140-f3d945236197'. UserAgent: 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36' f1f1350f-0b00-4cf1-b30c-51ddbfecb6bb
Notice in this log that the user gets a Ticket and a Session in the first open, which establishes the Session ID. You can then see the web socket connection to the application being closed for that session and then reopened right after.
Note: The syncing of user attributes seen in the First Open (when the ticket is claimed) only happens at the start of the session and will not be triggered again until a new session is established.
C:\ProgramData\Qlik\Sense\Log\Proxy\Trace\QLIKSERVER1_Audit_Proxy.txt
First Open (loaded into memory):
1 20170321T215731.134+0100 INFO QlikServer1 Audit.Proxy.Proxy.SessionEstablishment.Authentication.AuthenticationHandler 14 384a5be2-8c1f-4569-8b17-a6fa747b8f2f DOMAIN\qvservice Session 'a0ee67d2-581b-47db-9514-443ac4a008b1' is invalid (possibly timed out or logged out) 0 45b4435a-3e05-47c4-b78b-0e5311154a91 ::ffff:172.16.16.100 {} 1650cb4421d90c3f0e3af23e1f5e2c8176219ad8
2 20170321T215731.188+0100 INFO QlikServer1 Audit.Proxy.Proxy.SessionEstablishment.RedirectionHandler 14 17bed8a0-b6ec-4316-9dfd-ff062b4b1c79 DOMAIN\qvservice Authentication required, redirecting client@http://[::ffff:172.16.16.100]:62230/ to http://qlikserver1.domain.local:4248/windows_authentication/?targetId=6b07f3fb-5d9e-462f-b424-bb389054e645 0 45b4435a-3e05-47c4-b78b-0e5311154a91 ::ffff:172.16.16.100 {} 8da259cac80248a0b1258dffa22a0bc5585c668b
3 20170321T215731.444+0100 INFO QlikServer1 Audit.Proxy.Proxy.SessionEstablishment.Authentication.TicketValidator 14 ab854bb8-5c99-4a1c-8b76-00a443e94684 DOMAIN\qvservice Issued ticket 'ewXLiJDk6xM1Q0TH' for user, valid for 1 minute(s) 0 DOMAIN qvservice ewXLiJDk6xM1Q0TH 7f97ce8a8282eb9ed7aa0b61722b2504241e0e09
4 20170321T215731.470+0100 INFO QlikServer1 Audit.Proxy.Proxy.SessionEstablishment.Authentication.AuthenticationHandler 14 f27f9ebc-4ad4-4d5a-968d-13e910e1efab DOMAIN\qvservice User claimed ticket: 'ewXLiJDk6xM1Q0TH' 0 583a2b2e-5aeb-4292-9d02-d1c4e5a1cd0b DOMAIN qvservice ewXLiJDk6xM1Q0TH ::ffff:172.16.16.100 {} e025a74e13bb4c759fe5ad99787ad5589381b7bf
5 20170321T215731.891+0100 INFO QlikServer1 Audit.Proxy.Proxy.DefaultModules.Session.SessionClientHandler 8 8dfacde2-3963-432b-af75-22d5122b5fea DOMAIN\qvservice Syncing user attributes for user DOMAIN\qvservice was successful fe87b122-8979-45d4-9c37-3c39ce704045 583a2b2e-5aeb-4292-9d02-d1c4e5a1cd0b DOMAIN qvservice ewXLiJDk6xM1Q0TH ::ffff:172.16.16.100 {} df162a32bfed4ae4c37531b6f1ad4e7cb18b708e
6 20170321T215732.018+0100 INFO QlikServer1 Audit.Proxy.Proxy.SessionEstablishment.Authentication.AuthenticationHandler 14 711c200a-60e9-457a-be6c-89f1ce11bf82 DOMAIN\qvservice User 'qvservice' used 'ticket' authentication, got session: 'fe87b122-8979-45d4-9c37-3c39ce704045' fe87b122-8979-45d4-9c37-3c39ce704045 583a2b2e-5aeb-4292-9d02-d1c4e5a1cd0b DOMAIN qvservice ewXLiJDk6xM1Q0TH ::ffff:172.16.16.100 {} 7fa218f968005afa6c0e635058e1e6c290e90d76
7 20170321T215735.128+0100 INFO QlikServer1 Audit.Proxy.Proxy.SessionEstablishment.LoadBalancingHandlerDependencies 8 11c49eb7-4ce8-45be-98d1-76a10f34562c DOMAIN\qvservice Retrieved 1 engine(s) from repository, result code Ok, app: __hub fe87b122-8979-45d4-9c37-3c39ce704045 355bcda7-26f1-4a41-94d5-c5c88499abb8 DOMAIN qvservice ::ffff:172.16.16.100 qlikserver1.domain.local:4242 {} 7b0262f7f34c583b38c13639692a495540079d74
8 20170321T215735.144+0100 INFO QlikServer1 Audit.Proxy.Proxy.SessionEstablishment.LoadBalancingHandlerDependencies 8 29022811-cbe3-4f9d-ba19-b3bb50f37e0e DOMAIN\qvservice Cached 1 prioritised Engine(s), app: __hub fe87b122-8979-45d4-9c37-3c39ce704045 355bcda7-26f1-4a41-94d5-c5c88499abb8 DOMAIN qvservice ::ffff:172.16.16.100 qlikserver1.domain.local:4242 {} 5ff601a6ada967d7d6c9990a1bddbe25e7bbc1cf
9 20170321T215735.186+0100 INFO QlikServer1 Audit.Proxy.Proxy.SessionEstablishment.LoadBalancingHandlerDependencies 24 dbf6cd5b-735e-4112-94c3-4db366bbe4b2 DOMAIN\qvservice New target uri chosen: https://qlikserver1.domain.local:4900/ , app: __hub fe87b122-8979-45d4-9c37-3c39ce704045 355bcda7-26f1-4a41-94d5-c5c88499abb8 DOMAIN qvservice ::ffff:172.16.16.100 qlikserver1.domain.local:4900 {} ba0e44b68b214c47fea575e93244c3c5f159c281
10 20170321T215743.535+0100 INFO QlikServer1 Audit.Proxy.Proxy.SessionEstablishment.LoadBalancingHandlerDependencies 8 7bcf1a03-6cfe-45ea-a06b-d1a780528be0 DOMAIN\qvservice Retrieved 1 engine(s) from repository, result code Ok, app: c3ed4e49-dc98-4ff8-b140-f3d945236197 fe87b122-8979-45d4-9c37-3c39ce704045 79705676-f6f7-4396-8771-9f9f2a57f711 DOMAIN qvservice ::ffff:172.16.16.100 c3ed4e49-dc98-4ff8-b140-f3d945236197 {} fe793ff3d692d1992e38f7be35c3a15d93ada65d
11 20170321T215743.535+0100 INFO QlikServer1 Audit.Proxy.Proxy.SessionEstablishment.LoadBalancingHandlerDependencies 8 425b5b86-579e-410f-8721-a816841bc16a DOMAIN\qvservice Cached 1 prioritised Engine(s), app: c3ed4e49-dc98-4ff8-b140-f3d945236197 fe87b122-8979-45d4-9c37-3c39ce704045 79705676-f6f7-4396-8771-9f9f2a57f711 DOMAIN qvservice ::ffff:172.16.16.100 c3ed4e49-dc98-4ff8-b140-f3d945236197 {} ab51d2ad92cef8fff57fb15b1d5b115eeabedecb
12 20170321T215743.563+0100 INFO QlikServer1 Audit.Proxy.Proxy.SessionEstablishment.LoadBalancingHandlerDependencies 24 4c8e93c3-b3c9-475b-aa03-b8290f196014 DOMAIN\qvservice New target uri chosen: wss://qlikserver1.domain.local:4747/, app: c3ed4e49-dc98-4ff8-b140-f3d945236197 fe87b122-8979-45d4-9c37-3c39ce704045 79705676-f6f7-4396-8771-9f9f2a57f711 DOMAIN qvservice ::ffff:172.16.16.100 c3ed4e49-dc98-4ff8-b140-f3d945236197 qlikserver1.domain.local:4747 {} f27c34ac0b7976a062ef3ea21d129acf451b0e65
This log only tracks the initial session from the first open. Since the session is already established for the user, further opens of the same app will not trigger a new entry for this Session/App ID.
Final note: Entries for new user sessions, for different App IDs, or for sessions that time out or are forcibly closed (e.g. by closing the browser) were not tested or reviewed.
What is legacy mode?
Legacy mode in Qlik Sense allows users who have access to the Data Load Editor to write load statements that reference file paths directly rather than going through LIB statements.
Simple example:
Standard Mode:
LOAD *
FROM [lib://data/data.csv]
(txt, codepage is 28591, embedded labels, delimiter is ',', msq);
Legacy Mode:
LOAD *
FROM C:\data\data.csv
(txt, codepage is 28591, embedded labels, delimiter is ',', msq);
What are the security considerations when using legacy mode?
Out of the box, for a user to do development in Qlik Sense, they need some combination of two types of access:
This security model is the intermediary between the user inside of Qlik Sense and the data sources.
When using legacy mode, any user who has access to the Data Load Editor can access the file system of the local Qlik Sense server, as well as any network file shares the Qlik Sense service account can reach, independent of the security rules configured on the Qlik Sense deployment. In this example, the user has access to AttachedFiles (which all users would have out of the box) as well as a Salesforce data connection, yet they are still able to load data.csv from the local file system:
This means that the traditional mechanism for ensuring governance and security over data access inside of the Data Load Editor no longer applies, and the user(s) will be accessing the files as the Qlik Sense service account.
What are the benefits of using Legacy Mode?
This is difficult to answer exhaustively, but generally the benefits are narrowly scoped to deployments where app development and self-service capabilities are disabled, or where this level of access is restricted to a vetted set of administrators. Some benefits include:
Is legacy mode recommended by Qlik?
For most deployments, no, legacy mode is not recommended. It can be useful in some deployment scenarios where the security concerns are mitigated.
How do I switch to Legacy mode / disable standard mode?
Disabling standard mode - Enabling Legacy Mode - Qlik Sense on Windows
Qlik Sense Legacy mode
This is a recording of a Support Techspert Thursdays session.
This session addresses:
- What the App Analyzer can do
- How to set it up
- How to improve app performance
- Troubleshooting issues
Chapters:
00:00 - Intro
01:26 - App Analyzer Overview
02:39 - When the App Analyzer is Useful
03:43 - SaaS Standard Tier Capabilities
06:23 - How Qlik Sense Stores Data
07:32 - App Analyzer Dashboard demo
08:58 - Example App Optimization
10:54 - Best Practice thresholds
12:02 - Threshold Analysis Sheet Demo
12:32 - App RAM to Quota Sheet Demo
13:19 - Example App
13:47 - App Analysis Sheet Demo
14:39 - Rolling Analysis Sheet Demo
16:28 - Reload Time
17:06 - Setting it up
18:11 - TIP: Relative Path
18:47 - Creating a REST Connection
20:34 - The right space to use
21:16 - Troubleshooting: SET ErrorMode
22:13 - Troubleshooting: API Key
23:29 - App Analyzer on Qlik Community
24:41 - Qlik Diagnostic Toolkit
25:41 - Qlik Sense Admin Playbook
26:13 - Q&A
App Analyzer on Qlik Community
STT - Optimizing Qlik Sense SaaS Apps with App Analyzer
Q&A:
Q: I noticed sheets supposedly in my app according to the App MetaData Analyser that are not part of my app. Is the presented data reliable?
A: Neither the App Analyzer for Qlik Sense Enterprise SaaS nor the App Metadata Analyzer for Qlik Sense Enterprise on Windows tracks sheets; they only gather data model and reload metadata. That said, there was a bug in the App Metadata Analyzer for Qlik Sense Enterprise on Windows that associated some fields/tables with the wrong apps; it has since been addressed. You can always grab the latest version of the App Metadata Analyzer for Windows here.
Q: Hi, super interesting stuff! Is the very same App Analyzer available to non-SaaS customers?
A: The application that the App Analyzer was modeled off of is the App Metadata Analyzer for Qlik Sense Enterprise on Windows that ships with the product. You can find the most recent version of that application here.
Q: The configuration of the REST Connector is throwing the error "cannot connect"... is there any setup needed on the tenant level?
A: Please refer to the installation guide for the App Analyzer and if you are still having issues, please post on the Qlik Community entry here.
Q: Can we set up Qlik Alerting to inform us as we get closer to thresholds?
A: Absolutely, yes. This is a great way to stay on top of monitoring your applications. In addition, you can also monitor one of the charts from the app in the Hub to have a quick look at where everything stands each time you log in to the tenant.
Hello everyone and welcome to the first edition of Support Techspert Thursdays for 2021.
My name's Troy Raney and I’ll be your host for today's session. Today's presentation is: Optimizing Qlik Sense SaaS Apps with the App Analyzer.
Our presenter today is one of the architects behind the App Analyzer: Daniel Pilla. Dan please tell us a little bit about yourself and what we're going to be talking about today.
Yeah, absolutely. Thanks Troy. So again, my name is Dan Pilla. I’m a principal analytics platform architect; previously enterprise architect at Qlik. I’ve been here for roughly six years at this point and my expertise lies within the architecture frameworks, integration topics as well as our cloud strategy at Qlik.
The presentation today, what we want to go over is just a high level: What is the App Analyzer; kind of where it came from and why was it created. We'll walk through a high-level demo; kind of paging through each sheet to show some of its capabilities and highlight a couple of examples. Along the way we'll talk about how it's configured. I’ll actually show a short little demo of that. We'll highlight a couple of troubleshooting issues, though hopefully you shouldn't have to deal with many. And then lastly, we'll talk about a couple of things to remember; some resources and of course how and where to get the application itself.
Can you start by explaining for us, yeah, what the App Analyzer is?
Yeah, no, absolutely. So we'll go over a very quick high-level overview, but ultimately the application is a direct port of the App Metadata Analyzer for Qlik Sense Enterprise on Windows. That application, or its predecessor, actually ships with the Windows product; you can find it with the additional applications like the Reloads Monitor and the Sessions Monitor for the Windows platform. And given that the API endpoint was the same in our SaaS platform, it seemed like an obvious first choice for a monitoring application to port over. Now, the data present in the application is pretty much identical across both platforms, and you're able to get the app base RAM footprint. That's the RAM footprint of the app opening from disk, so without any users or session cache; that's actually a direct quota in the SaaS platform that we'll talk about in just another slide. We'll also talk about the peak reload RAM, the maximum amount of RAM the application takes to reload. And then there are a number of other metrics, like the table and field RAM, the overall row counts of the tables, cardinality, and the presence of data islands and synthetic keys, along with circular references, which can give you some hints on the integrity of the application itself.
What are some example use cases for it?
When to use it? There's obviously a myriad of different times that you might want to come in, but at least at a high level: number one, just tracking of app RAM size against tenant quota. We'll talk about the quotas on an additional slide, but this is going to be a key metric that you will need to monitor in SaaS. We'll talk about anomaly detection, and again, it can be used for identifying potentially problematic applications, as well as, of course, optimizing them; these tend to go hand-in-hand. Lastly, there's data modeling standardization and best practices; kind of holding all of your developers to a certain bar and making sure that you're preventing any potential problems in the future for your end users.
Dan, does this come pre-installed as a part of the SaaS tenant?
At this point in time, it does not come pre-installed. So we'll talk about the configuration but you will need to grab this application from Qlik community and import it into your tenant. The good news is: it's just an app; and it should only take about 10 minutes to configure.
Nice. So you're going to show us a demo now; and what tier of SaaS are you about to demo from?
Good point. So I want to bring this up just to add a little color and context, because not everyone on the call might be aware of, you know, what the tiers are and what the standard quotas are and so forth. We're going to be demoing from a tenant that just has the standard tier enabled. What that means is: out of the box, everyone gets this. There's no additional charge for this whatsoever; it's kind of baked into the standard license. The base RAM quota per application is five GB by default. And again, that's excluding all users, excluding all caching, and so on. It's just: if you were to open the application in RAM, how much does it take? And that is one of the default quotas in the SaaS tenant. There's no horizontal limit, so you can open as many applications as you want that are under five GB, and you can have as many users on them as you want. No limit from a horizontal scaling perspective.
There is also a quota on peak reload RAM; it's 3x the base RAM quota. So ultimately, when you reload the application, 99% of the time you should be under this quota of 15 GB for maximum reload RAM. However, you know, if you're doing a mountain of joins, or auto-generates, or auto-numbers, there is the potential to hit this, though it is quite unlikely. But the application will highlight this and track it, so there's no guesswork. The application will illustrate all of these quotas for you.
And what happens if someone tries to use an app that goes above these quotas? Like a base RAM quota of an app that's above that; will it even upload?
It will. It, well, it depends. I don't want to get too deep into that. But you could have an application that's, let's say, two gig on disk, that you could upload. And it might break, you know, in a week or two as you start to reload it, right? You can upload one that's right below the quota.
Yeah.
But there are recent improvements in the upload service. So it now should check the RAM on import; and actually, from a distribution perspective, be smart enough to say that ‘Hey, this application won't open in RAM.’ So I know that actively product management and R&D are working to make that a better and better user experience; to potentially, you know, suggest that you might need an additional tier. And there's more “smarts” going into that in the future.
Okay, cool.
This is just the standard tier. There are additional tiers. So if you did want to upload an application that's 10 gigabyte on disk, 20 gigabyte on disk. I can say that almost every single corporation I’ve worked with has at least one that's larger than, you know five gig in base RAM. We do have additional tiers that can serve up to 50 gig applications in RAM. So please do contact sales, if you are interested in that. There are additional paid offerings.
Can you quickly explain how Qlik stores data?
Yeah, absolutely. I did want to pull this up, because I will be directly mentioning symbol tables and data tables, and if you haven't seen this slide, or if you're not really into Qlik data modeling, you might not know what those are. So keep in mind that if I were to pull in a simple table, let's say just region and sales, the way that Qlik actually stores this data under the hood is in bit-stuffed pointers. You will actually see the symbol tables and the data tables illuminated inside of the App Metadata Analyzer, so you can get a concept of how much memory an individual field takes as part of the data model, and then how much memory an individual table takes as part of the data model. So just kind of keep this in the back of your mind as I go through the demo, as this inherently is how we store data, and that becomes pretty apparent when we actually look at the application itself.
It's great to see it visualized like this.
No more PowerPoint. Let's actually hop into the presentation here. Now, I've already imported the App Analyzer. Note that there is a version here, because we are continuously releasing new versions. For now, let's just open it up, and we'll do a demo of the front end before we walk through the back end.
So right now we're looking at the dashboard sheet. On the left-hand side, you can see a number of the thresholds. The top two are the most important ones that you'll want to pay attention to throughout the application; those are the quotas for the standard tier. And what I'm doing is giving you an alert at the 80% threshold of that quota. Note that this is configurable in the load script; you could swap that to 60% if you wanted to be more conservative. And it's saying that you've got four applications that have basically exceeded the threshold, that are at least nearing the quota, as well as one that's actually exceeded that 80% mark of the peak reload RAM quota.
You can use this along with other sheets to see apps that are encroaching on that quota, so you can make sure that, you know, they're rectified before they will no longer open within the tenant.
I love how it identifies potential issues. What else are we looking at here?
Across the top you can see a number of KPIs. This is just helpful from a pure inventory perspective, quite frankly, to see how many apps you have in your environment and how many tables across each individual app. I can select, you know, any arbitrary application and then see all of the tables, the total table RAM footprint, and the number of fields; and there are many other additional thresholds that you can set in the load script.
For example, you know, I don't want applications to contain more than 150 fields or I don't want an app to contain more than 100 million records. Just from a sheer best-practices standpoint, all of those are adjustable.
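Those limits live as variables near the top of the App Analyzer's load script. A minimal sketch of what adjusting them looks like; the variable names below are illustrative assumptions, not the app's actual names, so check the commented load script of your copy:

```qlik
// Hypothetical threshold variables, modeled on the limits described above.
// The real variable names in the App Analyzer's script may differ.
SET vQuotaAlertPercent = 0.80;        // alert when an app reaches 80% of a quota
SET vMaxFieldsPerApp   = 150;         // flag apps with more than 150 fields
SET vMaxRecordsPerApp  = 100000000;   // flag apps with more than 100 million records
```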
Now I actually want to take this moment to call out an example here. So, I've clicked on a single application. Note that it's pretty much completely capped; it's at our base RAM quota. From a peak reload RAM perspective, it's okay, but you know, at some point in the very near future, it's no longer going to open on this tenant. I can then see that I've got a line item table that's taking a little over a gig in memory, and then I've got some comment fields that, it appears, take a very large amount of space. One individual field is over one gigabyte in RAM, so I could pretty much drop down to 80% of the quota if I just dropped that individual field.
Now in the Windows world this would just be considered, you know, optimization. In the SaaS world, this is going to keep your apps running at no additional charge; so it becomes even more important.
I can then take a look at another application here, that's my optimized version. If I select that app, you can now see that I’ve dropped all of the comment fields. I actually dropped eight comment fields. I dropped down from 5 GB to 1.7 GB, and that was just eight fields. And let's say that you know, you probe your users; you interview them to see ‘are these fields that you're actually using?’ ‘Are these critical to your analysis?’ You know, obviously before you drop them. But you might not have ever known that they had consumed that much memory. This makes it very very easy to do some pretty simple optimization.
That's a dramatic improvement. That's really impressive.
Yeah, and I can say that we have one customer example: they had an 18 GB app on disk. This is obviously in the Windows world. They used this application to drop, I think it was a total of 5 fields, and they optimized a few time stamps. They dropped it down to 10 GB in RAM. So they shaved 8 out of 18 GB off from just changing a few different fields. Especially once you start to hit high data volumes; little things like that can make a really big difference.
Now I want to clarify something. Those red lines that kind of make a dramatic impact; are those system thresholds? Or are those just sort of ‘best practices,’ ‘this is what we recommend’ thresholds?
Yeah. No, that's a great question. These are arbitrary, you know, system administrator thresholds. They're rough, kind of default lines in the sand, but you don't have to treat them as hard limits. The 100 million record mark is actually probably a rough ballpark, quite frankly, for the SaaS tenant. These are intended to say, you know: as a developer, I know what data sources my users are going against. I don't want someone pulling in half a billion records. I don't want them loading 400 fields into an application with a select star. Or I don't want them to have 100 tables in their application. They're more meant to be guideposts that an administrator can set up and then monitor, you know, which developers aren't following their definition of best practices. So those are intended to be customized.
And does the number of users affect the RAM performance?
From this app's perspective, no. That's actually a really good distinguishing point. This is only looking at the data model, without any users. So, it's just pure application. This threshold of 5 GB excludes all user activity. It's simply for the data model.
Okay, can you walk us through the rest of the sheets?
Yeah, absolutely. So, if we go over to the Threshold Analysis sheet: let's assume that you've, you know, selected an application. You can then see all of the details for the individual tables and the individual fields. You've got some additional thresholds that are set in here; again, the number of field records, field cardinality, table number of records, as well as, you know, number of fields, where I believe the setting here is 150. So, just more detail. Same general metrics that you would see on the dashboard sheet, just a different way of looking at it.
From the App RAM to Quota perspective, this is my favorite sheet from, like, a tenant inventory standpoint; to see where my apps stand today and, realistically, you know, should I be considering additional capacity in my tenant, right? If I see that I've got a pretty heavy percentage of apps that are at 60%+ of my quota, it might be time to look into having additional capacity. Or maybe I should be looking into something like On-Demand App Generation or Dynamic Views to start thinning out my data models; either could be a good route. But really, it's looking to see what percentage of all of my applications sit within what percentage of the base RAM quota, as well as the peak reload RAM quota.
Now note, you might have, you know, some applications, for example this one, that have a pretty high peak RAM but relatively low base RAM. This could be from using, you know, monolithic QVDs that take a lot of RAM to buffer. It could be that you're using two dozen auto-numbers in your script, which takes just a lot of overhead. More than likely, I'd say 90%+ of the time, this quota is going to trip before this one will.
If we go over to this sheet, this should look familiar to some users, especially from the Windows world. I'm focusing kind of heavily on ownership here, as well as a little bit on garbage collection. So, I do want to call out this visualization in the top right, where you can see some apps that have never been reloaded on the tenant, and then others that just haven't been run in a really long time. So it might even be worthwhile to make a selection here, see if there are trends in ownership, see what spaces they belong to, and see, you know: are all of these applications actually necessary? So it does have the capability to do a little bit of garbage collection within the tenant.
I'll leave it to you what you want to do with this. You could, of course, go in and select 'I want to see trends by user' of, you know, synthetic keys, of data islands, of number of fields; and you could actually start to identify users that might need training.
Now this is probably the most complex sheet, at least from an analysis standpoint and what it's capable of doing. This has been running for quite a while, so I'm going to filter this to pretty recently. And I have some applications here that I wanted everyone to specifically take a look at. So, I've set up a few applications here that are reloading every night, and they're growing at what I intended to be 1%, 2%, 5%, and 50% growth rates, so you can see a few trends here. On the left-hand side, all of these buttons will change the actual metric that it's tracking. So, if I select this app that's growing at a rate of 50%, I can expand this and I can see that, you know, daily I'm increasing at a rate of 50%. You could then anticipate when this number is going to hit 5 GB, which would be within this week, more than likely. If I flip that metric to go ahead and look at peak RAM, you can note I'm actually really close. And because of this (I'm using auto-number, I'm using auto-generate within that script), you will actually hit the peak RAM threshold before you would the base RAM. But this becomes very simple to, you know, come in and tie alerts to, because I don't want to have downtime for this application.
Yeah, this is great! You can really see trends and anticipate problems before they happen.
Exactly! And if you go ahead and click on total rows too, now I can see where, hey, if I stayed under that 100 million benchmark that my, you know, sysadmin had set, I likely wouldn't have to worry about either of those RAM thresholds. But it can still be that nice guidepost that lets you pretty confidently say you'll be able to stay under a certain threshold. So you'll have to kind of play with that a little bit, and use this application to see how applications profile with your own data sets.
Lastly you can take a look at reload time as well; which is handy. This can be nice to see hey, is it potentially on the sheer data volumes from Qlik's perspective? Or is it potentially my database is taking longer? Or I’m using a rest API and that's taking longer? It'd be interesting to actually be able to track those trends from a reload time perspective as well. I hope that at least sheds a little bit of light on what this application does.
Yeah, it's an amazing amount of information on how the tenant is performing overall; analyzing specific apps and helping admins find ways to optimize and improve. Now, since it's not pre-installed, what's necessary to set it up?
Yeah. Let's go ahead and take a look at the data load editor; I promise it's not too daunting in this application. So this is, you know, exactly what you'll see when you pull in the application from Qlik Community, and we'll show that in just a moment. You're just going to see 4 variables. There's a full walk-through guide, so you don't have to rely on the comments in the load script, but I did comment everything, so you should actually be able to just follow it from here if you had to. Note at the top, it does require a Tenant Admin, and you will have to have the Developer role, because you will need to use the REST Connector, which requires an API key; to get an API key, you have to be a developer. You have to be a Tenant Admin because, obviously, you have to be able to see all of the applications, and we of course want you to be able to get everything in the tenant.
That makes sense.
The first one is just as simple as possible. I actually forget what my URL is; I'm going to put 'ea-hybrid-qcs-internal.' Our region is 'US'; that can be one of a few different regions. Now, this REST Connection variable; I want you to notice one thing. First, notice that there's just a ':', because I'm actually sitting in the Monitoring Apps Dev space. If I wanted to fully qualify it, it would look something like this. The ':' is for a relative path; so if I was moving this across spaces, and I had a data connection named 'REST for App Analyzer' in multiple spaces, I could move the app freely without having to worry about the actual qualified space name. So, that's more of a tip and a trick, quite frankly.
That's a great tip.
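The relative-path tip above can be sketched in load script. The connection and space names below are just the examples used in the session:

```qlik
// Fully qualified: ties the script to the "Monitoring Apps Dev" space
LIB CONNECT TO 'Monitoring Apps Dev:REST for App Analyzer';

// Relative: the leading ':' resolves to whichever space the app currently
// lives in, so the app can move between spaces without editing the script
LIB CONNECT TO ':REST for App Analyzer';
```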
I'm going to name this 'Delete Me,' because I don't want to muddy up my tenant. But we're going to create one really quickly. So, we're going to create a new REST Connection; I can search for REST. Now the tenant URL; again, this is all in the guide: 'qcs-internal.us.qlikcloud.com.' So, that's my tenant. And we can put in any arbitrary API endpoint; I'm going to use '/api/v1/items.'
Now below, you only have to fill in one query header, and that's ultimately for your API key. So, the query header is 'authorization,' and then you'll put in the word 'bearer,' a space, and your token. I'm going to simply paste that from my clipboard. But if I go back over, I can see 'bearer,' a space, and the token. I'll name this 'Delete Me,' because that's what I said it was going to be called, and I'll test my connection. Connection has succeeded. So, now you can see that connection; because I'm in the Monitoring Apps Dev space, this relative path will pick it right up. Below that: this application incrementally loads, so you don't have to pull all of your applications every single time. It'll only pick up apps reloaded since the last run, which is the only time the metadata actually changes for the applications. I'm just going to put them right in the Data Files for this space, so you can just do ':data files.' Alternatively, you can store them in S3, Azure Blob, Google Drive; wherever you store your data. The last one here we'll talk about in just a moment; it's used for troubleshooting, so you can ignore that for now.
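Pointing the incremental QVDs at the current space's Data Files can use the same relative ':' convention described earlier. The table and file names here are hypothetical illustrations, not what the App Analyzer actually writes:

```qlik
// Store the incrementally loaded metadata into the current space's Data Files.
// "AppMetadata" and "app_metadata.qvd" are example names for illustration.
STORE AppMetadata INTO [lib://:DataFiles/app_metadata.qvd] (qvd);
```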
But let's go ahead and hit ‘Load.’
Does this take a long time to reload?
It does not. Even on some fairly large tenants, we're talking about minutes. This already has the incremental QVDs that you can see sitting in Data Files, so it's only going to pick up, you know, a few applications since it was last reloaded. I believe on this tenant, which has roughly 100 apps, it's roughly 30 seconds or so. But at most, we're talking minutes.
In which space would you recommend this app be in? Should this be in a shared space?
Yeah. I always recommend that if an application is going to be published out, it begin in a shared space; it's always easier to start in a shared space. And then typically, as my convention states, I'll name something, you know, 'REST for App Analyzer,' and I'll have my Dev version, and then I'll have a Prod version of that same connector that lives in my managed space. It's different from the Windows perspective of, you know, having a Dev tier and a Prod tier and a QA tier and a Test tier. In the world of SaaS, you can do that with spaces. So, I usually just qualify them by whatever "tier" they are, and I always use shared spaces for that.
So, what would happen if an app that the App Analyzer is reading from suddenly gets deleted? Would that cause any problems?
Yeah, no. It's a great tee-up for this Error Mode. So, from a troubleshooting perspective, 100%. If you're in a very large, active tenant, or even if you're doing, you know, programmatic testing, that's where it's going to come up; it comes up quite commonly within our own R&D tenant. You might see a '404' or two. And the reason is: once the application reloads, it does a GET of all applications across the tenant, and then it iterates over every metadata endpoint for every app. Now, if one of those apps vanishes in the middle of the reload process, it's going to say 'Hey! I can't find this application,' which is a 404. So, once you've configured the app, it's generally considered just a good practice to set the Error Mode to 0 for the application (if you know it's functional, of course) to kind of steamroll over those errors as they come up.
Just note that if Error Mode is set to 0 and you're steamrolling over those errors, and at some point in time, you know, your API key expires, it's going to steamroll over those errors too, and you're no longer going to be able to update your application appropriately. So, in conjunction with that, because this is the other place that the app could fail: when you create an API key and you set your expiration date for that API key, put a reminder in Outlook or put an alert somewhere that will remind you, let's say, a week prior to update that API key. So, you don't have to worry about, you know, that also causing an error, you know, 365 days from now, right? Or 90 days from now, however long you set that expiration for.
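The ‘steamroll over 404s’ behavior described above (fetch the app list once, then tolerate apps that vanish mid-reload) can be sketched in Python. This is an editor's illustration only, not the App Analyzer's actual load script; the NotFound exception and the fetch_metadata callable are hypothetical stand-ins:

```python
# Sketch of the "steamroll over 404s" pattern: iterate every app's
# metadata endpoint and skip apps deleted mid-reload.
# NotFound and fetch_metadata are illustrative assumptions, not
# part of the App Analyzer or the Qlik APIs.

class NotFound(Exception):
    """Raised by fetch_metadata when an app returns HTTP 404."""

def collect_metadata(app_ids, fetch_metadata):
    """Return metadata for every app that still exists.

    fetch_metadata(app_id) should return a dict, or raise NotFound
    if the app was deleted between the initial GET and this call.
    """
    results = {}
    for app_id in app_ids:
        try:
            results[app_id] = fetch_metadata(app_id)
        except NotFound:
            # Analogous to the load script's Error Mode 0:
            # note the miss and keep going instead of failing the reload.
            print(f"App {app_id} vanished mid-reload (404); skipping.")
    return results
```

As in the transcript's caveat, the catch-and-continue approach also swallows unrelated failures if applied too broadly, which is why the expired-API-key scenario needs its own reminder.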
That's a great tip: to put it in the calendar; because otherwise, it just expires and causes problems.
Yep. Otherwise, it's going to throw you for a loop one year from today or however long you set it. Guaranteed. Because you won't remember.
Are there any resources that admins should be aware of?
I have a resources slide here. I’ll pull up a couple of those pages that I talked about. I just want to hammer this point again: it is available on Community. I’ll go and visit that site in just a moment. But again, it doesn't ship with the tenants, so I suggest that you subscribe to, you know, this post and follow it.
So, here's the page for the App Analyzer. You can see that it's actually been up since August of 2020. I believe there have been one or two version changes since. But you can find a demo very similar to the one that I just gave on YouTube. You can also find a pretty exhaustive install guide for the app here as well. It's going to show you exactly how to set up and generate an API key, and how to build out that REST connection. A step-by-step guide to make sure that you'll be good-to-go for the application. So, you don't just have to go off the script notes. And then you'll see the actual, you know, dot release version of the app. I just want to call one other thing out briefly: the App Analyzer isn't officially supported by Qlik, but we do have both Product Management, R&D, along with Pre-Sales (including myself) that are following that Community entry. If you have issues there, please post them up; even just general feedback. We do look at it. We do address it. You know, we've got many customers that are already using this, and we know that it's going to be critical, so we will support it, you know, to the best of our abilities. But it isn't officially supported, and I am mandated to say that.
Can we take a look at those best practices sites you mentioned?
Yeah! So, we've got the Diagnostic Toolkit, which has been up for a couple of years now. And quite frankly, some of this is borrowed from the QlikView days, because we're working off the same engine here. What this does is: it gives you (you don't have to fill out the top bit) a PDF export if you wanted one. Again, like I said, some of our customers actually do require the output of this in their publish process.
It's these bullets below that can be really helpful as you're validating or you're building out a data model, and each one of these typically has links to relevant Community articles. For example, this one actually links out to Rob Wunderlich’s utility. There's a whole bunch of articles that link out to, you know, Henric Cronström’s (HIC's, if you're familiar) old posts. There's a lot of really good collated content here in one centralized place. And a bit on interface performance too. But the relevant bullets from the data model perspective are all in this top section here.
So, from the Admin Playbook perspective this is a recommended playbook and best practices guide for the Windows platform. There is a lot of overlap from an application perspective. But the reason I’m pulling this up is: we're currently investigating and exploring in 2021 the possibility of building something quite similar for the SaaS platform. You know, this took roughly a year of work to build out for the Windows platform. So, this is more of a ‘I’d keep your eyes peeled for something similar to this in the future from a SaaS perspective.’
Okay now it's time for some Q&A. Please place your questions in the Q&A panel on the left side of your on24 console. Dan, which question would you like to address first?
Yeah, let's go ahead and start with the first one that I see here. So, does the App Analyzer need to be purchased separately?
While it is separate, there is no fee. This is a free application that Qlik has provided to you. Again, while we don't have a strict Support policy for it per se, we will do our best to support it over time, at least unofficially from a Pre-Sales perspective and a PM perspective. But it is something that we plan to make readily available for the foreseeable future at no cost, that you can import into your tenant and give a run.
The next one here: Is there a version of this app for QlikView?
That's a really good question. So, one thing that I didn't call out during the presentation, if you weren't already aware, is: Qlik Sense Enterprise SaaS can host QlikView applications. And the App Analyzer will actually pick up those applications. So, if you're using, let's say, QlikView Publisher to distribute out QVWs, the App Analyzer will check whether those are, you know, under 5 GB in RAM or not, and actually give you metrics about the model itself, which is unique to the SaaS platform, considering that you're able to analyze both types of applications. So, good question.
I’ve got one here that says you know: I’ve got some large applications in my Enterprise on Windows environments. What do I do with apps that are let's say 10 GB on disk?
So, yes. I did bring up that this presentation was predominantly focusing on the standard tier. We do have two additional tiers at this point in time: Expanded Capacity and Dedicated Capacity. Both of those allow for applications that are up to 50 GB in RAM. There is of course, you know, an upcharge for this, but do contact sales if you are interested.
We've got one that is: are there tips for optimizing peak reload RAM?
Yeah. I did mention at least one of them, which was: you know, try not to use monolithic QVDs. Just the way that Qlik loads QVDs, they can take a lot of peak RAM to buffer in. So, it's why generally we say, you know, partition your QVDs. Be that by month, for example, or potentially by dimension, because they will actually be easier, or less RAM-intensive, to consume. Also, just try to avoid, you know, massive nested Ifs, egregious usage of, you know, AutoNumber or Autogenerate, and things that are kind of notorious for taking a lot of peak RAM that you might be fine with eating in the Windows world. But from a SaaS perspective, you know, it might behoove you to seek out alternative strategies there.
We've got one that's: will running the App Analyzer itself impact the performance of other apps on my tenant?
Good question. For this one, I mean generally speaking, no. Because it's reloading so rapidly. And we don't suggest that it reload, you know, more than once a day, quite frankly. This isn't an app that you're gonna have reloading every 5 minutes. You're probably going to be checking it on, at most, a daily basis. So, I usually suggest running it, you know, in your batch window overnight. Additionally, you know, the SaaS platform is, by nature, a microservice-based, cloud-native offering. So, you're not going to hit the same bottlenecks that you might hit in the Windows world; even if you were running it, you know, every 5-10 minutes, I still don't think it would impact the performance of your tenant.
We've got one that is: what are the minimum privileges needed to do this?
So yeah. We did cover that: that's going to be Tenant Admin and the Developer role. You will need Tenant Admin to, of course, access all of the assets across your tenant, as well as the Developer role so you can get an API key to actually interact with the Qlik APIs.
We've got another one that: is this the same as the app for Windows, the Sense System Performance Analyzer?
No. However, it does have a direct parallel in the App Metadata Analyzer for Windows, which has pretty much the same exact data model. That is another application that does, again, ship with the Windows product. You've got to manually import it into your Qlik Sense Enterprise on Windows site, but it is pretty much identical to this application. Just note that in this one, obviously, all of the piping has been redone to work with the Enterprise SaaS APIs. Instead of both the ‘CPU hit’ and ‘RAM expansion on open,’ in SaaS you won't get the CPU, but you will get the RAM. For the App Analyzer, that's the key metric that the application exposes. And again, that's the key quota for the standard tier. That 5 GB app open event is actually the exact event that we throttle, as with REST connections where data limits are in place.
… pagination… gotcha. So, basically, the question is: in the SaaS APIs, there are limits on the amount of data that's returned, so you do have to paginate.
And yes, if you actually take a look at the App Analyzer's load script, all of my REST calls to get the apps, to get the spaces, to get the users, all of those are set up to paginate by default. So, if there are, let's say, more than 100 apps in your tenant, that will be a minimum of two API calls to fetch all of those; and yes, you can use those example scripts in your own applications, if you see fit.
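The paginate-by-default pattern can be sketched generically in Python. The response shape assumed here (a data array plus a links.next.href pointer) follows the general style of paginated Qlik Cloud REST responses, but treat the shape and the get_page callable as assumptions, not the App Analyzer's exact script:

```python
# Generic link-following pagination sketch (editor's illustration).
# get_page(url) is assumed to return a dict shaped like:
#   {"data": [...], "links": {"next": {"href": "..."}}}
# which mirrors the general shape of paginated Qlik Cloud REST
# responses; the exact shape is an assumption here.

def fetch_all(get_page, first_url):
    """Accumulate items across pages until no 'next' link remains."""
    items, url = [], first_url
    while url:
        page = get_page(url)
        items.extend(page.get("data", []))
        # Stop when the API no longer advertises a next page.
        url = (page.get("links", {}).get("next") or {}).get("href")
    return items
```

With a 100-item page size, a tenant with more than 100 apps would make the loop body run at least twice, matching the "minimum of two API calls" point above.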
Is there an application like a License Monitor for Qlik Sense client managed?
Great question. And we get asked this a lot. This App Analyzer is the first monitoring application for SaaS, because it is, quite frankly, the simplest to port, given that it's built off of the same JSON structure, the same application metadata, that's available on both platforms. It was pretty close to plug-and-play in our SaaS tenant. That said, we completely understand that, you know, user monitoring, adoption, wanting to see what applications users are using and what sheets they are navigating to, wanting to track expensive objects, absolutely. That is coming. In fact, the latter is coming in the February release, to be able to actually look at the performance of individual objects in the application itself from a user perspective.
Absolutely, product management and R&D are researching the best way to expose those metrics, but we do recognize that, you know, that is of course not available in this application. But just to be frank, that's not the intent of this application.
Okay Dan, we have time for one last question.
Okay gotcha. Let's take, this is a good one: so moving from the Windows platform to SaaS, how do we know if it will fit in standard tier?
Really good question. So, we have developed a tool internally. We call them the SaaS Readiness Applications, which Pre-Sales, Services, and a number of different organizations across Qlik can help run on your site. It's simply a QVF that we can hook up to your site that will do a profile of all of your applications, and it will actually show you what respective tier each application would fit in. So please do reach out to your account rep, and we can work with you to run that to make sure that you'll be as prepared as possible to migrate over to SaaS.
Note on the Readiness app mentioned:
The SaaS Readiness apps are available on the Partner Portal for download. That said, this was built as an internal PreSales tool that was then exposed to Partners. It requires interpretation. There have not been any major enhancements made to this app in a while.
A more complete toolset is now available publicly that incorporates much of the SaaS Readiness app (it was built on top of it) and that can be found in the Migration Center.
Great! Thank you very much, Dan. I think this can be really useful for people.
Absolutely! Thank you, Troy for having me.
Thank you everyone. We hope you enjoyed this session. Thank you especially to Dan for presenting. We appreciate getting experts like Dan to share with us. Here's our legal disclaimer, and thank you once again. Have a great rest of your day.
Currently, there is no available feature to delete (or update) existing tags in Qlik Sense SaaS.
Qlik has previously received feedback on this feature. For up to date information and to express your interest in this feature, review https://community.qlik.com/t5/Suggest-an-Idea/Enable-Deletion-Update-of-Tags-in-Qlik-Sense-SaaS/idi-p/1831354
When accessing the Qlik Sense Hub with an iOS device (iPad/iPhone), Apps are not loaded or cannot be opened. HTTPS access is used.
The user tries to open the Qlik Sense Hub on an iOS device (iOS 8 and above) using the address https://servername/hub
When attempting to open an App they are presented with an endless loading screen or "Connection to the Qlik Sense engine failed for unspecified reasons"
A review of the Qlik Sense Proxy logs (Folder Trace\ and SERVER_Audit_Proxy) might show SslNoClientValidation WARN entries.
Environment:
Qlik Sense Mobile on iOS
Apple has removed the ability of users to ignore certificate warnings on devices running iOS 8 and above. On laptops or desktops you would have the option to ignore the SSL warning, but this option is not available on iOS devices.
1) Since the "Connection Lost" message or endless "bubble" loading screen are typically indicative of an allowed URL/host problem, be sure that the domain through which the user is accessing the Hub has been allowed through firewalls and proxies. See the article Error Message "Connection lost" When Connecting To Qlik Sense Hub for more clarity on this. If users on laptops / desktops do not receive this message, then proceed to step 2.
2) Access https://sense-demo.qlik.com/hub using the iOS device. If this works then proceed to step 3.
3) Apple requires an SSL cert from a trusted CA when accessing websites via HTTPS (check their knowledge base for a list of providers). You can either acquire a certificate from one of the vetted list of providers or the user(s) can access the Hub using HTTP. For guidance on applying a third party certificate, see article How to change certificate for the Proxy.
4) If no certificate from a trusted CA can be acquired, it will be necessary to install the current one on the device. See Apple's documentation and its Apple Configurator for details.
The Qlik Sense Mobile app allows you to securely connect to your Qlik Sense Enterprise deployment from your supported mobile device. This is the process of configuring Qlik Sense to function with the mobile app on iPad / iPhone.
This article applies to the Qlik Sense Mobile app used with Qlik Sense Enterprise on Windows. For information regarding the Qlik Cloud Mobile app, see Setting up Qlik Sense Mobile SaaS.
Content:
See the requirements for your mobile app version on the official Qlik Online Help > Planning your Qlik Sense Enterprise deployment > System requirements for Qlik Sense Enterprise > Qlik Sense Mobile app
Out of the box, Qlik Sense is installed with HTTPS enabled on the hub and HTTP disabled. Due to iOS specific certificate requirements, a signed and trusted certificate is required when connecting from an iOS device. If using HTTPS, make sure to use a certificate issued by an Apple-approved Certification Authority.
Also check Qlik Sense Mobile on iOS: cannot open apps on the HUB for issues related to Qlik Sense Mobile on iOS and certificates.
For testing purposes, it is possible to enable port 80.
If not already done, add an address to the White List:
An authentication link is required for the Qlik Sense Mobile App.
NOTE: In the client authentication link host URI, you may need to remove the trailing "/" from the end of the URL; for example, http://10.76.193.52/ would become http://10.76.193.52
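The trim above is simple string normalization; a minimal sketch, assuming the host URI is available as a plain string (the function name is the editor's, not part of any Qlik tooling):

```python
def normalize_host_uri(uri: str) -> str:
    """Drop any trailing "/" so the client authentication link host URI
    matches the form the Qlik Sense Mobile app expects."""
    return uri.rstrip("/")
```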
Users connecting to Qlik Sense Enterprise need a valid license available. See the Qlik Sense Online Help for more information on how to assign available access types.
Qlik Sense Enterprise on Windows > Administer Qlik Sense Enterprise on Windows > Managing a Qlik Sense Enterprise on Windows site > Managing QMC resource > Managing licenses