STT - Optimizing Qlik Cloud App Performance


Last Update:

Mar 13, 2026 3:55:15 AM

Updated By:

Troy_Raney

Created date:

Mar 13, 2026 3:55:15 AM

 

Environment

  • Qlik Cloud

Transcript


Hello and welcome to the March edition of Techspert Talks. I'm Troy Raney and I'll be your host for today's session. Today's Techspert Talk session is Optimizing Qlik Cloud App Performance with our own Eamonn Harrington. Eamonn, why don't you tell us a little bit about yourself?
Yeah, hey Troy. Happy to be here. As Troy said, my name is Eamonn Harrington. I am a Principal Analytics Pre-Sales Architect here at Qlik. I've been here for about three and a half years and then prior to that I was a Qlik customer for about 5 years. I had a great experience obviously and then (you know) made the transition over to working for Qlik.
That's awesome. That's so cool. Like, you've been working with Qlik so long and now you're actually working for Qlik. I love it. Today we're going to be talking about App Analyzer and some other tools to take a look at app performance. Eamonn's going to take us through a demo of how to actually use those tools and improve an app's performance, and we'll be looking at some best practices along the way. When we start talking about performance optimization, Eamonn, how do we start that conversation? Because I know it can be really relative for people, right?
Yeah, absolutely. Performance optimization is a broad enough term that it can mean sort of whatever it needs to mean in the moment. In my experience as a customer, often it was what are my users telling me when they're using my applications. You might have people saying the app is slow, filtering takes too long. Those are usually cues that you need to think about optimizing your application. Now, that could also mean I need to optimize the back end of the app, right? It needs to reload faster, the data model needs to be smaller, that sort of thing. As you would expect, these two concepts are very much working hand-in-hand with each other. But typically the drive to start thinking about and start doing application optimization comes from the front end.
Yeah, absolutely, users. And considering that, we should probably narrow our scope to Qlik Cloud Analytics. Do customers need to be conscious of back-end performance?
It's a great point. One of the great things about Qlik Cloud is that we have horizontal scaling.
And what is horizontal scaling in terms of apps?
Yeah, we will spin up as many engines of a particular size as you need for the current workload. That workload might be reloading applications; it might be serving those applications up to users. Anything that's happening right now. That's in contrast to vertical scaling, which would be making a single server as big as it needs to be. So you definitely do want to pay some attention to back-end performance, because these things go hand in hand. But unlike an on-premises environment, you don't necessarily need to be too worried about what's going to happen with this application if it becomes successful, right? Sometimes we were hindered by our own success: I've got a big application, and now too many people are using it, and that's why I need to optimize. That's less of an issue on the cloud.
Great. So one advantage of the cloud is that as more users come in, the cloud will scale to meet that demand automatically. But if we've got an app where we want to improve the performance, what's the next step?
Yeah, great question. Just like anything in life, probably the next step is let's measure it and let's see exactly how slow it is. So in Qlik Cloud, we can do that through the built-in application Performance Evaluation functionality.
Do you want to take a look at that now?
For sure. I'm now in my environment and I've got a couple apps in front of me. What I want to draw your attention towards is the ellipses here on an application. And under tools, we've got this Performance Evaluation button. If I click into it here, I've got a bit of a history of different runs.
If you wanted to perform an evaluation, you just hit the Evaluate Now? Is that how that works?
Exactly.
And I see we've had errors on all those.
Exactly. This application is actually performing so slowly that the evaluation couldn't even totally finish. But the good news is that we do get the results up to the point where things failed. So under Info, we can see how large the application is, how many rows are in the data model, how many sheets. But importantly, under Results: okay, someone was telling me that this was slow to open. Let's actually look at the sheets by load time.
Okay, so it was able to get through some evaluation before it failed.
Absolutely. Yeah, it got through enough to let us know that this is a big issue, right? 45 seconds for a single sheet is…
That's an eternity.
Yeah. And better than just the results of the sheet, we can drill in and see which are the individual objects that are really driving this load time. The whole sheet didn't take 45 seconds to open. Really, this object took 45 seconds.
Whoa.
This is even more granular so that we know exactly where we should be looking.
That's awesome. So that really helps you drill in and find the problem areas where you can continue to troubleshoot what's going on there.
Exactly.
All right. So this app that you've done the evaluation on, what is this app and what is it supposed to do? We got User Tips. Can you tell us quickly like high level what kind of app this is?
Yeah, absolutely. Probably always useful to set some context, right? This application takes business review information. You can think of places like Yelp or Google Reviews and aggregates it all together. So this is just publicly available review information. How is it rated? Who rated it? That sort of thing.
All right. So that Performance Evaluation tool that's built into Cloud told us a lot already. After you run that evaluation, identify some areas that beg for some more attention. What would you do next?
I would jump into the App Analyzer. The interface side of things is all measured by the application Performance Evaluation, which we just saw. But a lot of the performance issues could also come from the data load side of the house. If I've got a data model that's just really out of control, that's going to slow everything down. So the next tool I want to look at before we open up the app itself is, as you said, this App Analyzer.
And where can you get the App Analyzer? Where does that come from?
Yeah, good question. The App Analyzer is a monitoring app that you install in your tenant.
Okay.
The good news is that it's extremely easy to install. It actually has an Automation template that just deploys the whole suite of applications for you. So, if I come into an Automation and I search for "monitor," I can find the Qlik Cloud monitoring apps deployer. If you want some more information, there's a community article here. There are several other applications that monitor all sorts of different telemetry information about your tenant. Probably a lot of you on the call are already familiar with these, but if you're not, that's your number one piece of advice: go get the monitoring apps.
No, this is great. And it gives you a whole suite of monitoring tools, and it's good to run this occasionally, because I know we just came out with an update to the App Analyzer tool that takes a look at deprecated objects as well. So, it's a good idea to run this. Can we take a look at the App Analyzer? See what it can tell us?
Yeah, absolutely. Here's the App Analyzer. We've got several sheets here. I'm going to be focusing on this Metadata Analysis sheet. There's a lot of other information in here as well, including the Deprecated Charts improvement. Even this stuff like, hey, who's using what application, how long are they using it for, all that sort of stuff. If we jump into the Metadata sheet, I've already got this filtered down on the application that we're looking at.
What sort of things stand out to you here? What should we be looking for?
Really, I want to be looking for things like, are there individual fields that are taking up a lot of memory? Like, are there any outliers that just immediately stand out to us? So, if we hover over these top two items in our Field Memory, our number one field is Date, which is always a bit of a red flag. So, what that usually means is that you've got a date with a time stamp that you may or may not need.
Okay, let me just make sure I understand what I'm looking at in that chart. It's showing you the amount of memory that each field in the app is using and the top two there are using more than most and the top one is actually just a date field?
Exactly. And the tone at the end of that sentence is correct. Yeah, certainly there would be occasions where this would make sense. Maybe the analysis I'm doing is down to the millisecond, but even then, what I would want to do is break it out into two fields. What you would want to check is that your date isn't a timestamped date-plus-time field, because that's going to be the least efficient for the engine.
Right. And this is an app that's looking at User Reviews. So the exact second that a review came in maybe isn't that valuable.
Exactly. Exactly. Exactly. Yeah. Which contrasts with the number two field, the Reviewer Tip Text, which is the actual review. This is the "I went, the salad was great, five out of five" text. It's not a situation where we're just going to chop everything down, but definitely that Date field is a red flag we should be looking at.
Okay. And just to understand some of the KPIs listed here at the top, we've got app RAM footprint. Could you explain what that means?
Yeah. So, the RAM footprint is the total “size of the application in RAM.” The larger that number is, the more taxing it is on the engine. Of course, sometimes you just need to analyze a lot of data and it is what it is. We can make some improvements to bring that number down and …
Okay.
As that number decreases the performance is going to increase.
That makes sense. All right. So it's helped us identify a field we should take a look at and also with that Performance Evaluation we've seen a sheet and more specifically an object we should take a closer look at.
Yep.
So is it time to jump into the app?
I think it is. This Reviews and Tips…
Was the sheet that took 45 plus seconds to…
Exactly. Yeah. So, let's just attack this head on. Now, this might be a little faster than 45 seconds because it was cached already. Although, it's not that much faster,
I'm assuming. But you can still tell that that one object in the middle is the one that was dragging it all down.
Yeah, exactly. Let's make a copy of this sheet here and take a look at it. Yeah, this is using a deprecated object.
On that note, I love how it says in the little warning there, a tip on how to update it. Please drag and drop a tab container over it to convert. I mean,
Yeah, it is really nice.
It's not just saying you have a problem, it also gives you a solution to the problem. I love that.
Right. Exactly. Yeah. Couple things to start investigating. We've got this table here.
Yeah, that was one of the slower ones.
We can take a look at, (you know), what are the contents of this table, right? A lot of different fields, but (you know), nothing looks too terrible. Actually, this Tips Measure is something that we'll want to address.
I have a question about this because my background is in Qlik Support and I just know from experience that IF statements tend to be kind of red flags. Would you agree with that? Are IF statements a problem when they're kind of in a nested expression like this?
They definitely can be. I think there's two things happening here. One, you've got some expressions that are less than ideal, right? And you're right, like an IF statement. To me, an IF statement is always an example of something that you could have done this on the back end.
Ah, right.
And actually, there are quite a few IF statements in here, so that's not good. Combined with that, there are so many records in this table that a small inefficiency becomes a big inefficiency. I'm showing every single user. I've got a ton of data in this, and even their User Yelping Since field is timestamped, which is… it doesn't matter that this person joined at 2:19 p.m. That's kind of a
And 33 seconds.
Right. Right. Right.
So there's a lot of improvements to be made just by moving some of these front-end things to the script. Is that what you're saying?
Yeah. So it's a two-pronged approach. First thing would be let's add in a Calculation Condition so that we don't have to load the whole table for this sheet to finish loading.
Right. So you don't have to load every record in it.
Exactly. (you know) You have to select something in the data model to then get this User Details table to load. We can do that easily under Add-ons, going into Data handling and adding in a Calculation Condition. Silly example: if I said the Calculation Condition is 1 equals 0, well, that's never true, and then this doesn't load. This table was taking 45 seconds to load; a Calculation Condition means that it doesn't load at all initially. So we'll put in a Calculation Condition that makes sense.
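As a sketch of the pattern described here: a Calculation Condition is just an expression that has to evaluate to true (non-zero) before the object calculates. The field name below is an illustrative assumption, not taken from the demo app.

```qlik
// Object > Add-ons > Data handling > Calculation condition.
// The table only renders once this evaluates to true.
// "ReviewStars" is an assumed field name:
GetSelectedCount(ReviewStars) > 0
```

GetSelectedCount() returns the number of selected values in a field, so this condition forces the user to narrow the data before the heavy table is computed.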
Okay.
And save us some time there. But then to your point, absolutely. There are things that we can do in the data model like clean up the Date field, migrate some of those IF statements into the load script to make it a little bit more efficient.
All right.
I will point out quick just one more example. This KPI object took a while to load.
Most Tips. So that's counting how many reviews this one user has provided. Is that what that is?
Basically, yeah, it's counting up an item and then saying: who's in first place.
Okay.
And if we look at the actual expression here, it's kind of a lot of stuff going on that is taxing for sure.
It's converting something to text, doing an Aggregation, a Count, and an IF statement. There's a lot going on there.
There is. It's an example of something that when the data model is small, no big deal. But when the data model starts to get large, small problems become big problems.
It's good to have a review like this, I think, on your apps, and go through and see what could be improved over time. So you have an improved version of this app, or you've already gone through a lot of these steps, right?
I do. Yeah. So let's go back out here. The Performance Evaluation is ultimately your verdict. Did the changes we made actually do anything? We can see a couple things. Number one, it finished. Right.
Yeah. Right. It doesn't have the big red error mark on it anymore.
Exactly. If we actually look at the results here, so we'll go back to that Public Sheets by Initial Load Time. We can see that the Reviews and Tips sheet went from almost 45 seconds down to just under 12. And things like that table are now 5.4 seconds instead of 45 seconds. So definite improvements.
That's great. I love being able to see this because you have an actual measurement to go by, solid facts that yes, now it is faster.
Exactly. And to your previous point about it being good practice to “prune your garden,” so to speak, you can also be proactive and do a Performance Evaluation and get a good benchmark and maybe start making improvements before people tell you that you need to.
That's next level there.
There you go.
Okay. So, yeah, let's take a look at the app and see what changes you made.
If we start in the load script, the first thing you'll notice is there are Counter fields. These are useful because it's much easier, of course, to just count a series of 1s than to do a Count Distinct on an ID field or something like that. That's a good small efficiency. You'll also notice that there are fields commented out here. If you don't need fields, you can always just comment them out, save the memory. It's very easy to come back in and uncomment it if you ever needed it. This is our biggest table in the model. Changes here. First and foremost, we got rid of the time stamp on the Date field. So, we floored the date. Floor is going to convert this to a numeric value. And the Date is just going to convert it back to a Date field. That's going to save us huge amounts of field memory that we saw from the App Analyzer.
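The date-flooring step described here might look something like this inside a LOAD statement (the field names are assumptions for illustration):

```qlik
// Floor() drops the fractional part of the date serial number (the
// time of day), and Date() formats the result back as a date field.
// "ReviewTimestamp" is an assumed field name.
Date(Floor(ReviewTimestamp)) AS ReviewDate,

// If the time of day were genuinely needed, keep it as a separate field
// rather than one high-cardinality timestamp:
// Time(Frac(ReviewTimestamp)) AS ReviewTime,
```

Splitting date and time this way collapses millions of distinct timestamp values down to a few thousand distinct dates, which is where the field-memory savings come from.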
All right. So, that converted that big long timestamp date to just a plain old day, month, year date?
Exactly. Similarly, a lot of those IF statements that we saw are now moved to the load script itself. Just like we saw up at the top, we had a Business Counter field. We now have a Review Counter, Tip Counter, Check-in Counter. This is a great way to take this stress of the front end, move it into the back end.
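A minimal sketch of the counter-field and migrated-IF pattern being described (the table name, field names, and file path are assumptions):

```qlik
Reviews:
LOAD
    ReviewID,
    BusinessID,
    // Counter field: Sum(ReviewCounter) on the front end is much
    // cheaper than Count(DISTINCT ReviewID).
    1 AS ReviewCounter,
    // An IF that used to live in a chart expression, pre-computed
    // once at reload time instead of per user interaction:
    If(IsTip = 'Yes', 1, 0) AS TipCounter
FROM [lib://DataFiles/reviews.qvd] (qvd);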
Yeah. And we saw it made some pretty dramatic improvements with the 45 seconds going down to 5.
Exactly.
That's great.
Yep. And then the same thing on the User Create Date there. And then a couple of final items here. Autonumbering ID fields is very useful. Basically, this is going to take an ID field, which might be a long alphanumeric field, and just Autonumber it. So the first value that appears is 1, the next value that appears is 2. What you're gaining there is the efficiency of having smaller values in memory while retaining the field relationships.
Right. So it provides the same function but with a much smaller data set?
Exactly.
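The AutoNumber step could be sketched like this (the field name is an assumption):

```qlik
// AutoNumber() replaces each distinct value with a sequential integer
// (first value seen becomes 1, the next 2, and so on), shrinking the
// key's memory footprint while preserving the associations between
// tables. "UserID" is an assumed field name.
AutoNumber(UserID) AS %UserKey,
```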
That's smart.
And then on the front end.
All right. So, like you said, it's got that message now: “Please make a selection.”
Exactly. Yeah. So, we're looking for a user to make a selection here. If I wanted to only look at reviews that got five stars, I can make a selection. And now, (you know), it's still going to take a second or two to load, but it's a lot faster because it's pulling in so much less data.
Nice. But everything, the whole sheet, loaded a lot quicker.
Absolutely.
The Date field I saw was much smaller. Those IF statements have been pulled out. It's just performing better. That's really cool.
It's a lot more efficient. It's a lot faster. And then, to just kind of show you exactly what we did: remember, talking about this earlier, I put in 1 = 0, which is of course not good. This is a much better statement here, right? Basically, get the Current Selections, and as long as it's something, display the table.
That's a great expression. So, as long as I've made a selection, any selection, it will populate, but otherwise it won't. Great.
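The "any selection at all" condition described here is roughly:

```qlik
// Calculation condition: GetCurrentSelections() returns a text summary
// of the current selections, so this is true as soon as the user has
// selected anything at all.
Len(GetCurrentSelections()) > 0
```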
Exactly. And then we saw this KPI up here, which was a little rough. We made it into a master measure, which is actually a really good thing to do, but let's show how we changed it. Before, we had a switch of a value into text and the IF statement in there; we got rid of those, and performance is much better.
Yeah, all that calculation moved to the back end. So, it's already done when this sheet loads.
Exactly.
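As a hypothetical reconstruction of the kind of change being described (these are not the exact expressions from the demo; all names are assumptions):

```qlik
// Before (front-end heavy): an aggregation over an IF inside Aggr(),
// evaluated row by row every time the KPI renders:
// FirstSortedValue(UserName,
//     -Aggr(Count(If(IsTip = 'Yes', ReviewID)), UserName))

// After: the flag is pre-computed in the script as TipCounter (1 or 0),
// so the chart only sums integers to find who's in first place:
FirstSortedValue(UserName, -Aggr(Sum(TipCounter), UserName))
```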
Okay. How does this look from the App Analyzer's perspective now?
Yeah, good question. That's kind of the final piece. And let me actually maximize this.
Sure.
So, we can see that the Review Text is still the same, but our Date field is now near zero. So, it's obviously still something, but it's dropped…
Dramatically.
Yeah.
It went from being the largest field in this app to the fourth smallest almost.
Exactly. Yeah. So, big improvement there. And if we look at the overall Base RAM, it used to be around 2.5 and now it's 1.3. Big improvement.
Yeah, that's almost half. That's huge. Wow. So those small changes, relative to the rest of the development of the app, have really improved performance, lowered the RAM value, and just made everything faster. That's incredible. And these two tools, the Performance Evaluation and App Analyzer, helped you identify where those changes could be made. Is there anything about the App Analyzer that it's important for people to be aware of?
I would just reiterate what you said before. It's always a good idea to check the Automation and make sure that there isn't a new version out there. There's a lot of great information beyond the Field Memory. And then one thing that we didn't talk about is the cardinality of a field. How many distinct values are in a field? The more distinct values there are, the less that field can be indexed down and made smaller in memory. Sometimes like Review ID, the fact that there's almost 10 million values, (you know), sometimes it is what it is. If I go back to the old app, the fact that Date had 26 million distinct values, once again, it's just another red flag. Look for things that don't make sense.
Yeah. When you first said that the biggest field was a date field, I was like, what? That doesn't seem to make sense at all. So.
Exactly.
Seeing those outliers and understanding why those are there. Like Review Text, of course, that's the meat of this app, analyzing those reviews, but Date shouldn't be that way.
Yeah, it's a good way to check and see if there's some improvement that can be made. And then, of course, yeah, when we look at the improved version, it's a totally different and better picture.
Awesome. Now, it's time for Q&A. Please submit your questions through the Q&A tool on the left-hand side of your ON24 console. Few questions have already come in so I'll just read them from the top. First question: how many reload tasks can Qlik Cloud run concurrently? So at the same time.
Yeah, it's a good question. So that is actually a function of your license. Qlik Cloud Analytics comes in a couple different tiers.
Right. So it depends what tier you're in?
Right. Right. Yeah. So my answer is: it depends. No, the highest-level tiers allow 30 concurrent reloads, and the starter tier allows 5 concurrent reloads.
And functionality-wise, if in that biggest tier you trigger 50 reloads all at the same time for some reason, they still just queue up and wait, right? Like, they don't fail automatically.
Yeah, great point. Yeah, things will just start queuing up and then as slots become available, the reloads will then trigger.
Great. Next question: For complex applications, we use entity-attribute-value modeling for our Qlik data models (so this is very specific) because it performs better and we make use of Set Analysis. Is this kind of modeling still appropriate for Qlik Answers and MCP server use with LLM vendors? Okay, so it's a bit off topic, but do you know the answer to that?
Yeah, so it's a really good question. Entity-attribute-value modeling is where your data model doesn't have a column for each dimension; the dimension is specified in a column. So you could think of it as: I've got a table where column one might say color, size, quantity, and then column two has the actual values for those things. So you can have a huge number of dimensional values in a pretty narrow table.
Mhm.
The answer to the question about Qlik Answers and MCP is that it's going to hinge upon how much semantic work has been done. So the more master measures you've made, the better Qlik Answers is going to work. All else being equal, a more "classic" data model, where each dimension has a column, will probably work better without more alterations. But I think if you put some work into the semantic layer, you should be able to get your models to work very well.
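For readers unfamiliar with the shape being described, an entity-attribute-value table versus a classic wide table looks roughly like this (illustrative values only):

```
Entity-attribute-value:            Classic wide table:

Entity | Attribute | Value         Entity | Color | Size | Quantity
SKU-1  | Color     | Red           SKU-1  | Red   | M    | 12
SKU-1  | Size      | M
SKU-1  | Quantity  | 12
```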
Great. Next question: If an app's size on-prem (so Qlik Sense Enterprise on Windows) is 1.7 GB, why could it grow to 3.6 GB when it's in the cloud?
Yeah, it's a good question and I'm going to attach a big caveat here, which is it really depends on the specifics of what this question means. So I might encourage you to reach out to your account team. But if for example you're talking about how Capacity works in the cloud. When you upload an application, the capacity that's consumed is the base RAM size of that application. But then when you start to reload that application from the cloud, the capacity consumed is the data and the files that you use. So it very much depends on some of the nuances of that question. So I'd encourage you to reach out to your account team.
Very good answer. Yeah, your account manager will be able to help you more specifically. Next question: Is making calculations in the script versus in the chart expression a preferred best practice to improve performance?
Great question. Yes, I'm sure there are exceptions that prove every rule, but like we saw in our exercise, taking those IF statements out of the front end and moving them into the load script was very, very beneficial for front-end performance. Generally speaking, it is almost always better to do calculations on the back end.
Great. Next question: I like your recommendation of requiring a selection before loading a large table. I would like to improve the opening time for an app. Is there still a way in Qlik Cloud to preload the app like in the on-prem version or any other ideas to increase opening speed?
Yeah, so in the on-prem version, I believe we called this cache warming where you would have a pre-cached version of the application. I don't believe that there is a way to do that in the cloud version. The one thing I can say and this is a change that came out fairly recently. If and only if, so caveat there at the top, you have Large App Capacity in your tenant, there is now a way to assign a larger than typical engine permanently to an application. So previously the way it would work is if you have large app capacity, a larger engine will get assigned when you need it, when you cross that threshold. Now, even if the base RAM size itself doesn't necessitate that I have that larger engine, I can assign that larger engine because I'm eligible for it and I want the better performance that it gives me.
Great. Next question: How much of a performance impact is having a very spread out data model, tables linked to tables, link to tables, etc. versus one main link table?
This is the age-old question, right? Do you normalize or don't you? I think the answer is it's marginal. It's funny. I started this by talking about how I used to be a Qlik customer. Now I work for Qlik. I was always told the most efficient data model for front-end performance would be a single table. I've also heard people say that that's not the case. So my honest advice would be it's going to be a marginal difference one way or another. I would encourage you to do it in a way that is going to lend itself to you coming back and making improvements over time.
That makes sense. Make it work for you.
Exactly.
That's good to know that it's a marginal difference. That's cool. Next question: In Qlik Cloud, can app performance be adversely affected if large numbers of users are opening the app at the same time, all with their view limited by Section Access? Or would it perhaps be better to have a reduced app with specific data for each user?
Yeah, good question. That to me comes back to the horizontal scaling. So, one of the great things about Qlik Cloud is the fact that you don't have to be afraid of an app being successful. So, my answer to this would be it should not be adversely affected by large numbers of users using the app at the same time with or without section access. Because of that horizontal scaling, we will deploy more engines as needed when the usage increases. Now, you can use things like loop and reduce to do what this question is sort of implying like I'm going to spin up many versions of an application with subsets of data, but my personal preference is to always use section access and a single application.
Okay, that's a nice advantage of having apps like that on Qlik Cloud then; it doesn't affect the performance. Great. Next question: Where can I find the documentation on best practices like these?
Yeah, always a good question. On our site, we have an Optimizing App Performance section of our Help site. Ton of great information. It's a lot of what we covered here. If you're interested in app optimization in particular, definitely give this page a look.
Great. Thank you. And last question we have time for today: What are some app development best practices for performance?
Yeah, I mentioned some, and definitely didn't mention all of them. Some of these we talked about: things like Autonumbering fields, what we did to the Date field, avoiding Select * statements, don't bring in all the data if you really don't need to. You can see, by the way, on the model side, star schema is fine. Using QVDs is something that we didn't talk about at all, but it's actually very worthwhile, because QVDs are read-optimized. So the actual load speed is going to be increased when you use QVDs. And then on the front end, a lot of these are things that we talked about, but offloading calculations to the load script where you can is probably your biggest improvement. And then looking for expressions that are a little out of control, right? We had the one where we were using AGGR statements with IFs. Those sorts of things have their place, but it's definitely best to avoid them if you can.
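The QVD and explicit-field-list advice can be sketched as follows (the library path, table name, and field names are assumptions):

```qlik
// Store the table once as a read-optimized QVD:
STORE Reviews INTO [lib://DataFiles/reviews.qvd] (qvd);

// Subsequent loads name the needed fields explicitly (no SELECT *)
// and read from the QVD, which is much faster than re-querying
// the original source:
Reviews:
LOAD ReviewID, BusinessID, ReviewStars
FROM [lib://DataFiles/reviews.qvd] (qvd);
```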
Those are great tips, and I'm sure they'll be really helpful for people. And it's great seeing it demonstrated: you had that complicated one with lots of calculations going on, you moved it to the script, and suddenly everything improves. That was great. Well, Eamonn, thank you so much for this. I really appreciate the time going through and actually seeing all these ideas and practices applied to an app, and seeing the dramatic improvement that it can make on just a simple app. That was huge. I'm sure this will help people in the future, including learning about what tools are available. So, thank you so much.
Yeah, absolutely. Thanks for having me, and especially thank you to everyone who attended. Hopefully, it was worth your time. Thank you for the questions. I would certainly encourage everyone to take a look at the Performance Evaluation. Take a look at the App Analyzer and the monitoring apps as a whole. And certainly, if you've got a need to optimize your apps, the tools are in front of you. And happy hunting, so to speak.
Thank you everyone. We hope you enjoyed this session. And thank you so much to Eamonn for presenting. We always appreciate having experts like Eamonn to share with us. Here's our legal disclaimer. And thank you once again. Have a great rest of your day.
