This package (referred to as Qlik Sense Scalability Tools) contains a complete set of tools for easy creation, execution and analysis of load/performance tests.
This tool is now deprecated and will not receive any further updates; please use the Qlik Sense Enterprise Scalability Tools instead.
Supported versions of Qlik Sense: all 2020 releases, all 2021 releases, and August 2022
Included parts are:
QlikView and Qlik Sense documents to help analyze result and log files (previously included in this package) can be found here: https://community.qlik.com/docs/DOC-15451
Troubleshooting
For help troubleshooting connection problems, please review Appendix A of the documentation or the Connection Troubleshooting Tips
Change log
v5.17.0
v5.16.0
v5.15.0
v5.14.0
(See Readme.txt for changes in earlier versions of the tool.)
Your use of the Qlik Sense Scalability Tools will be subject to the same license agreement between you and Qlik for your Qlik Sense license. Qlik does not provide maintenance and support services for the Qlik Sense Scalability Tools; however, please check QlikCommunity for additional information on the use of these products.
Thank you very much.
Good tool. I'll put it to use.
I need to determine how much CPU is needed for 1000 concurrent users. How do I properly perform load testing and determine the required CPU capacity?
We have trial licenses for Qlik Analytics Platform with several different CPU core configurations.
We also have already developed applications.
Using the Scalability Tools, we specify 1000 users and have defined application test scenarios. However, we suspect that as each new user connects, the previous ones simply hold their connections while only the new user runs the script. As a result, the script is never executed by all users simultaneously.
How long is your scenario, and what is your Iterations parameter set to? The specified number of users is kicked off at a pace determined by the RampupDelay, and each executes the specified number of iterations of the scenario. This means that if the scenario is sufficiently short and iterations are low, the first user may be done with the scenario before the last user even enters the simulation. Upping the Iterations value allows you to prolong the scenario by looping it.
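To make the interaction between these parameters concrete, here is a rough back-of-the-envelope sketch (not tool code; the function names and example numbers are hypothetical) showing when full concurrency is actually reached:

```python
# Rough concurrency check for a load test (a sketch, not Scalability Tools code).
# Assumptions: RampupDelay is the delay in seconds between user starts,
# scenario_duration_s is one pass through the scenario, iterations loops it.

def last_user_start(num_users: int, rampup_delay_s: float) -> float:
    """Time at which the final simulated user enters the simulation."""
    return (num_users - 1) * rampup_delay_s

def first_user_finish(scenario_duration_s: float, iterations: int) -> float:
    """Time at which the first user completes all its iterations."""
    return scenario_duration_s * iterations

def full_concurrency_reached(num_users: int, rampup_delay_s: float,
                             scenario_duration_s: float, iterations: int) -> bool:
    # All users are active at once only if the first user is still
    # iterating when the last user starts.
    return first_user_finish(scenario_duration_s, iterations) > \
        last_user_start(num_users, rampup_delay_s)

# 1000 users, 1 s ramp-up, 120 s scenario, 1 iteration:
# the first user finishes at 120 s, long before the last user starts at 999 s.
print(full_concurrency_reached(1000, 1.0, 120, 1))   # False
# With 10 iterations, early users stay busy until everyone has started.
print(full_concurrency_reached(1000, 1.0, 120, 10))  # True
```

In other words, pick Iterations so that scenario length times iterations comfortably exceeds users times RampupDelay.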
Thank you very much.
We apply your recommendations.
Is there any plan to support TED certified extensions like Vizlib filter and table objects?
Hi,
No, there's no current plan to add official support for such extensions beyond the semi-support that already exists, i.e. we do a "GetLayout" for each extension object.
For some extensions the Qlik Sense Enterprise Scalability Tools might help, as it supports adding your own request flow for any extension, provided it is of a simpler form (i.e. no session objects, no navigation, and no advanced logic such as automatically adding selections). If this is enough for your use case, you can read about it in the section "Supporting extensions and overriding defaults".
Hi there,
I have the 5.2.0 version of the tool as well as 5.5.1.
We use Qlik Sense April 2018 version in a two node environment.
Would there be any benefit in using 5.5.1 version of the tool compared to 5.2.0?
It might sound like a silly question, but I have used 5.2.0 with this Sense version before in our environment, so I'm wondering whether to stick with it or use the new one.
If you are using Sense April 2018 there's no strict reason for you to move to 5.5.1, though 5.5.1 still includes support for April 2018. You will mostly be missing out on some bug fixes and somewhat more correct handling of boxplot autocharts (chart suggestions), but since autocharts are not fully supported and the recommendation is to turn them off for proper test results, that shouldn't make a difference.
Hi all,
I am looking to do 350 concurrent user testing on 3 dashboards.
App sizes: 3.5 GB, 500 MB, and 800 MB.
Is it possible to run multiple tests at once using the tool, with all of these apps being consumed simultaneously by a total of 350 concurrent users?
E.g. I want to split the test so that 150 concurrent users are on the 3.5 GB app, 100 on the 500 MB app, and 100 on the 800 MB app, all at the same time.
They are different dashboards so there are different scripts for it.
How much should I set my ramp-up delay to for something like this?
And Iterations value?
And ExecutionTime?
Any advice on how best to set up a test like this?
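While waiting for advice, one way to reason about the settings is to ramp all three tests over the same time window. The sketch below (hypothetical helper and numbers, not official guidance from the tool) derives a per-app RampupDelay and a minimum Iterations value so that the first user in each test is still looping when its last user starts:

```python
import math

def plan(users_per_app: dict, ramp_window_s: float, scenario_len_s: dict) -> dict:
    """For each app, spread its users over the shared ramp window and
    compute the minimum iterations so the first user outlasts the ramp-up."""
    settings = {}
    for app, users in users_per_app.items():
        delay = ramp_window_s / users                          # seconds between user starts
        min_iters = math.ceil(ramp_window_s / scenario_len_s[app]) + 1
        settings[app] = {"RampupDelay": round(delay, 2), "MinIterations": min_iters}
    return settings

# Hypothetical: a 10-minute ramp window, with per-app scenario lengths in seconds.
print(plan({"app_3_5GB": 150, "app_500MB": 100, "app_800MB": 100},
           ramp_window_s=600,
           scenario_len_s={"app_3_5GB": 180, "app_500MB": 120, "app_800MB": 120}))
```

ExecutionTime (if used instead of a fixed iteration count) would then simply need to exceed the ramp window by enough margin to observe steady-state load.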