Daniel_Larsson
Employee

Qlik Sense Scalability Tools

This package (referred to as Qlik Sense Scalability Tools) contains a complete set of tools for easy creation, execution and analysis of load/performance tests.

This tool is now deprecated and will not receive any further updates. Please use the Qlik Sense Enterprise Scalability Tools instead.

Supported versions of Qlik Sense: all 2020 releases, all 2021 releases, and 2022 releases up to August 2022

 

Included parts are:

  • Standalone application for creating and executing a simulation script
  • Documentation on how to use the package
  • Regression analyzer
  • Benchmarking package
  • App evaluator package


QlikView and Qlik Sense documents to help analyze result and log files (previously included in this package) can be found here: https://community.qlik.com/docs/DOC-15451

 

Troubleshooting

For help troubleshooting connection problems, please review Appendix A of the documentation or the Connection Troubleshooting Tips.

 

Change log

v5.17.0

  • Add support for Qlik Sense May 2022 release
  • Add support for Qlik Sense Aug 2022 release

v5.16.0

  • Add support for Qlik Sense Feb 2022 release

v5.15.0

  • Add support for Qlik Sense Nov 2021 release

v5.14.0

  • Add support for Qlik Sense Aug 2021 release

(See Readme.txt for changes in earlier versions of the tool.)

 

Your use of the Qlik Sense Scalability Tools will be subject to the same license agreement between you and Qlik for your Qlik Sense license. Qlik does not provide maintenance and support services for the Qlik Sense Scalability Tools; however, please check QlikCommunity for additional information on use of these products.

199 Replies
Levi_Turner
Employee

@mwallman: You can use the Test Scheduler functionality to execute multiple tests. In that config you can set any number of offsets that you desire:

[Image: test_scheduler.png]

As far as ramp-ups etc. go, the most obvious approach is to build out your test plans and then do the maths to figure out the values. To take a trivial example, you would want your Worker Settings --> Rampup Delay to be lower than the total time it takes for the first user to execute the test. Otherwise you would be unable to get concurrency.
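
To put that rule into numbers, here is a minimal sketch (plain Python, not part of the tool; the function name and example figures are purely illustrative) of how much overlap a given Rampup Delay can produce when each user runs the scenario only once:

```python
# Rough estimate of peak overlapping users during ramp-up, assuming each
# simulated user runs a single pass of the scenario and then stops.
def peak_overlap(scenario_seconds: float, rampup_delay: float, users: int) -> int:
    if rampup_delay <= 0:
        return users  # no delay: everyone starts (and overlaps) at once
    # User k starts at (k - 1) * rampup_delay; user 1 is active until
    # scenario_seconds, so only users starting before that point overlap.
    return min(users, int(scenario_seconds // rampup_delay) + 1)

# Example: a 10-minute scenario with a 30 s ramp-up never has more than ~21
# users active at once during ramp-up, however many users are configured.
print(peak_overlap(scenario_seconds=600, rampup_delay=30, users=250))  # 21
```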

mwallman
Creator III

Hi @Levi_Turner 

Thanks for the information.

Does the below look OK as a set-up?

I always get confused about how long the ramp-up delay should be. Currently I set it to 15 seconds.

My execution time is based on 250 users x 15 seconds rampup delay = 3,750 seconds, and I rounded it up to 4,000 as extra cover.

[Image: 11Capture.PNG]

The test takes around 3 minutes to complete. It's a typical test of a user opening a dashboard sheet, thinking, making selections, thinking more, clearing selections, going to the next sheet, selecting some more, etc. The last step is think time after making one or two selections.
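
As a quick sanity check of the numbers above (a sketch only, assuming the goal is just to let every user start and finish at least one pass of the ~3-minute scenario; the helper name is illustrative):

```python
# Minimum execution time for every user to start and complete one pass:
# the last user starts after (users - 1) * rampup_delay seconds and then
# still needs one full scenario length.
def minimum_execution_time(users: int, rampup_delay_s: float,
                           scenario_s: float) -> float:
    return (users - 1) * rampup_delay_s + scenario_s

# 250 users, 15 s ramp-up, ~180 s scenario -> about 3915 s, so rounding the
# execution time up to 4000 s leaves roughly 85 s of headroom.
print(minimum_execution_time(250, 15, 180))  # 3915.0
```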

jblomqvist
Specialist

Hi all,

Nice tool.

I am new to this tool, so I'm hoping someone knowledgeable can help with the questions below. They are mainly about understanding the results and setting up tests.

1)

In the results folder I am trying to understand the data.

What does the ResponseTime column represent? Does it mean the response time of all the objects in the sheet after an action on the sheet?

I.e. after a select action, does the ResponseTime column represent how long the sheet on which the action was executed takes to fully load everything on the sheet?

2)

If you are writing a test based on ScenarioWorker settings, does each session represent a scenario being carried out by a worker?

So for example, if I have a scenario with 30 steps to interact with an app in various ways, a session would be completing this scenario.

What determines how many sessions will be carried out? I think it will be the ExecutionTime value but not sure.

3)

I suppose the limitation of ExecutionTime is that maybe some of the sessions will not fully finish the scenario? I.e. some sessions will end without completing all the steps?

Is there a way to set up the execution so that you could have 100 concurrent users and the sessions are completed by every user?

Daniel_Larsson
Employee
Author


1)

In the results folder I am trying to understand the data.

What does the ResponseTime column represent? Does it mean the response time of all the objects in the sheet after an action on the sheet?

I.e. after a select action, does the ResponseTime column represent how long the sheet on which the action was executed takes to fully load everything on the sheet?

The response time is the time in milliseconds between the first request sent and the last response received within an action. Exactly which requests are included depends on the action. For a change sheet action it spans getting the sheet object, getting all objects and sub-objects on the sheet, and getting layout and data for all objects on the sheet. For a select it spans sending the select request until all subscribed object data has been updated, etc.

2)

If you are writing a test based on ScenarioWorker settings, does each session represent a scenario being carried out by a worker?

So for example, if I have a scenario with 30 steps to interact with an app in various ways, a session would be completing this scenario.

What determines how many sessions will be carried out? I think it will be the ExecutionTime value but not sure.

Not entirely sure what you are asking for here, but the complete scenario will be carried out by each simulated user (unless stopped by execution time or error) with different randomization.

The number of users simulated is defined by a combination of iterations and concurrentusers, so with 4 concurrent users and 2 iterations you would simulate 8 users. There is also a flag called NewUserForEachIteration; if this flag is set to false (the default is true), you would instead simulate 4 users doing the scenario twice each.
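
As a small sketch of that arithmetic (plain Python, purely illustrative; the helper name is not part of the tool):

```python
# Total number of simulated users, as described above.
def total_simulated_users(concurrent_users: int, iterations: int,
                          new_user_for_each_iteration: bool = True) -> int:
    if new_user_for_each_iteration:          # default: every iteration is a new user
        return concurrent_users * iterations
    return concurrent_users                  # same users repeat the scenario

print(total_simulated_users(4, 2))           # 8 distinct users
print(total_simulated_users(4, 2, False))    # 4 users, 2 iterations each
```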

3)

I suppose the limitation of ExecutionTime is that maybe some of the sessions will not fully finish the scenario? I.e. some sessions will end without completing all the steps?

Is there a way to set up the execution so that you could have 100 concurrent users and the sessions are completed by every user?


The test stops when the first of iterations or executiontime is reached. I.e. if you want to make sure that all users finish the entire scenario, you can set executiontime to -1 (infinite) or, as a fail-safe, much higher than executing the entire script should take; then it will not stop until all concurrent user threads have finished all their iterations. Setting both executiontime and iterations to -1 gives you an infinite execution, i.e. it does not stop until you stop it manually. So in your case, if you want to simulate a total of 100 users, set concurrentusers to 20, iterations to 5 and executiontime to -1.

Concurrentusers here means the number of users with an actually active connection at any given time. So if by concurrent you mean concurrent over an hour or so, that is not the same thing; e.g. 120 users over an hour, with a scenario length of 10 minutes, would be 20 concurrentusers and 6 iterations.
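
Expressed as a small sketch (illustrative Python helper, not part of the tool), converting an hourly user target into those two settings:

```python
# Translate "users per hour" into concurrentusers and iterations for a test
# that should run for roughly one hour, following the example above.
def settings_for_hourly_load(users_per_hour: int, scenario_minutes: float):
    concurrent = max(1, round(users_per_hour * scenario_minutes / 60))
    iterations = round(users_per_hour / concurrent)
    return concurrent, iterations

# 120 users over an hour with a 10-minute scenario
print(settings_for_hourly_load(120, 10))  # (20, 6)
```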

jblomqvist
Specialist

Hi @Daniel_Larsson , useful info!

How does the below set-up look to you? Is there anything you would change?

My goal is to test having 300 concurrent users using a particular dashboard. This is to answer whether 300 unique users can open and interact with the dashboard concurrently.

I have prepared a scenario with 52 steps that takes about 11 minutes to complete.

The actions for the scenario are quite simple: open the dashboard, make selections, think, clear all, go to the next sheet, select, think, etc. The main actions are select, clear all, and change sheet. Timer delays are added after select or change sheet actions. No other major actions.

Does the below look OK based on my info above?

 

[Image: SetUpTest.PNG]

I have created this based on other screenshots and info in this thread. Not sure if I have built it correctly.

Daniel_Larsson
Employee
Author

As long as the server is dimensioned to handle 300 concurrent users at 11 min each (~1636 users/h), the settings look fine. Some points though:

* With iterations -1 and executiontime 6000, not all users will completely finish their scenarios (as per your question before).
* I would also make use of "AfterIterationWait". If it is not used, a user that gets an error will instantly try to re-connect. Setting AfterIterationWait to add a delay on error, or to keep the minimum session length at your 11 minutes, would avoid the risk of "spamming" reconnects in case of many errors when you have this many users.

jblomqvist
Specialist

Hi @Daniel_Larsson 

Thanks for the info.

1) Instead of iterations -1, what should I set it to in order to make sure all users complete their scenarios?

2) What does AfterIterationWait do? Does it mean an iteration of the scenario? What kind of errors might a user get?

3) How seriously should you take the ResponseTime value from the results? I am noticing that in sheets with lots of IF-statement-based calculations, or sheets with big straight tables, the response time is long.

Before using the tool, testing manually, some of these sheets with straight tables take some time to load even as a standalone user, so when 100 or so users are pummelling this sheet the CPU really goes up and stays at around 90%.

If you have any feedback around this I appreciate it.

Daniel_Larsson
Employee
Author

Hi,

Sorry for the late answer.


1) Instead of iterations -1, what should I set it to in order to make sure all users complete their scenarios?

You would need to set a finite number of iterations. I.e. if your scenario usually takes 10 min for a user, running 5 iterations for 1 user should take 50 min; you can then set executiontime to either -1 or something longer than you expect it to take, e.g. 2 hours as a failsafe. When running concurrent users you would have to add rampup x (concurrentusers - 1) to this, i.e. with 300 concurrent users and a 7 s rampup, 299 x 7 = 2093 s ~ 35 min, and adding this to 50 min gives an expected scenario length of 85 min. With a set number of iterations and no executiontime forcibly stopping it, all error-free users would finish their scenarios.
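
A rough sketch of that estimate (plain Python, illustrative only; the helper name is made up):

```python
# Expected wall-clock length of a test with a finite number of iterations:
# the last user starts after (concurrent_users - 1) ramp-up delays, and
# every user then runs the scenario `iterations` times.
def expected_test_minutes(concurrent_users: int, rampup_s: float,
                          scenario_min: float, iterations: int) -> float:
    rampup_min = (concurrent_users - 1) * rampup_s / 60
    return iterations * scenario_min + rampup_min

# 300 concurrent users, 7 s ramp-up, 10-minute scenario, 5 iterations
print(round(expected_test_minutes(300, 7, 10, 5)))  # ~85 minutes
```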

2) What does AfterIterationWait do? Does it mean an iteration of the scenario? What kind of errors might a user get?


It inserts a wait time in between iterations, and yes, one iteration is one run-through of the scenario for one user.

3) How seriously should you take the ResponseTime value from the results? I am noticing that in sheets with lots of IF-statement-based calculations, or sheets with big straight tables, the response time is long.

Before using the tool, testing manually, some of these sheets with straight tables take some time to load even as a standalone user, so when 100 or so users are pummelling this sheet the CPU really goes up and stays at around 90%.


This is the minimum time it will take for a user (plus rendering time etc.). If there are a lot of calculations to be done and these calculations take a long time, any subsequent user would need to wait until the calculations are finished and cached. To try to optimize the calculations done in the objects you can: 1. restart the engine (to clear the cache), 2. run a 1-user test, 3. make changes to the calculations, 4. restart the engine, 5. run the 1-user test again, 6. compare the results.

If the calculations are only heavy the first time, for the first user, many use pre-caching scripts which run after a reload. When testing, you could run a one-user test and then run the full test after that.

korsikov
Partner - Specialist III

Hi to all.

What about Cyrillic characters in sheet and object names for the "ComplexityIndexWorker" job?

All names are resolved like ??????????

Patrick1210
Partner - Contributor II

Hi all,

I am working on setting up the Scalability tool in a Windows environment, with a multi-node solution (2 nodes for 2 QS Engines, 1 QS Repository Database, 1 QS Scheduler & QS Central node).
I am quite new to this approach and am looking for best practices.
I haven't found all the answers to my questions on the web, hence some questions in this topic.

Thanks in advance for your help.

- Would you recommend installing the Scalability tool on one of the Engine nodes, where it will be closer to a user simulation? In this case, should the virtual proxy still be customised? As the Scalability tool is on the Engine node I tend to say no, but is that the case?

- In order to use the pre-caching option, should an admin service account be created & dedicated to it, in order to get a better audit of who accesses the apps?

- Is there any option to create a fail-over solution in case the tool breaks down?

- Within this caching option, how can I be sure the jobs have run? Is there any log confirming this, as well as performance records between two openings of dashboards?