Gary Strader wrote:
Version 0.5 added two features we needed - text object interactions and native support for Basic Authentication, including setup instructions. Thank you!
Request - please add release notes for each new version showing what's new or changed from the previous version.
Thank you for your suggestion. A brief changelog has been included in the intermediate 0.5.4531 release, located in the root folder, and it will also be included in future releases.
I have an idea for a feature enhancement: random generation of script actions. The user sets some high-level parameters, such as the total number of actions, and the tool automatically creates a set of actions based on the document XML. It would have to make sense - for example, it would only select from objects in the currently selected tab. But I think the logic for that is already there.
This might be useful for:
- Randomized testing
- Multiple user simulation (users rarely follow same click path)
- Cache avoidance
- Quick and dirty load simulation when the action sequence doesn't matter
- Automated testing change management - no need to update action sets manually when source app changes
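The idea above could be sketched roughly as follows. This is a hypothetical illustration only: the XML element and attribute names, the sheet/object structure, and the `random_actions` helper are all assumptions for the sketch, not the actual QlikView -prj schema or anything the tool provides.

```python
import random
import xml.etree.ElementTree as ET

# Hypothetical fragment of a -prj document XML; the element and attribute
# names here are invented for illustration, not the real QlikView schema.
PRJ_XML = """
<Document>
  <Sheet name="Main">
    <Object id="CH01" type="chart"/>
    <Object id="LB02" type="listbox"/>
    <Object id="TX03" type="textobject"/>
  </Sheet>
  <Sheet name="Details">
    <Object id="CH04" type="chart"/>
  </Sheet>
</Document>
"""

def random_actions(xml_text, sheet, total_actions, seed=None):
    """Pick `total_actions` random click targets from objects on one sheet,
    so the generated actions stay within the currently selected tab."""
    rng = random.Random(seed)
    root = ET.fromstring(xml_text)
    objects = [o.get("id")
               for s in root.iter("Sheet") if s.get("name") == sheet
               for o in s.iter("Object")]
    return [("click", rng.choice(objects)) for _ in range(total_actions)]

# The user-supplied high-level parameter is just the action count.
actions = random_actions(PRJ_XML, "Main", total_actions=5, seed=42)
```

Because only objects from the chosen sheet are candidates, every generated action targets the current tab, which is the "it would have to make sense" constraint.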
Thanks for the input. We have in fact discussed similar features within the team. It's a good idea, and if it proves valuable we will keep it in mind for future development.
The ideas keep coming.
I would like test users to create, modify, and delete collaboration objects. I realize this is probably a lot more complicated because it's a run-time versus design-time configuration, so it can't rely on the -prj source XML. Is this feasible?
I've been wanting to do regression testing on QlikView applications for years, honestly. I am going to start using this immediately and see whether it meets my needs.
REALLY nice that you put together a lengthy regression testing PDF.
I am out of the office until 25/03/2013.
Note: This is an automated response to your message "[QlikView Scalability] - Tool for easy creation of load/performance tests of QlikView (v.10 and 11)" sent on 19/03/2013 21:08:16.
This is the only notification you will receive while this person is away.
In QVScalabilityTools.pdf, a known limitation is mentioned: "One script simulates requests to one QlikView document, except when reduced document functionality is used. Running a test against multiple different documents will require multiple scripts, where each script needs a separate JMeter instance".
Would this mean that testing 100 simultaneous users accessing different documents requires 100 JMeter instances to be run?
Is that a limitation of the QVScalability tool, or a limitation of JMeter?
Does JMeter support scalability testing of a QlikView server with simultaneous users accessing different documents from one JMeter instance, without using reduced document functionality?
The limitation is mainly in the QVScalability tool. In the current implementation, each scenario/script targets one document only (except in the case of reduced documents).
Note that there is a difference between users and scripts/scenarios: 100 users running one scenario require only one instance, while 100 users divided between 10 documents (with a different scenario for each) will require 10 instances.
It is possible to run multiple scenarios from one instance of JMeter, but it will require manual tweaking of some settings for it to work properly without the scenarios "interfering" with each other.
In general, it is advised to run multiple instances of JMeter when load towards multiple documents is to be simulated. It is simpler, and a safer bet that it will work (due to the shared settings mentioned above). There is also no real downside to running multiple instances; in fact, this is recommended when a large number of users is to be simulated.
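The one-instance-per-scenario advice above can be sketched as building one JMeter command line per document scenario. The scenario file names below are made up for illustration; `-n` (non-GUI mode), `-t` (test plan) and `-l` (results log) are standard JMeter CLI options.

```python
import shlex

# One hypothetical .jmx scenario per QlikView document (names invented).
scenarios = ["doc_sales.jmx", "doc_finance.jmx", "doc_hr.jmx"]

# Build one non-GUI JMeter command per scenario; each would run as its
# own process, i.e. one JMeter instance per document.
commands = [
    "jmeter -n -t {} -l {}".format(
        shlex.quote(jmx), shlex.quote(jmx.replace(".jmx", ".jtl"))
    )
    for jmx in scenarios
]
```

Each command could then be launched with something like `subprocess.Popen(shlex.split(cmd))`, giving the separate instances the thread recommends instead of tweaking shared settings inside a single instance.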
Thank you for the quick reply and clarifications! I appreciate it.
1. One question based on your statement that "100 users running a scenario require only one instance": if we need to run the same scenario against 100 different documents, would one instance of the tool be able to run it simultaneously against all 100 documents?
2. Where can I find information on the settings to tweak if I need to run multiple scenarios from one JMeter instance without the scenarios interfering with each other?
3. I understand it is good to run multiple JMeter instances for multiple documents and larger numbers of users, but QVScalabilityTools.pdf mentions that each JMeter instance consumes 3 GB of RAM, so using 100 different documents would need 300 GB of RAM.
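The memory concern in point 3 is straightforward arithmetic, shown here as a back-of-envelope check. The 3 GB per instance figure is the one quoted from QVScalabilityTools.pdf; the instance count assumes one instance per document, per the earlier replies.

```python
# One JMeter instance per document, at roughly 3 GB each
# (figure quoted from QVScalabilityTools.pdf).
documents = 100
ram_per_instance_gb = 3

total_ram_gb = documents * ram_per_instance_gb
print(total_ram_gb)  # 300
```

This is why running against many documents from one machine becomes impractical, and why the instances would typically be spread across several load-injector machines.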