Using QlikView 10 SR1, we are having a major issue when concurrent users access a shared QlikView document that has approximately 120 million rows of data behind it.
The document features a very wide table with multiple formulas in it, and the issue is most apparent when each user tries to access this table with different selection criteria that each return a significant percentage of the data.
When this happens, one of the selections is processed whilst the other has to wait for the first to finish. This chaining of selections results in the waiting selection taking over 10 minutes to run, even if that selection is cached (in single-user use it takes seconds to run once cached).
Has anyone else experienced similar problems, and what can we do to resolve it aside from reducing the dataset, which is not really an option? We are really struggling to understand how QlikView can handle significant amounts of data on anything short of a Cray supercomputer. The server we are using has 96GB RAM and 4 dual-core processors, which is proving insufficient for QlikView to handle this amount of data in a multi-user environment (and by multiple users we mean any more than 1: the issue is apparent with just 2 users, let alone the 20-plus concurrent users we will have when fully live).
Thank you
Hi, I know this is a very difficult matter.
You said you can't reduce the dataset. Do you mean globally? Could you create an initial data reduction based on Section Access, or have you already discarded this option?
Do all your users need to see all the details? Maybe you could remove some fields for some users using OMIT (in Section Access as well); see the sketch below.
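For illustration only, a minimal Section Access sketch (the REGION reduction field, the CostDetail field and the account names are assumptions for the example, not taken from your document):

Section Access;
LOAD * INLINE [
    ACCESS, NTNAME, REGION, OMIT
    ADMIN, DOMAIN\QVADMIN, *,
    USER, DOMAIN\ANALYST1, NORTH, CostDetail
    USER, DOMAIN\ANALYST2, SOUTH,
];

Section Application;
// The fact table must also contain a REGION field with values in upper case,
// and "Initial Data Reduction Based on Section Access" must be enabled under
// Document Properties > Opening for the per-user reduction to take effect.
// Note: * means "all values listed elsewhere in this table", not all values in the data.

With this, ANALYST1 only gets the NORTH rows and never sees the CostDetail field at all, so both the row count and the table width are cut before any chart is calculated.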
Maybe some users analyse the data at a higher level. In that case, I would develop another app where the information is pre-aggregated in the script (even a more analytical user could use this second app when they need a faster, more superficial analysis); a sketch follows.
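As a rough sketch of what I mean (the field names and the QVD path are assumptions for illustration), the second app's script could roll the 120 million detail rows up to a coarser grain, e.g. month / region / product group, so its charts only ever work against a few thousand rows:

// Load the detail once, then aggregate it and drop the detail table.
Detail:
LOAD
    Region,
    ProductGroup,
    MonthStart(OrderDate) as OrderMonth,
    SalesAmount,
    Quantity
FROM SalesDetail.qvd (qvd);

AggSales:
LOAD
    Region,
    ProductGroup,
    OrderMonth,
    Sum(SalesAmount) as SalesAmount,
    Sum(Quantity)    as Quantity
RESIDENT Detail
GROUP BY Region, ProductGroup, OrderMonth;

DROP TABLE Detail;

If the source is a database rather than a QVD, the same aggregation could of course be pushed into the SQL SELECT instead, so the detail never has to be loaded at all.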
I'd like to know your thoughts on this.
Rgds,
Have you reported this issue to QT support? It sounds like a significant issue that should be looked at.
I agree with Rob, and when you do submit the question, please send a copy of the document so that someone can dig into the table.
There are always a lot of things that can be done from a document optimisation point of view, and a very wide table with many formulas sounds like a good place to start looking. Without seeing the actual document, there's not much anyone can say about it.
This has been raised with QlikTech support and we hope to get an update today, so I will keep you posted. It would seem, though, for us (and so far we can find no other user who experiences this issue) that no matter what the report or data, if a progress bar is required to display the data then it is queued. In other words, with large datasets QlikView carries out the processing sequentially. What I was really trying to establish was whether anyone else has experienced this and, if so, what steps they took to resolve it.
Hi,
did you disable CPU hyper-threading? It sounds odd, but it should be OFF for best QlikView performance.
-Alex
www.snowflakejoins.com
We do already have hyper-threading switched off, but as it turns out QlikTech have now confirmed the software is working as designed: whilst a large data request is running (the definition of "large" has not been confirmed), no other large or medium request can be processed; they are queued. (Small data requests can still be processed.)
I am surprised that no one else has encountered this issue.
Can you post the text of the support response here? (If it says anything more than you've already said).
The response from Qliktech was as follows:
QlikView categorises the objects it needs to calculate into Small, Medium and Large objects. This is determined by how much memory is needed to calculate each object. Once the categorisation is done, Large objects are handled with priority, and no other Medium or Large objects will calculate at the same time. Small objects will still be available and run alongside the Large objects.
Hi,
Did QlikTech say how the size categorisation of the objects is calculated? It would be good to have a clue about which sizes to avoid, i.e. what counts as a medium-sized and a large-sized object on a server with x amount of RAM.
Johan