greg-anderson
Luminary Alumni

Have you encountered QlikView publishing or access issues due to the QVW file size? Do you have an upper limit you try to keep files at or below?

I deal with QlikView developers who are freely creating solutions with years of detailed history, resulting in QVW files sizes between 3 and 12 GB, in some cases.

I have cautioned developers and management about controlling their file size, because I have seen larger files experience more errors in publishing (network time outs) and rendering to end users.

Based on performance monitoring, our servers should be able to manage the load.  Still, we experience publishing issues and very slow response times via the Access Point.

I have no control over the network between the Access Point and the end users, but I am in discussions with our Networking teams about possible remedies.

Do you have any guidelines you try to enforce on file sizes for UI QVWs? 

Thanks!    


4 Replies
Miguel_Angel_Baeyens

The bigger the QVW is, the longer it takes to load, distribute through the network and open from AccessPoint, and the more prone it is to errors like network hiccups or file locks. Once it is open and the data model is in memory, those issues go away, provided RAM and CPU are sufficient for your concurrency. However, a file of that size will likely have a big data model, or several disconnected data models, and any click will take longer than expected.

The first thing that comes to my mind is whether that app can be loop-and-reduced by sets of users or by functionality. For example, if you have users from several countries, you loop and reduce on the field Country and create smaller versions containing only that country's data set. Or by function, e.g. Finance, HR, Sales, etc. Or by month.
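(Publisher normally does this through a loop-and-reduce task rather than script, but purely to illustrate the idea at script level, a reduction by country could look roughly like the sketch below; the field Country, the source file SalesDetail.qvd and the output names are all invented for the example.)

// Minimal sketch of a script-driven reduce per country (all names are hypothetical).
FOR EACH vCountry IN 'France', 'Germany', 'Spain'

    Sales_Reduced:
    LOAD *
    FROM [SalesDetail.qvd] (qvd)
    WHERE Country = '$(vCountry)';

    // Store a smaller data set that a per-country UI document can load.
    STORE Sales_Reduced INTO [Sales_$(vCountry).qvd] (qvd);
    DROP TABLE Sales_Reduced;

NEXT vCountry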

rwunderlich
Partner Ambassador/MVP

I agree with Miguel's comment about pitfalls. Sometimes an easy way to reduce file size is to drop fields that are not being used in the data model. You can identify unused fields using Document Analyzer, which can also help reduce file size by helping you optimize the storage of the fields you do use.

Qlikview Cookbook: QV Document Analyzer http://qlikviewcookbook.com/recipes/download-info/document-analyzer/
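(As a minimal sketch of what that cleanup usually looks like once Document Analyzer has flagged the unused fields, it is just a DROP statement at the end of the load script; the field names below are invented.)

// Remove fields flagged as unused by Document Analyzer (example field names only).
DROP FIELDS OrderComment, LegacyRegionCode, TempLoadKey;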

greg-anderson
Luminary Alumni
Author

Thank you, gentlemen. 

Miguel, your initial points are my core argument. Larger files are more prone to network interruptions, and of course they take longer to process and load. If the company wanted to throw massive hardware and dedicated network connections at me, I might complain less. But that still wouldn't help the Access Point considerations, since we have offices around the world and hundreds of users connected remotely via VPN.

And my response of "of course a 12 GB file is going to load slowly" is not taken well, especially when we're processing 36-48 solutions per day (not counting loop-and-reduce or other cuts as separate). The publishing server copes with it, although it takes time just to load and save a file that size, not to mention the processing.

Our hardware can handle the files. I do see a lot of errors in the distribution process due to network timeouts on larger files, and I'm working with our networking teams in an attempt to alleviate that situation; this is usually resolved by running the task again. Similarly, end users sometimes experience timeouts when trying to access the solutions online.

I do insist on a full data model review for every QlikView application we publish, which isn't to say that some people don't get around that requirement for "high-priority" projects, depending on the stakeholders.

I still perform the review, but it's often after the fact.  I often restructure data models and transformation processes, which can result in surprising reductions in file sizes.
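(One typical restructuring of that kind, sketched here with invented field and file names, is splitting a near-unique timestamp into separate date and time fields, which sharply reduces the number of distinct values QlikView has to store.)

// Splitting a high-cardinality timestamp into date and time parts
// (OrderTimestamp and Orders.qvd are hypothetical names).
Orders:
LOAD
    OrderID,
    Date(Floor(OrderTimestamp)) AS OrderDate,
    Time(Frac(OrderTimestamp))  AS OrderTime
FROM [Orders.qvd] (qvd);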

We also discuss loop and reduce and other ways to reduce the distribution size, and we implement document chaining to isolate some of the more complex or data-intensive areas of the applications.

While we have very skilled and dedicated QlikView developers, I have to believe we could do better. 

Rob, we do use Document Analyzer and always have a list of DROP commands at the end of the UI script. Thank you for the link, though. I will certainly review the updated cookbook.

Again, thank you for the quick replies.  At least I can feel that I'm approaching the situation correctly (at least to an extent) and not missing anything obvious.

Miguel_Angel_Baeyens

In addition to Rob's Document Analyzer, which is a must, you can also leverage the Scalability Center Tools to benchmark, simulate concurrency and analyze the complexity of any application you think should go under review (because of size, response times, which particular sheet of the application is heavier, etc.).

It does not benchmark network traffic or disk speed itself; for that you will need other tools like Silk Performer or LoadRunner. But you can see differences in network behavior depending on whether the JMeter instance runs on the same computer as the QVS when the tests are run (optimal scenario: no network traffic, direct access to memory) or is placed somewhere else in the network, ideally on a simulated user computer (going through proxies, load balancers and other network elements that can cause these hiccups).

It also works well for verifying to what extent loop and reduce relieves opening-time or response-time issues, and the same goes for section access or document chaining (i.e. more maintenance from the application perspective versus better user experience).

Also, if those documents are not reloaded often, you can use those tools to warm the cache by making the most frequent selections programmatically, so that when users open the application some results have already been cached.

Last but not least, make sure that PGO files are stored on a very fast disk, that Shared and Meta files are not huge and are regularly backed up, and ask Qlik Support for settings, if there are any, that you could apply for large applications in the QlikView Server settings.ini file (off the top of my head: do not use PgoAsXMLAlso, and check whether it is worth, or even applicable, to set DisableNewRowApplicator, PGOOfflineTimeOutSec, MaxSharedObjectCacheSize, MaxSharedObjectSizeToCache, etc.).

None of these solve the issue of better development or refactoring existing applications, but they can help you justify to some technical stakeholders the need to spend time doing it.