Not applicable

Performance issues: your advice

Hello,

I am building a QV application that deals with invoices. My fact table has around 45 million records (36 months of data). The number of records will only grow slowly, as I have been told to expose 36 months at most. My problem is not in the loading steps, as I have followed a multi-tier architecture with several QVW files to handle incremental QVDs. I have also avoided synthetic keys and kept my data model as a star schema. I am not far from the recommendations mentioned in the following topic: http://community.qlik.com/message/111448#111448
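For readers unfamiliar with the incremental QVD pattern mentioned above, it usually looks something like the sketch below. The file paths and field names here are assumptions for illustration, not Nicolas's actual script:

```
// Load only the invoices added or changed since the last run
// (assumes the extract layer produces a delta QVD).
Invoices:
LOAD InvoiceID, CustomerID, ProductID, InvoiceDate, Amount
FROM [..\Extract\Invoices_Delta.qvd] (qvd);

// Append historical rows that are not already in the delta --
// the Exists() test against the keys loaded so far avoids duplicates.
Concatenate (Invoices)
LOAD InvoiceID, CustomerID, ProductID, InvoiceDate, Amount
FROM [..\QVD\Invoices.qvd] (qvd)
WHERE NOT Exists(InvoiceID);

// Write the merged history back for the next run.
STORE Invoices INTO [..\QVD\Invoices.qvd] (qvd);
```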

My problem is more on the frontend: even with a single user connected to the application, I find that navigation and selections are not that fast.

Details of the server:

Windows 2003 R2 / Enterprise x64 Edition / Service Pack 2

CPU: Intel Xeon E5320 @ 1.86GHz with 8GB of RAM

In Task Manager, I can see 8 CPUs running.

My QVW frontend is 1.9 GB. It takes a few minutes for one end user to open the document through IE6 with the QV plugin, and for every user action, all 8 CPUs run at 100% until data is returned to the client. Once concurrent users start using the application, performance will clearly get even worse.

I wanted to aggregate the data further, to drastically reduce the number of records. An easy approach would have been to remove the invoiceID and group by all the other dimensions. The problem is that the functional team wants to keep this dimension... We are migrating from OLAP technologies, where they used to have a 'Drill Through' feature to access the most detailed data. So invoiceID would not be used for pivoting, but rather for identifying a specific record.

So, would it be possible for me to build 2 QlikView frontends:

- the first one, faster, with the highly aggregated data (without the invoiceID)

- the second one, slower, with the detailed data (including the invoiceID)

End users will mainly use the first frontend, but when they want to access the detail (drill through), it will direct them to the other QlikView file (while keeping the selections made in the first file). The second QlikView frontend will contain only a single table box.
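One way the aggregated fact table for the first frontend could be built is a GROUP BY that drops invoiceID while keeping a count of the invoices behind each row. The table and field names below are assumptions:

```
// The preceding (lower) load adds a month key; the outer load
// then aggregates InvoiceID away.
AggFact:
LOAD CustomerID,
     ProductID,
     InvoiceMonth,
     Sum(Amount)      as Amount,
     Count(InvoiceID) as InvoiceCount
GROUP BY CustomerID, ProductID, InvoiceMonth;
LOAD CustomerID,
     ProductID,
     MonthStart(InvoiceDate) as InvoiceMonth,
     InvoiceID,
     Amount
RESIDENT Invoices;
```

The InvoiceCount field is not strictly needed for the aggregation itself, but it later lets the frontend know how many detail rows a selection represents before jumping to the detailed document.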

Is this architecture feasible? Feel free to share your advice.

Is the capacity of the server too low for my requirements? Later, I will need to deploy the same application for other subsidiaries, which will definitely increase the number of users and the workload on the server.

Thanks in advance,

Nicolas

19 Replies
danielrozental
Master II

Your application (1.9 GB) seems too big for 45 million rows. Get the statistics file from the document properties and post it here.

Not applicable
Author

Hi there. As a general suggestion, you may want to make sure your model is a star-schema-compliant design, with a single transaction table in the middle and master data around it. Also, enable the preload option for every application on the server, so the user does not wait so long the first time he opens the application.

The other thing that is happening is that you are not taking advantage of newer hardware architectures: the processor (Intel Xeon E5320) still has a front-side bus instead of the newer QPI links, and the maximum RAM speed supported by that processor is 667 MHz, whereas newer architectures support 1333 MHz and quad-channel memory. Furthermore, the E prefix on the processor means it is an energy-saving variant; an X- or W-prefixed Intel processor is recommended instead.

You also want to make sure that the power options in the Control Panel are set to High Performance. QlikTech also fixed a performance issue with expressions using set analysis in the QV 9 SR7 release, so make sure you have at least that version installed.

Regards

johnw
Champion III

Assuming there's no data model error and your charts are reasonably-coded for performance, the "right" solution seems to be to get better hardware.

That may, of course, not be practical.  I think the architecture you're considering should be possible.  Keep track of a record count when you aggregate, and then only open up the other document if the record count is small enough.  I've never chained from one document to another on anything other than a test basis, so I'm not sure how exactly you pass in all of the filters and such, but I assume it's doable.
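John's record-count idea can be wired up as a show/enable condition on the drill-through button, assuming the aggregated table carries a `Count(InvoiceID) as InvoiceCount` field (the field name and threshold here are assumptions):

```
// Enable/show condition for the "open detail document" button:
// only allow the jump when the current selection covers a
// manageable number of invoices.
=Sum(InvoiceCount) <= 100000
```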

I had a similar but less severe issue with one of my applications.  What I did with mine was recognize that 90% of the user activity was occurring on only the most recent data.  In my case, only the year to date information was really critical, even though I keep 5 years of data.  So I use QlikView Publisher to select the current year, reduce the data to match, and create a separate document for that.  Now, by end of year, the YTD document will be pretty slow, but still nowhere near as slow as the 5 year document.  Perhaps your users spend 90% of their time interacting with only a small subset of your data.  If so, perhaps the same approach would serve their needs.  Most of their work could be done on the smaller, faster document.  Only when they really need to dig into some old (or otherwise uncommon) data would they need to bring up the monster document.
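Publisher performs this reduction through its loop-and-reduce settings rather than in script, but the same effect can be sketched in the small document's own reload. The path and field names are assumptions:

```
// Current-year-only fact table for the small, fast document.
// The Where clause makes this a non-optimized QVD read, which
// is usually acceptable for a nightly batch reload.
Invoices:
LOAD *
FROM [..\QVD\Invoices.qvd] (qvd)
WHERE InvoiceDate >= YearStart(Today());
```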

Not applicable
Author

Hi Ivan,

Thank you for all the explanations regarding the server & RAM details.

You also want to make sure that the power options in the Control Panel are set to High Performance. QlikTech also fixed a performance issue with expressions using set analysis in the QV 9 SR7 release, so make sure you have at least that version installed.

Do you mean: Control Panel > Power Options > Power Schemes = Always On?

Not applicable
Author

Daniel Rozental wrote:

Your application (1.9 GB) seems too big for 45 million rows. Get the statistics file from the document properties and post it here.

Hi Daniel,

I have loaded the *.mem file into the QlikView Optimizer 8.5.qvw that I found online. Here it is... Let me know your thoughts.

Thanks

Not applicable
Author

Thanks John,

Always appreciate reading your answers. I am also wondering how to pass all the filters applied from one document to another; I'll see...

Regarding the other solution (splitting the document via Publisher), I will talk to my functional team about it!

Not applicable
Author

Hi,

The simplest and quickest solution I can suggest, based on your problem description, is:

First of all, on the default sheet that opens when the user accesses the QV application, keep only one chart maximized and keep all the others minimized. Also, build the maximized chart with summary details only (take out invoice ID and the like), and provide another chart or table with the detailed fields such as invoice number, order date, line number, etc.
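A related trick, assuming QlikView's Calculation Condition (chart properties, General tab), is to let the detail chart compute only once the selection is narrow enough, so it never has to scan all 45 million rows. The threshold here is an assumption:

```
// Calculation Condition for the detail chart -- the chart stays
// uncalculated (showing its error message) until fewer than
// ~10,000 invoices survive the current selection.
=Count(DISTINCT InvoiceID) <= 10000
```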

Please find the attached sample that may help you.

Not applicable
Author

n.allano wrote:

Hi Ivan,

Thank you for all the explanatins regarding the server & ram details.

You also want to make sure that the power options in the Control Panel are set to High Performance. QlikTech also fixed a performance issue with expressions using set analysis in the QV 9 SR7 release, so make sure you have at least that version installed.

Do you mean: Control Panel > Power Options > Power Schemes = Always On?

Hi again. Yes, I meant Power Options (sorry for the translation, but my OS is in a different language), and under that menu there should be a power plan named High Performance.

Regards

pat_agen
Specialist

hi Nicolas,

I'd echo John's recommendations. We're on v8.5, and generally a document will take over 3 times its disk space when loaded into RAM, so just by opening a 1.9 GB document, your server with 8 GB of RAM will be on its knees.

The default settings of QVS limit the qvs.exe process to 70% of available RAM anyway, so the system will be under stress from that alone. Given your description of the processor activity, I'd say the system is working overtime due to the document size.

So first option is to see if you can add more RAM - not the most expensive thing to do - and it might just do the trick.

However, I'd challenge your functional team on the design. Who is the application for? What questions is it set up to answer? Who needs to drill down to individual invoices? How often do they need to do it? What information at the individual invoice level do they need to see?

The above is not to say that this shouldn't be done or makes no sense, but pursuing those questions will help you define your architecture.

Look at your stats: 3 years of data, 45M invoices, 48K customers, 5K products. That is a lot of activity to analyse. So who is going to be digging down to the bottom level?

Do you penalise your top execs who want a nice dashboard, or do you build a separate document, as you describe, for those who need to drill down to the lowest level? Can the lowest level be split into different areas of responsibility - geographical? organisational? - or by date, as John suggests?

QlikView has introduced document chaining, which in theory will help you with the idea of drilling through from one document to another. This isn't available in v8.5, but what you could do is build several lowest-level documents and then have one button that launches the detailed document, and by playing with current selections, user IDs, etc., "know" which detailed document to launch when a request is made.

Anyway, good luck, and thanks for posting - it's an interesting topic.