Hi Qlik,
Is there any way (now, or maybe in future releases) to see how much of NPrinting 17's own resources are being used, and by what?
I've ported perhaps a fifth of my NPrinting 16 reports into 17, and it routinely bottlenecks and stops working.
I want to move all the 16 reports over, but I'm not confident doing so while I can't accurately predict what will and won't cause NPrinting 17 to crash.
Thanks in advance,
Chris A.
Hi Chris,
You can observe a lot by looking at the QlikView Server performance logs for the NPrinting user (using the Governance Dashboard or the Server Performance app). Your bottleneck may be the QlikView server, not the NPrinting server.
Depending on the template types and the objects used to build your reports, you can manage NPrinting performance. (Example: if you have an XLS template and you embed charts as pictures, it is better to build native Excel charts from data exported from QlikView, as that will significantly speed up report generation.)
It also depends on the connection type. I always use a QVP connection; although it may start tasks more slowly, in the long term it is more stable and does not kill the NPrinting box. A local connection forces the NPrinting machine to open as many QlikView instances (each loading your biggest applications) at the same time as there are cores on the NPrinting Engine box, and to have enough memory to run them all in parallel. In the long run it will run out of memory anyway.
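To put rough numbers on that local-connection scenario, here is a back-of-envelope sketch (all figures below are hypothetical examples, not measurements from any real environment):

```python
# Rough worst-case RAM estimate for a local (non-QVP) NPrinting connection:
# the engine may open one QlikView Desktop instance per core, each loading
# a full copy of the app. All numbers below are made-up illustrations.

engine_cores = 4          # cores on the NPrinting Engine box (hypothetical)
largest_app_ram_gb = 6    # RAM footprint of the biggest .qvw when open (hypothetical)
overhead_gb = 4           # OS + NPrinting services headroom (hypothetical)

peak_ram_gb = engine_cores * largest_app_ram_gb + overhead_gb
print(f"Worst-case peak: {peak_ram_gb} GB")  # prints "Worst-case peak: 28 GB"
```

With a QVP connection the documents stay open on the QlikView server instead, so the NPrinting box never has to hold all those copies in its own RAM.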
Because NPrinting 17 is multithreaded, you can put a lot more stress on the QlikView server, so the issue may well be on its side. With NPrinting 16, tasks automatically joined a queue and waited there until it was their turn to run.
You can also monitor the processes and their resource consumption on the NPrinting box while reports are running, and see which process is killing its performance.
Coming back to your question, though: I have not seen anything to suggest that NPrinting's own resource usage will be monitorable at the level you ask about.
regards
Lech
Lech -
Have you seen or heard about issues with NPrinting not releasing memory on the QlikView server after aborting a metadata reload task? I am working on a new QlikView data model and UI with charts that will be used for NPrinting. During some initial testing, something in my qvw causes QlikView server memory to reach almost 100% when I do an NP metadata reload. When I abort the metadata reload, the memory is not released and stays maxed out, which makes the Access Point useless to our users and forces us to restart the server. Any thoughts? Thanks.
Hi Kris,
No, I have not heard of such behaviour.
What I am interested in is why your QlikView server would consume so much memory when opening the NPrinting app; when you connect via QVP, it essentially uses the "open in server" method.
- Are you using QVP?
- Do you build dedicated app ONLY for NPrinting?
- Do you have any hidden sheets with objects that consume memory even when not in use?
- It looks like you may have some "always one selected value" fields, triggers, charts/tables, or something else that causes huge RAM consumption when NPrinting accesses the app. Keep in mind that NPrinting clears all filters when generating metadata!
cheers
Lech
Thanks for the info. The one bullet point I had either forgotten or didn't realize was that NP clears the filters to do the metadata reload. All your other points were fine on my end, since I already knew about them and had planned accordingly. After some testing with my app, I think I discovered that the amount of data that ends up in my charts (without filters) is too much for the qvw to handle: when I had the app open and cleared the filters myself, the app choked, memory spiked to almost 100%, and I had to use Task Manager to kill it. I was able to reproduce that a few times. Thanks again.
If you still want to use this app, try using a calculation condition to mimic selected fields (Chart Properties --> General --> Calculation Condition). That should limit the amount of RAM consumed.
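For example, a condition like the following keeps the chart from calculating until at least one value is selected (the field name "Region" is just a placeholder; substitute one of your own filter fields):

```
// Chart Properties --> General --> Calculation Condition
// "Region" is a hypothetical field name -- use a field from your data model
GetSelectedCount(Region) >= 1
```

With no selection made, the chart shows the calculation-condition message instead of aggregating the full unfiltered data set, which is what spiked your memory during the metadata reload.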
regards
Lech