srinivasotis192
Contributor

NPrinting On-Demand report generation time is too long using November 2018 version

Hi,

We recently upgraded NPrinting from June 2018 to November 2018 and found no improvement in On-Demand report generation performance. The first generation of an On-Demand report takes a long time (22 minutes for a 500 MB application); after that it is very fast (about 5 seconds) because the result is cached on the first run.

How can we improve On-Demand report generation performance on the first attempt?

1) QlikView app performance is also good on the Access Point.

2) CPU and RAM utilization are normal on the NPrinting server.

3) The QlikView documents are saved in normal mode (not in WebView mode).

4) We have also requested an extra Engine node (currently a single Engine is used, installed on the NPrinting server).

5) NPrinting server configuration: 128 GB RAM, 16 CPU cores.

Please suggest a solution and share an article if one is available.

Thanks,

Srinivas

1 Reply
JonnyPoole
Employee

After the NPrinting services are restarted, the first connection to a QlikView app, whether for a metadata reload, a publish task, an On-Demand report, or a preview from Designer, will take longer because QlikView Desktop must open the QVW first. Once it opens the QVW, it leaves that process, called a "content resolver" (QV.exe), running with the app open until it is needed for another app or the services are stopped. You can have as many qv.exe processes running at the same time as there are logical cores on the NPrinting engine machine. A long-running report may spawn multiple qv.exe processes sequentially.
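For illustration, here is a minimal sketch that counts open content resolvers against the logical core limit described above. It assumes psutil is installed and that the resolver process shows up as QV.exe in Task Manager (confirm the name on your own server):

```python
# Minimal sketch: compare running qv.exe "content resolver" processes against
# the logical core count on the NPrinting engine machine.
# Requires: pip install psutil
# Assumption: the content resolver process name is "QV.exe" (verify in Task Manager).
import os
import psutil

resolvers = [p for p in psutil.process_iter(["name"])
             if (p.info["name"] or "").lower() == "qv.exe"]

print(f"Open content resolvers (qv.exe): {len(resolvers)}")
print(f"Logical cores (max concurrent resolvers): {os.cpu_count()}")
```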

To "warm" the system try running this same report as a task periodically.  Use a filter to keep the task light , but at least this way one content resolver will be available for on-demand.  

You can also add "extra warmth" by spawning multiple content resolvers for the same app, so that concurrent users do not have to wait.