Not applicable

How to limit a QV application's CPU minutes and RAM usage

Does anyone know how to limit a QV application by CPU minutes and also by RAM usage?

I need to define a limit on how long a QV application reload may run on the server, i.e. how many CPU seconds/minutes the reload is allowed to consume.

I also need to be able to limit the amount of RAM that the QV application can take from my server.

Scenarios:

1. For a QV application, I set a limit on how long it may run on the server before it gets axed, e.g. fail the reload if it has not completed within 30 minutes of CPU time.

2. For a QV application, I set a limit on how much RAM it may use, e.g. only 8 GB out of 16 GB.

3. For a QV application, I set both a CPU and a RAM limit.

Server:

- VM-hosted Windows Server 2003 SP2, 64-bit Enterprise

- 16 GB RAM, 100 GB hard drive

- QlikView v10 SR1, 64-bit

ben

19 Replies
Not applicable
Author

Hi...
One option is to set the document reload timeout (in seconds) in the QMC, under User Document Reload Schedule.

Not applicable
Author

Hi,

See this thread for two ways to kill a process by memory limit: http://community.qlik.com/forums/p/26373/101015.aspx

-Alex

StefanBackstrand
Partner - Specialist
Partner - Specialist

I can answer the three questions from a QlikView functionality point of view, without proposing various hacks that forcibly kill fully legitimate Windows processes that are overloaded because of bad user behavior.

1. There are timeout limits for tasks in QlikView Server/Publisher, I believe.

2 & 3. These settings in QVS are not per document; they are server-wide and are controlled by the Working Set Limits in the QVS performance settings. To my knowledge, there is no way today to limit the RAM usage of an individual task in Publisher.

I would generally not recommend "strangling" the resources of a QVS server on a dedicated machine, since it defeats the purpose of the multi-threaded behavior at the core of QV. If you need to restrict QV so that it does not interfere with other processes, move the other processes to a different machine.

Not applicable
Author

Thanks Kumar, Alex and Stefan.

I'll follow up on the timeout limits, and lament the lack of per-application limits for controlling QV memory and CPU usage.

While I think it's great that QV has compression, etc., I find it hard to accept letting a QV query run away in a server environment with 16 GB or more of RAM and consume 100% of the CPU. I suspect that given a big enough sandbox you could guard against sloppy QV scripts; however, that's not an option for me.

Good responses, I'll follow up to let you know what happens...

Not applicable
Author

Hi,

One of my customers has several machines with 256 GB of memory and 32 cores. Developers log in with Remote Desktop to use the QV developer tool and schedule tasks with Publisher.

Unfortunately, somebody frequently messes up, so one Windows process eats 99% of the CPU and 100 GB of virtual memory. At that point Windows is so slow that administrators can't even log in to kill processes. The machine is down for everybody, files are corrupted, and people learn why it is a good idea to keep several versions of the same file.

--------

Unix and mainframe people discovered in the '70s that it is a GOOD THING to put limits on resources, and to kill the few runaway processes.
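The Unix mechanism being referred to is setrlimit (ulimit): the kernel itself signals a process that exceeds its CPU-seconds cap and fails allocations past its address-space cap, with no external watchdog needed. A minimal, Unix-only Python sketch; the one-second and 4 GB figures below are purely illustrative:

```python
import resource
import subprocess
import sys

def run_limited(cmd, cpu_seconds, address_space_bytes):
    """Run `cmd` under kernel-enforced resource caps (Unix only).

    The kernel signals the child (SIGXCPU, then SIGKILL) once it has
    consumed `cpu_seconds` of CPU time; allocations beyond
    `address_space_bytes` simply fail inside the child.
    """
    def apply_limits():
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS,
                           (address_space_bytes, address_space_bytes))
    return subprocess.run(cmd, preexec_fn=apply_limits)

# A busy loop is killed by the kernel after ~1 CPU second; a negative
# return code means "terminated by a signal":
# result = run_limited([sys.executable, "-c", "while True: pass"],
#                      cpu_seconds=1, address_space_bytes=4 * 1024**3)
```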

There are not many Windows machines with average uptimes of years. Ever wondered why?

-Alex

StefanBackstrand
Partner - Specialist
Partner - Specialist

(unfortunately, I seemed to have marked this answer as the solution by mistake - sorry for that)

Yes, I know, I was responsible for troubleshooting them from our side. 😃 When process runaways are caused by such exceptionally empowered users as developers, then the behavior of those roles clearly needs to be revised. Killing processes might be a solution, but it also kills the cache and any non-persistent data in memory. QlikView servers use a lot of memory and, by design, do not clear their cache, which is why memory usage needs to be closely monitored and held back. I agree that the QVS service sometimes benefits from a restart, but if you are hovering around 80-90% RAM usage all the time, you should consider lightening the load or changing the behavior on the machine. That's all I'm saying.

I've seen a whole bunch of Windows application servers with uptimes of many hundreds of days per stretch. Maybe your experience with Windows servers isn't extensive enough to have let you meet such systems yet? I've managed IIS web servers (both Windows 2000 Server and Server 2003) with 400+ days of uptime that just kept on rolling. It was quite a shame to shut them down when we needed to switch to more redundant hardware platforms.

Not applicable
Author

StefanBackstrand wrote:
When the reason for process runaways is caused by such exceptionally empowered users as developers, then the behavior of these roles needs to be revised and changed, clearly.

It is preferable to kill a runaway process rather than argue with people about synthetic keys and missed project deadlines.

-Alex

StefanBackstrand
Partner - Specialist
Partner - Specialist

...and in that case you end up in a situation where you need to do it over and over again. It's all a matter of approach.

rwunderlich
Partner Ambassador/MVP
Partner Ambassador/MVP


Alexandru Toth wrote:
One of my customers has several machines with 256 GB of memory and 32 cores. There are developers logged in with Remote Desktop using the QV developer tool, and scheduling tasks with Publisher.
Unfortunately, it is very frequent that somebody messes up, so one Windows process eats 99% CPU and 100 GB of virtual memory. At this point Windows is so slow that administrators can't even log in to kill processes. The machine is down for everybody, files are corrupted, and people learn why it is a good idea to keep several versions of the same file.


I suggest the problem is that developers should not be working on a shared server; rather, they should be making "development mistakes" on workstations. I understand that hardware limitations sometimes force people into this configuration, but it creates a liability in reliability and availability.


Alexandru Toth wrote:
Unix and mainframe people discovered in the '70s that it is a GOOD THING to put limits on resources, and kill the few runaway ones.


They did, and they still do. But they are using OSs that are designed for multiple users and provide sophisticated facilities for resource monitoring and capping. Windows is not in the same league. Your solution for killing runaway processes is clever and admirable, but not comparable to the techniques available in those other OSs. In most cases, I think it's more efficient to invest in adequately sized workstations for developers.

All that said, you gotta deal with what you've got. But I would encourage customers to look at building workstations for developers and profiling apps before they get to the server.