There's no hard limit; however, performance improvements taper off above 4 processors. The reason, in short, is that (1) the processors need to communicate and (2) memory is divided into "local" and "remote" regions. The cost of inter-processor communication rises sharply as you add sockets, and remote memory access is far slower than local access. More cores per processor may be the better option, because memory stays local to the processor and cross-core communication is much faster than cross-processor communication.
I'm working in, and referring to, Teradata environments. More machines with QlikView means more licensing costs.
I'm simulating an environment with 400 users, without data reduction or delivery/distribution, because the application will be larger than 1.5 GB.
Why would I recommend 5 QVS servers with 4 processors each if I can buy 3 servers with 8 processors?
I found a document that didn't recommend this, and I need a big environment.
I think you are missing the point that processors are different from cores. Modern servers have 4 processors, but each one can have 8 cores: 4 × 8 = 32 cores. That is large enough for 400 users, assuming 10% concurrency (40 users simultaneously connected).
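The back-of-the-envelope sizing above can be written out explicitly. This is only a sketch of the arithmetic: the 10% concurrency figure is the rule of thumb quoted in this thread, not a QlikView guarantee, and the core counts are the example numbers from the post.

```python
# Sizing sketch using the example numbers from this thread.
processors = 4
cores_per_processor = 8
total_cores = processors * cores_per_processor           # 4 * 8 = 32 cores

named_users = 400
concurrency_ratio = 0.10                                 # assumed 10% rule of thumb
concurrent_users = int(named_users * concurrency_ratio)  # 40 simultaneous users

print(f"{total_cores} cores available for ~{concurrent_users} concurrent users")
# → 32 cores available for ~40 concurrent users
```

Whether 32 cores actually serve 40 concurrent users comfortably still depends on the application's RAM footprint and calculation load, so treat this as a starting point for sizing, not a final answer.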
On the other hand, an application with a 1.5 GB QVW sounds huge. Is there any possibility of breaking it into smaller pieces and guiding the user between them with document chaining?
The answers have been posted by Pablo and Jay.
There is no limitation on CPUs or Cores for a single server.
A single CPU can have up to 10 cores (maybe more?).
There are, however, as Jay mentioned, some drawbacks to servers with more than 4 CPUs. Mainly, the volume of instructions and data shuffled between the CPUs runs into hardware bottlenecks.
Because of this, the general recommendation is to use servers with 2 or 4 CPUs.
Hampus von Post