What's the optimal data model?
The structure of the raw data will play an important role when designing the data model, but we as QlikView developers need to ensure that the resulting model is as efficient as possible.
If speed is the primary goal - aim for the star!
Deviation from the star should be a conscious choice, driven by need. The application will still function even if it isn't a star, but there is a cost, and that cost might force you to save elsewhere in the application. Make sure it's a calculated cost.
Every link between tables consumes resources when activated; that's the nature of QlikView. More resources are used if there are many fact tables. Every click filters out the clicked value or values, then filters out all keys used to find values in the next table, and so on through all tables. This consumes CPU cycles and RAM, and it does so for each and every click, so make sure it is as low-cost as possible.
If the data model isn't optimal for performance the application will run like a car with the parking brake on - not good for speed, mileage or the car.
Needless to say, no synthetic keys or circular references should be present in an application with large data volumes and/or many users, and ideally not at all.
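Synthetic keys typically appear when two tables share more than one field. A common fix is to build one explicit composite key and drop the duplicated fields. A minimal sketch in load script, assuming hypothetical Orders and Shipments tables that share OrderID and CustomerID:

```qlikview
// Without intervention, the two shared fields would create a
// synthetic key ($Syn 1) between these tables at reload.
// Fix: one explicit composite key, duplicates dropped.
Orders:
LOAD
    AutoNumber(OrderID & '|' & CustomerID) as %OrderKey,
    OrderID,
    CustomerID,
    Amount
FROM Orders.qvd (qvd);

Shipments:
LOAD
    AutoNumber(OrderID & '|' & CustomerID) as %OrderKey,
    ShipDate
FROM Shipments.qvd (qvd);
```

AutoNumber keeps the composite key as a compact integer rather than a long string, which also helps memory consumption.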
If memory consumption on the server goes up AND down, it's not uncommon that the data model is the problem. It's an indication that QlikView needs memory but that the memory can't be used for cache and is therefore returned to the OS.
Concatenate and join tables to become a star...
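As a sketch of that advice, the script below merges two hypothetical fact tables into a single fact table with Concatenate, then folds a small lookup table into it with a join, moving the model toward a star. All table and field names here are illustrative assumptions, not from the original post:

```qlikview
// One fact table instead of two: concatenate actuals and budget,
// tagging each row so they can still be told apart.
Facts:
LOAD OrderID, CustomerID, OrderDate, Amount, 'Sales' as FactType
FROM Sales.qvd (qvd);

Concatenate (Facts)
LOAD OrderID, CustomerID, BudgetDate as OrderDate, Amount, 'Budget' as FactType
FROM Budget.qvd (qvd);

// A small lookup table joined into the fact table removes one
// link from the data model entirely.
LEFT JOIN (Facts)
LOAD CustomerID, Country
FROM Customers.qvd (qvd);
```

Concatenate suits tables that describe the same kind of event at the same grain; join suits small lookup tables where duplicating a few fields into the fact table is cheaper than keeping the extra link.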
Cheers from the Scalability Team.