Some random ideas:
Only load fields you need. Never load *.
Load optimized from QVDs. If you must filter, use a single where exists() with the most restrictive field; any other where clause (or a second condition) forces an unoptimized load. Restrict further with inner joins after the fact instead of putting more conditions on the main load from the QVD.
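A sketch of that pattern (table, field, and file names here are hypothetical):

```
// Build the filter set first, using the most restrictive field.
WantedCustomers:
LOAD * INLINE [
CustomerID
1001
1002
];

// Optimized QVD load: named fields only, and a single where exists()
// keeps the load optimized.
Orders:
LOAD OrderID, CustomerID, OrderDate, Amount
FROM Orders.qvd (qvd)
WHERE exists(CustomerID);

// Restrict further afterwards with an inner join rather than adding
// more conditions to the QVD load, which would de-optimize it.
INNER JOIN (Orders)
LOAD OrderDate
FROM WantedDates.qvd (qvd);
```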
Minimize the number of loads and joins.
When you need to add conditional logic to a chart, data model changes are faster than set analysis, and set analysis is faster than if().
Adding conditions to expressions seems to be faster than adding conditions to dimensions.
When doing set analysis, using a pre-generated flag with values of 1 and null seems to be faster than searching for or listing values.
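As a sketch of the flag approach (field and file names are hypothetical; note that adding a computed field de-optimizes the QVD load, so weigh script time against chart time):

```
// Script: precompute a flag whose values are 1 or null,
// instead of testing dates in every chart expression.
Orders:
LOAD *,
     if(OrderDate >= yearstart(today()), 1) as CurrentYearFlag   // 1 or null
FROM Orders.qvd (qvd);
```

In the chart, sum({&lt;CurrentYearFlag={1}&gt;} Amount) tends to outperform both a search-style set expression such as sum({&lt;OrderDate={"&gt;=$(=yearstart(today()))"}&gt;} Amount) and the row-by-row sum(if(OrderDate &gt;= yearstart(today()), Amount)).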
Don't use date islands on large data sets. There are usually faster solutions, even if they may be more complicated.
When you need the distinct values of a field from a large table, for instance all of the dates, you can avoid scanning the table and read the values straight out of QlikView's symbol table with fieldvalue(). The complete pattern looks like this:
DistinctDates:
LOAD date(fieldvalue('MyDate', iterno())) as MyDate
AUTOGENERATE 1
WHILE not isnull(fieldvalue('MyDate', iterno()));
Avoid macros if possible.
You absolutely MUST have enough RAM to load and process the application. In-memory data models perform terribly when swapped to disk.
Be careful not to duplicate rows during joins. In some cases the application will still function correctly, just more slowly; in other cases it won't function correctly at all, which is more obvious but not specifically a performance concern.
I have no hard numbers, but it often seems like denormalizing small tables onto big tables helps performance. So I wouldn't, say, have a product table with nothing but a product code and a product name on it. Go ahead and join the product name onto, say, the customer orders table, even if that means it's on a million rows instead of on a hundred rows. QlikView's compression will take care of it. The script will go a little slower, but you'll probably save time in the charts.
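A sketch of that denormalization (table, field, and file names are hypothetical):

```
// Big fact table: one row per order line.
Orders:
LOAD OrderID, CustomerID, ProductCode, Quantity
FROM Orders.qvd (qvd);

// Small lookup table: denormalize it onto the fact table rather than
// leaving it as a separate two-column table in the data model.
// QlikView stores each distinct value once, so repeating ProductName
// across a million rows costs little extra memory.
LEFT JOIN (Orders)
LOAD ProductCode, ProductName
FROM Products.qvd (qvd);
```

The script runs a little slower because of the join, but charts no longer have to hop across a table link to resolve ProductName.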