Theoretically, it sounds like the more structure and thought you put into your process, the better it will perform.
In reality, it often becomes counterproductive to "over-engineer" your structure. For example, if you have a problem with one of your dashboards, you will have to troubleshoot 4 layers of logic and perhaps a dozen different load scripts.
It sounds like many QVD files will be shared between applications. Yes... however, except for a few "popular" tables such as "Sales" and master data, the vast majority of your QVD files will only be used once. If you build an elaborate structure for the sake of sharing QVDs, you will "pay the price" of working with an overly complex environment every time, yet enjoy the benefits of sharing only once in a while...
My two cents would be to simplify and only make it as structured as absolutely needed...
I am designing a 4-tier structure as described by Ashutosh Paliwal below. I have over 100 source tables that will need to be extracted into QVDs. Is there a quick way to accomplish this by looping through all tables within a schema and storing them into one folder? If so, how do you give each QVD a unique name in the STORE statement?
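One common approach is to pull the table catalog from the data source and loop over it, reusing each source table name for its QVD. A rough sketch in QlikView script, assuming an ODBC connection (the connection name, schema "dbo", and folder paths here are placeholders; the catalog field names such as TABLE_NAME come from the ODBC driver and may differ for your source):

```
// Connection name is an assumption -- replace with your own DSN
ODBC CONNECT TO MyDataSource;

// SQLtables returns the source's table catalog as a resident table
TableList:
SQLtables;

FOR i = 0 TO NoOfRows('TableList') - 1
    LET vTable = Peek('TABLE_NAME', i, 'TableList');

    [$(vTable)]:
    SQL SELECT * FROM dbo."$(vTable)";

    // Reusing the source table name in STORE gives each QVD a unique name
    STORE [$(vTable)] INTO [..\QVD\Extract\$(vTable).qvd] (qvd);
    DROP TABLE [$(vTable)];
NEXT i

DROP TABLE TableList;
```

You may want to filter TableList first (e.g. by TABLE_SCHEM or TABLE_TYPE) so you don't extract views or system tables you don't need.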
Your structure seems fine to me, but Oleg is perfectly right: the more layers you add, the more time you spend in development as well as in maintenance, and the more confusion you create.
What I follow, and what I have seen most of the developers around me follow, is this:
Layer 1: extract raw QVDs from the source tables.
Layer 2: use the raw QVDs and do all the scripting to get the desired transformed QVDs.
Layer 3: generate the data model for your application (this is your final application without the UI).
Layer 4: binary-load from Layer 3 and build all the UI (Section Access is applied here only).
This is the general approach, and sometimes I may use one more layer or one fewer. So it all depends on the requirements, but as I said, more layers means more time to your pretty dashboard (not always; sometimes it helps too), so keep it minimal.
I agree with the above two statements. When we were developing our best practices we were set on a three-tiered model. But after almost four years of working with QV we've settled on two tiers for most applications. A few are one-tiered and even fewer are truly three-tiered. In fact, it's not unusual to have a mixture within an application: base table QVDs, specialized QVDs created from the base table QVDs, and direct loads from Excel, SharePoint or small SQL databases.
There are multiple reasons to use a binary load from the data model file.
1. If your users are going to download your QVW application, they are not going to see any of the script written underneath. This is needed for the security of your script and to hide the complexity from users.
2. As Ashutosh explained, the layer in which you do the binary load also contains the Section Access logic. This is needed for maintenance and security: the Section Access tab is kept hidden, and if you need to change anything in the Section Access script, you don't need to rerun the whole data model script. You make the change, reload, and you are ready to go. If the Section Access code were written alongside the data model script, you would have to run the whole script again, which is very time consuming; the binary load itself is very fast.
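Putting the two points together, the top-layer QVW can be little more than this sketch (the model file name and the user/domain values are placeholders; note that Binary must be the very first statement in the script):

```
// Binary load must come first: it pulls the entire data model
// from the Layer 3 application (file name is an assumption)
Binary [..\Model\SalesModel.qvw];

// Section Access lives only in this layer, typically on a hidden tab.
// The INLINE values below are placeholders -- in practice this is
// usually loaded from a maintained file or database table.
Section Access;
LOAD * INLINE [
    ACCESS, NTNAME
    ADMIN,  DOMAIN\AdminUser
    USER,   DOMAIN\ReportUser
];
Section Application;
```

Because the script contains only the binary load and the access table, a Section Access change reloads in seconds instead of rerunning the full data model.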