Use a loop, as you're thinking. You can maintain the Region and connection string in a table, inline or external, as Bill suggested.
However, I wouldn't try to keep track of the start/end range with variables: the variables live in the QVW, and that doesn't reliably match the QVD. Instead, get the start/end by looking at the QVD itself each time. Here's a fast-performing method to manage that:
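A sketch of one way to do that (the field, file, and variable names here are illustrative assumptions, not necessarily Bill's exact method):

```
// Sketch: read the latest date straight from the QVD each run
// (hypothetical names: vRegion, TransactionDate, Region<X>.QVD).
MaxDateTmp:
LOAD Max(TransactionDate) AS MaxDate
FROM [Region$(vRegion).QVD] (qvd);

LET vStartDate = Peek('MaxDate', 0, 'MaxDateTmp');
DROP TABLE MaxDateTmp;

// vStartDate now reflects what is actually in the QVD,
// regardless of what the QVW's variables last said.
```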
I will commence spreadsheet / temp table experimentation tomorrow.
I expect variable management will be the biggest headache...so I'll be back here crying for help in no time!
For some added context, I attached my single-connection code.
JW_Sample-QV-Script.qvw 484.0 K
Running the jobs asynchronously would be preferred, but from what I can find in the QV docs, an open connection is automatically disconnected when a new CONNECT statement is encountered (per the help doc on the DISCONNECT keyword). Is there a way to force parallel execution with multiple connections open simultaneously?
You can run tasks [aka jobs] in parallel, assuming you have enough server grunt to handle the load.
You could still have a single qvw for the loads, with a Region parameter so it picks up the connection string etc. from that Region's row in your connection-details spreadsheet/table.
For keeping track of start/end dates, I store them all in a qvd, concatenating extra rows on as needed. This qvd can then be read back and variables set to the values stored in it.
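A minimal sketch of that tracking qvd, with hypothetical field and file names (Region, StartDate, EndDate, LoadLog.qvd); the first-ever run would need the qvd created beforehand or guarded with an existence check:

```
// Append this run's range to the log qvd...
LoadLog:
LOAD Region, StartDate, EndDate
FROM LoadLog.qvd (qvd);

CONCATENATE (LoadLog)
LOAD '$(vRegion)'    AS Region,
     '$(vStartDate)' AS StartDate,
     '$(vEndDate)'   AS EndDate
AUTOGENERATE 1;

STORE LoadLog INTO LoadLog.qvd (qvd);
DROP TABLE LoadLog;

// ...and on the next run, read it back into a variable:
LastRun:
LOAD Max(EndDate) AS LastEnd
FROM LoadLog.qvd (qvd)
WHERE Region = '$(vRegion)';

LET vStartDate = Peek('LastEnd', 0, 'LastRun');
DROP TABLE LastRun;
```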
I think I can wrap my head around the multi qvw implementation maxgro suggested (assuming I can set task dependencies in QV when scheduling the final/aggregate qvw), but I haven't quite figured out how a parametrized solution will help overcome the sequential nature of ODBC connect statements. Any chance you can manufacture some high-level pseudo code to demo the flow?
(Keep in mind this is my 2nd QV dashboard, with the 1st one dating back about 2 years, so I may at times be missing the obvious).
How about:
- Input parameter of Region to a single qvw
- Spreadsheet with 1 row per Region
- 1st column is Region
- 2nd column is the ODBC connection string for the region
- .. any other required columns
- In qvw read in spreadsheet row where Region = parameter region
- This will produce just one row
- Put the ODBC connection string into a variable
- Dollar-expand this variable to execute it, making the connection
- Create a Publisher Task for each Region
- Passing in the Region as a parameter
- Run all these Publisher Tasks in parallel
- Have your dashboard reload Task dependent on all the Region Tasks having run successfully
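Putting the loader steps above into load-script form, a rough sketch (the spreadsheet name, sheet layout, table names, and the vRegion parameter are all assumptions):

```
// vRegion is supplied per task by the Publisher.
ConnInfo:
LOAD Region,
     ConnectionString
FROM Connections.xlsx (ooxml, embedded labels, table is Connections)
WHERE Region = '$(vRegion)';    // produces exactly one row

LET vConnect = Peek('ConnectionString', 0, 'ConnInfo');
DROP TABLE ConnInfo;

// Dollar-expansion executes the stored statement,
// e.g. ODBC CONNECT TO RegionA_DSN;
$(vConnect);

Data:
SQL SELECT * FROM SourceTable;  // region-specific source

STORE Data INTO [Region$(vRegion).QVD] (qvd);
DISCONNECT;
```

Because each Region runs as its own Publisher task in its own reload engine, the one-connection-at-a-time limit of CONNECT no longer matters.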
I haven't implemented it yet, but I now understand what to do.
Also a big thank you to maxgro for introducing me to the insanely useful 'trace' command, providing an example of parameter handling, and rephrasing Bill's strategy (yes, it did help!).
overcome the sequential nature of ODBC connect statements
on your desktop open qlikview twice
Make 2 different .qvw docs, one with an ODBC connection to the region A server, the other to region B.
You can reload the 2 .qvw docs in parallel because they run in different qv.exe processes.
I would first try it this way; I would also add the final .qvw to merge the data from the regional .qvw docs and prepare the data for the UI. Then test in Publisher (task dependency, etc.).
Will it work? I think so.
I renamed your doc to JW_Sample-QV-Script B.qvw.
I made an include file, connB.txt, in the same folder, containing just this one line (replace it with a real connection to a region server):
trace I connect to Region B;
When I reload the .qvw with just these lines (no reference to region A, B, C, ...):
LET region=right(subfield(DocumentName(), '.'), 1); // region from .qvw name
LET vQvdFile = 'Region$(region).QVD';
I get the B region, the B qvd, and the B connection. The only difference is the name of the .qvw.
5 .qvw docs, 5 include files, 5 regions.
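For reference, a sketch of how those pieces could fit together in one reload script (the SQL and file names are placeholders; each conn<X>.txt include would hold that region's real CONNECT statement):

```
// Region letter taken from the .qvw filename, as in the LET lines above.
LET region   = right(subfield(DocumentName(), '.'), 1);
LET vQvdFile = 'Region$(region).QVD';

$(must_include=conn$(region).txt);   // e.g. connB.txt: ODBC CONNECT TO ...;

TRACE I connect to Region $(region);

Data:
SQL SELECT * FROM SourceTable;

STORE Data INTO [$(vQvdFile)] (qvd);
```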
If I understand Bill's suggestion, it's one step better: he suggests (I hope I've understood; Bill, correct me if I'm wrong) using one .qvw and passing the Region in as a parameter from the Publisher, instead of 5 different .qvw docs.