Hey,
I have built an application that will be updated overnight and reduced into different QVWs based on a field value.
However, I'm now considering whether to create different QVD files (one per QVW) or one large QVD that will be split into different QVWs using the task described above.
I'm afraid the QVD will become very large, and because the QVWs will be updated at different moments during the day, it might be hard to schedule updating the QVD. At the moment I'm leaning towards creating multiple QVD source files.
Can anyone offer their thoughts?
If I opt for creating multiple QVDs, how do I create one QVW that I can update for each file? (As you can imagine, I don't want to maintain 100 QVWs that are exactly the same except for their source data.)
I have uploaded the model that I think fits my purposes best; however, I could use some help with the implementation on the right-hand side (QVD -> QVW).
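To give an idea of what I mean, something like the sketch below is what I have in mind for the per-file QVW (all names and paths here are placeholders, not the real model):

// One generic load script, driven by a variable that is set per reload,
// so the same QVW can be reloaded once for each source QVD.
LET vDatabaseID = 'DB01';                          // placeholder value; would differ per reload

Data:
LOAD *
FROM [..\QVD\Data_$(vDatabaseID).qvd] (qvd);       // one QVD per database (illustrative path)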
Hi Daniel,
A QVD can contain only one table. So if you want to use one QVD per QVW, you have to put the entire data model for that particular QVW into a single QVD. That will likely result in a very big QVD file, and it also hurts performance in the front end of the QVW, because you then need DISTINCT whenever you want to calculate unique values, which is expensive as well.
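The more common pattern is one QVD per table, along the lines of the sketch below (table and file names are just examples):

// Store each table of the data model into its own QVD,
// rather than flattening the whole model into a single file.
STORE Facts INTO [..\QVD\Facts.qvd] (qvd);
STORE Customers INTO [..\QVD\Customers.qvd] (qvd);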
What about loading multiple QVDs into a QVW and reducing that with Publisher? Depending on the size of the document, the data reduction and the number of users, you could also opt for data reduction on opening using Section Access.
Regards,
Kjeld
Sorry, I should rephrase my question.
At the moment we are loading 10 QVDs into a QVW file and reducing it based on a field value. However, the QVDs are growing exceedingly large (as we keep adding new databases). Is it possible to create separate QVDs (one per database) and then load them into corresponding QVW files (again one per database, so reduced on database ID)?
It seems like we should loop over each QVD and then reduce on the field value, but in doing so the QVW will still take up loads of RAM. Creating one QVW file per database, on the other hand, will result in more version management.
So the question: how can I create QVDs per database and then the corresponding QVW (per database), while still having one master QVW?
Daniel
I presume that for your 'one master QVW' you wish to load all the discrete database QVDs into it.
You can use a wildcard when loading QVDs. So if you have, say, three QVDs whose names start with DATABASE_, then
load * from 'DATABASE_*.qvd' (qvd);
should load all three of them.
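If you also need to know which QVD each row came from, you could tag the rows during the wildcard load, something along these lines (the field name SourceQVD is just an example):

Data:
LOAD
    *,
    FileBaseName() as SourceQVD    // name of the QVD file currently being read, without extension
FROM 'DATABASE_*.qvd' (qvd);

Note that adding the extra field means this is no longer an optimised QVD load, so it will be somewhat slower on large files.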
Best Regards, Bill Markham
Hey Bill,
the problem is I don't want to have a master QVW, because of its sheer size.
What I want is a set of QVDs per database, and one QVW that loads one database, publishes it, and then loads the next database.
If you have a Publisher license you can create QVDs without the need to create a QVW.
In the QMC you can go to System > Supporting Tasks > QVD creation.
You assign a folder that is both a mounted source folder and a user documents folder, containing the file(s) that load the QVDs. The file can be reloaded in place, so it will be visible on the AccessPoint.
This is a bit of an odd solution though; I don't consider using the same folder as both a source folder and a document folder to be best practice.
Regards, Kjeld
Daniel
I have some QVD generators that loop through various Oracle Users on various databases.
I use a spreadsheet to hold all the connect strings, then read this into QlikView and loop round it.
In the example below you'll see that I construct a variable vQVDFile that defines the name and location of the QVD file for each database on each pass of the loop.
////////////////////////////////////////////////////////////////////////////////////
Connects :
////////////////////////////////////////////////////////////////////////////////////
LOAD
DATABASE,
CONNECT,
TYPE,
ORACLEUSER
FROM
[..\..\Config\Connect.STWT.xlsx]
(ooxml, embedded labels, table is Sheet1)
where Match ( ORACLEUSER
//            , 'US_M01_T5L'
              , 'WE_M01_T4L'
//            , 'UK_M01_T4C'
            )
;
////////////////////////////////////////////////////////////////////////////////////
// Start of the Loop
////////////////////////////////////////////////////////////////////////////////////
FOR vCounter=1 to NoOfRows('Connects')
let vLastLoadStart = now() ;
let vDatabase= peek('DATABASE', $(vCounter)-1, 'Connects') ;
let vOracleUser = peek('ORACLEUSER', $(vCounter)-1, 'Connects') ;
let vConnect = peek('CONNECT', $(vCounter)-1, 'Connects') ;
let vQVDFile = '$(vSTWTFolderQVD)$(vDatabase).$(vOracleUser).$(vFileNamePrefix).qvd';
let vLogFile = '$(vFolderLog)$(vDatabase).$(vOracleUser).$(vFileNamePrefix).Log.qvd';
///////////////////////////////////////////////////////////////////////////
// trace variables
///////////////////////////////////////////////////////////////////////////
Trace ;
Trace ******************** ;
Trace vDatabase: $(vDatabase) ;
Trace vOracleUser: $(vOracleUser) ;
Trace vFolderLog: $(vFolderLog) ;
Trace vSTWTFolderQVD: $(vSTWTFolderQVD) ;
Trace vFileNamePrefix: $(vFileNamePrefix) ;
Trace vQVDFile: $(vQVDFile) ;
Trace vLogFile: $(vLogFile) ;
Trace ******************** ;
Trace ;
//////////////////////////////////////////////////////////////////////////////////////
// BEWARE: The QlikView editor will red underline this, but it is actually ok.
$(vConnect)
//////////////////////////////////////////////////////////////////////////////////////
// ................... load your data from the database here, into a table (MSO in this example)
// ................... save the QVD, drop the table & wait 5 seconds to release latches
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
STORE MSO into '$(vQVDFile)' (qvd);
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
drop table MSO ;
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
Sleep 5000 ;
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// ................... don't forget to close the loop and drop the Connects table
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
next
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
drop table Connects ;
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
Best Regards, Bill Markham
Hey Bill,
that's exactly what I was looking for, and how I considered creating the QVD files.
The second step is that I have an app that will be exactly the same for all databases, but I don't want to load all the separate QVD files into one QVW.
How would one go about letting a QVW loop through the right subset of QVDs and then publishing the corresponding QVW, with the corresponding rights, to the server?
Kr,
Daniel
Daniel
I have used Section Access to have a single QVW dashboard while only allowing specific AD groups/users to see their subset of data. This reduces the data available to each AD user to what they are allowed to see, hence alleviating issues due to the large volume of the total data.
See http://community.qlik.com/docs/DOC-1853
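As a rough illustration of the script side (the field DATABASEID, the domain and the group names below are only examples):

Section Access;
LOAD * INLINE [
    ACCESS, NTNAME, DATABASEID
    ADMIN, MYDOMAIN\QV_ADMINS, *
    USER, MYDOMAIN\CUSTOMER_A_USERS, DB01
    USER, MYDOMAIN\CUSTOMER_B_USERS, DB02
];
Section Application;

// DATABASEID must also exist as a field (with upper-case values) in the data model,
// and the '*' row means all values listed in this column, not all values in the data.
// For the reduction to happen on open, tick 'Initial Data Reduction Based on Section Access'
// in Document Properties > Opening.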
QV Publisher has its "Loop and Reduce" functionality. I have never used it though, as Section Access gives exactly the results I need.
Best Regards, Bill Markham
Interesting piece of information, but still not quite what I'm looking for.
Let's say I have 1,000 customers, each with a DB and a corresponding QVD file of 10 GB. That will result in 1000 * 10 GB * 1.25 = 12.5 TB of RAM use if everything needs to be loaded into one document and THEN restricted using Section Access. Right?
That's the issue: we simply can't load ALL the data into one file and THEN distribute it. As far as Publisher goes, I'll have another look; I only saw a reduce function, which still needs all the data loaded into one file at least once.