I know it has been a number of months since you posted this; I was actually just made aware of this post from a colleague here at Qlik. Have you made any progress on the items and best practices mentioned in this thread? Have you sought information elsewhere regarding your questions? I am curious. If not, I will attempt to gather this information for you, or pass it along to some of our consultants to see what they think.
Hello: I am sure we are not supposed to sell on this forum - I have been a member for a long time and a Qlik Partner for 10 years, so I should really know (may I beg your forgiveness please, moderator! - this is not a habit, nor does it form any part of our marketing strategy). I could not resist responding to this, especially in view of the dearth of responses.

At QlickiT we developed the QlickiT Methodology in 2009, and we use it to ensure that all our consultants work in the same way, so it is easy to pick up on other people's work. The methodology covers: 3-tier architecture; a standardised folder structure; a standardised format for table/QVD and field names; the concatenate or link table model; simple integration of data from multiple data sources and mapping to common dimensions; integration of section access and data reduction; and simple consolidation of data from different sources (e.g. different ERPs in different divisions).

We teach this to our clients and in training courses open to the public. We have designed these courses to teach the methodology and Qlik at the same time, so you leave with an idea of where to start your life with Qlik. The data/ETL part is, of course, common to QlikView and Qlik Sense because the script is the same. At the moment we are still running the courses based on QlikView, because we have yet to finish converting the material to Sense, but we have implemented it in Qlik Sense with genuinely no changes (other than the data connectors, though you can still go to "legacy mode").

You will understand that I could not publish all the material on this forum, but there are more details on our website, where you can find details of how to contact us. This will definitely solve all your ETL problems - and, modest though we are, we do know what we are doing! As it happens, we have courses running in Rotherham, Yorkshire, UK very soon, i.e. next week, and we unexpectedly have a couple of places available if your need is urgent (Mon 1st to Wed 3rd Feb 2016).
Do get in touch: www.qlickit.co.uk. A brochure can be obtained here:
Hi Pablo et al, I have attempted to get some more insight for you on this. At the moment, I would assume it is done the same way with Qlik Sense as it was with QlikView: creating separate .qvf files to build the QVD layers. I also know that mbg has made some improvements to the Qlik Deployment Framework to include Qlik Sense. Let's see what Magnus has to say as well.
Hi Torben, and sorry for the late reply; I did not see this until Michael informed me today. As Michael responded, you can do the same in Qlik Sense as in QlikView. In QDF you use containers as the store for the QVD files (in your case, Default is a container). A container is a security boundary containing content that belongs together - this can include QVDs, CSV and XLS files, variables, scripts and more. In Qlik Sense a container is attached using a data connection (LIB), and these connections are secured using the QMC. A container includes several folders, and each folder has a corresponding global variable that is used in the load script to point out where to put the files. This is a nice way to create generic scripts that can be moved between environments (and from QlikView) without breaking code.
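To make the container idea concrete, here is a minimal load-script sketch of the pattern: folder paths are held in global variables, and scripts only ever reference the variables. The connection name (lib://Demo_Container), the folder names, and the variable names below are illustrative assumptions - the actual variables are defined by your QDF initialisation script and may differ.

```qlik
// Illustrative only: in QDF an init script typically sets one
// global variable per container folder, e.g.:
SET vG.ExtractPath = 'lib://Demo_Container/2.Extract';
SET vG.QVDPath     = 'lib://Demo_Container/3.QVD';

// Load scripts then reference the variables instead of hard-coded
// paths, so the same script works unchanged after being moved
// between environments (or from QlikView):
Orders:
LOAD OrderID,
     CustomerID,
     Amount
FROM [$(vG.QVDPath)/Orders.qvd] (qvd);
```

Because only the data connection behind the variable changes between Dev, Test and Prod, the script itself never needs editing when it is promoted.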
Hope that this helps
To your questions:
Should we create a QVF called Extract, and then put this as the first job in our Task list? It is better to have a naming standard for extract QVFs (e.g. DataSource_Extract).
Should we create a separate Stream for these files, and who should access them? Create a stream named after the container, for the QVFs that relate to that container.
I would also create a separate data connection to allow self-service users to access these files. That depends on the security you give the connection; for self-service, I think QVD extracts are the better option.
Is there a preferred location and structure to store QVDs? As mentioned above, use QDF.
Is there a point in using a Content Library for QVDs? Because it can be synced across clusters, it could be convenient, but I'm unsure how it can handle large, frequently updated files. No, the Content Library is not a good place to put QVD files.
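Putting the answers above together, a sketch of what a naming-standard extract app might contain: it reads from the source and stores raw tables as QVDs for downstream apps and self-service users to load. The app name (SalesDB_Extract), connection names and file paths here are assumptions for illustration, not a prescribed layout.

```qlik
// --- SalesDB_Extract.qvf: extract layer (illustrative names) ---
// Pull the raw table from the source connection:
Orders:
SQL SELECT OrderID, CustomerID, Amount, OrderDate
FROM dbo.Orders;

// Store it as a QVD so transform apps and self-service users
// can load it without touching the source system:
STORE Orders INTO [lib://QVD/Extract/Orders.qvd] (qvd);
DROP TABLE Orders;

// --- A transform/app-layer QVF would then load it back: ---
// Orders:
// LOAD * FROM [lib://QVD/Extract/Orders.qvd] (qvd);
```

Scheduling the extract QVF as the first task, with the transform and app-layer reloads chained after it, gives the layered flow discussed in this thread.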