Hi,
When I saw the comparison video on YouTube between QlikView and an OLAP cube, I tried the movie example in an Excel pivot table, and I found that I can do what QlikView does.
The question (which refers to the limitation of OLAP) was: in the movie database, which films were done by both Edward Sutherland and Ralf Harolde?
I can answer this question in the pivot table just by rotating the cube and putting the films in the columns and the actors in the rows.
So the question is: how is QlikView different from an OLAP cube in exploring data?
Thanks
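The associative question above boils down to a set intersection. As a minimal sketch in plain Python (the cast table and film titles here are invented for illustration; the real movie database would supply them):

```python
# Hypothetical cast table of (film, person) pairs, invented for illustration.
cast = [
    ("Film A", "Edward Sutherland"),
    ("Film A", "Ralf Harolde"),
    ("Film B", "Edward Sutherland"),
    ("Film C", "Ralf Harolde"),
]

def films_by(person, cast):
    """Return the set of films a given person worked on."""
    return {film for film, p in cast if p == person}

# Films involving both people: intersect the two sets.
both = films_by("Edward Sutherland", cast) & films_by("Ralf Harolde", cast)
print(sorted(both))  # prints ['Film A']
```

A pivot table answers the same question by putting films on one axis and people on the other; QlikView's associative model effectively does this intersection for you as you click.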
Hi Harm,
The reason why I did the modelling inside QV was simple: I didn't have the option to use a data warehouse.
I had two disparate systems and no developer resource time. I also needed to get to 'answers' really quickly, so I actually went down the route of pulling data into QV without knowing what my final model would look like. I would create star schemas some of the time, but not always. (In an ideal QV world, as I understand it, you manage to get to just one consolidated table - ApplyMap() is really useful here!) Sometimes I had two large tables with a small linking table (like date) - so no effort was spent rationalising the data, but I was still able to see different dimensions play out alongside each other in a very loosely associated way.
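For readers unfamiliar with ApplyMap(): it looks up a key in a preloaded mapping table while loading a fact table, with a default for missing keys, which helps fold lookups into one consolidated table. A rough Python analogue (the table contents and field names below are invented for illustration):

```python
# Mapping table, as a QV MAPPING LOAD would hold it: ProductID -> Category.
category_map = {1: "Drama", 2: "Comedy"}

# Fact rows being loaded; IDs and fields are hypothetical.
sales = [
    {"ProductID": 1, "Amount": 10},
    {"ProductID": 3, "Amount": 5},
]

# ApplyMap('CategoryMap', ProductID, 'Unknown') behaves like
# dict.get(key, 'Unknown'): enrich each row during the load,
# so the lookup table never needs to exist as a separate joined table.
for row in sales:
    row["Category"] = category_map.get(row["ProductID"], "Unknown")

print(sales)
```

The design point is the same as in the post: you avoid an extra join/link table by resolving the lookup at load time.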
I do agree with your method (and that things like visual ETL and tools people already know are really valuable), and actually, if I had had more resources, I would have scaled to a proper data warehouse, just as you describe - but there's effort, time and investment there. So, for me, it was about getting the QV data models going, playing about with the results visually, then going back and rebuilding based on new insights. Oh, also, we had so much development going on that the OLTP databases themselves were often changing - so I had to consider speed of output as more important than ongoing maintenance.
Dave
Thanks again for your insightful answer.
OK, I understand now. So QV is really good for getting fast answers and insights from the data. I do think QV can be very helpful, thanks to the easy connections, for setting up a quick dashboard and showing some of the possibilities. When the client likes it, a more refined model and infrastructure can be created.
I am sorry if I'm boring you, but how often do you update the QV data files? Ideally I want to create a situation with a data refresh of max 5 minutes.
I use a mix of load times, very much depending on the dashboard and therefore the target audience. For ops data that needed to be near real time, I worked with 15-minute loads. I found that was satisfactory for the purposes in question.
I guess for me the answer to how often a reload should happen is that it depends on what you need. If you need to track a specific KPI in near real time and it's a quick load, then you could do it more frequently than every 5 minutes - but do bear in mind how the reload CPU usage will compete with dashboard usage on your server if you only have a few CPUs.
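As a rough sanity check on the reload-vs-dashboard trade-off, you can estimate what fraction of server CPU capacity the reloads would consume. A small sketch (all the figures here are hypothetical, not measurements from the thread):

```python
# Hypothetical figures: a reload taking 60 s on 2 cores, run every
# 5 minutes, on a 4-core server shared with dashboard users.
reload_seconds = 60
reload_cores = 2
interval_seconds = 5 * 60
total_cores = 4

# Fraction of total CPU capacity spent on reloads:
# (core-seconds used per cycle) / (core-seconds available per cycle).
busy_fraction = (reload_seconds * reload_cores) / (interval_seconds * total_cores)
print(f"Reloads use {busy_fraction:.0%} of CPU capacity")  # prints "Reloads use 10% of CPU capacity"
```

If that fraction gets large, dashboard users will feel it, which is the point made above about only reloading the small, specific KPIs at high frequency.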
Hi
There will be multiple usage cases. Most of the data for "free exploring" and the KPIs are acceptable when updated once every few hours. Some other specific production metrics should be updated every 5 minutes or so. These KPIs are indeed very specific and therefore small in size.
I think it is best to create a measurement plan with the KPIs, the underlying required data tables and their update rates, plus a plan with the update frequency for the undefined data used for additional exploring. Indeed, it is best to minimize the real-time data and free up the CPUs as much as possible.
Thanks,
Harm