Hi All,
Please see the data model I currently use in my dashboard. I am using an Oracle database, and I have used the reference date method to assign a date range to each employee, from employment date to leaving date. This creates around 12 million records in one table. My concern is that when I move between tabs in my dashboard, some tabs fail to load quickly and take minutes, while others load very fast.
Can anyone please advise whether there is an issue with my data model, how such an issue can arise in QlikView, and how to solve it?
The number of records increases day by day, and I am concerned that the dashboard will become even slower as a result.
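For context, a reference-date expansion of this kind is usually built with an iterative load. The sketch below is a minimal, hypothetical version of such a script; the table and field names (Employees, EmployeeID, EmploymentDate, LeavingDate) are illustrative, not taken from the actual model.

```
// Sketch: one row per employee per day between employment and leaving date.
// This is what typically produces the ~12M-row table described above.
EmployeeDates:
LOAD
    EmployeeID,
    Date(EmploymentDate + IterNo() - 1) AS ReferenceDate
RESIDENT Employees
WHILE EmploymentDate + IterNo() - 1 <= LeavingDate;
```

Each source row is iterated once per day of employment, so row counts grow with both headcount and tenure.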
Hi,
Are you creating a date link for each date that an employee is employed? Is there a need for that level of follow-up? What if you did this at a month level instead? That would reduce the amount of data by roughly 30 times (12M / 30 = 400K).
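A month-level version of the expansion could look like the sketch below, assuming the same hypothetical table and field names as the daily variant; `MonthStart()` with an offset steps through the months of employment.

```
// Sketch: one row per employee per month instead of per day.
// DISTINCT guards against duplicates from partial first months;
// names are illustrative.
EmployeeMonths:
LOAD DISTINCT
    EmployeeID,
    MonthStart(EmploymentDate, IterNo() - 1) AS ReferenceMonth
RESIDENT Employees
WHILE MonthStart(EmploymentDate, IterNo() - 1) <= LeavingDate;
```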
Hi Niclas,
I appreciate your response a lot. Yes, I have created a date link for each date that an employee is employed.
Actually, the client wants the counts on a weekly basis, not daily, so your suggestion looks good and I will try it as well. But may I know: is it not feasible to handle such an amount of data in QV? Or is there anything we should consider to improve the performance of the dashboard, in terms of hardware, the data model, etc.?
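Since the requirement is weekly counts, the same iterative pattern can step by week rather than by day. This is a hedged sketch with illustrative names; `WeekStart()` snaps a date to the beginning of its week.

```
// Sketch: one row per employee per week of employment.
// Reduces the daily expansion by roughly a factor of 7.
EmployeeWeeks:
LOAD DISTINCT
    EmployeeID,
    Date(WeekStart(EmploymentDate) + 7 * (IterNo() - 1)) AS ReferenceWeek
RESIDENT Employees
WHILE WeekStart(EmploymentDate) + 7 * (IterNo() - 1) <= LeavingDate;
```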
12 million rows is not that high a row count; performance depends on multiple things.
If you don't already, start using Rob's Document Analyzer to optimize the data model: http://qlikviewcookbook.com/tools
I also suggest looking at how the history is projected to grow. If the number of records will soon reach some ridiculous hundreds or thousands of millions of rows, then a few options are:
- restrict history in some way that makes sense (e.g. the last 2 years)
- split the app and use document chaining, e.g. per division
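Restricting history can be done with a simple load-time filter. The sketch below assumes a daily-expanded table named EmployeeDates with a ReferenceDate field, as in the earlier hypothetical script; the names are illustrative.

```
// Sketch: keep only the last two years of reference dates at load time.
// AddYears() offsets today's date backwards by two years.
EmployeeDatesRecent:
NOCONCATENATE LOAD *
RESIDENT EmployeeDates
WHERE ReferenceDate >= AddYears(Today(), -2);

DROP TABLE EmployeeDates;
```

Filtering in the script (rather than hiding old data in the UI) keeps the in-memory data model small, which is what actually affects tab response times.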
Thanks a lot, Niclas and Dilip. Both suggestions helped a lot. I managed to reduce the data redundancy and learnt something new from the links provided.