Performance tuning...
I have a big, very big fact table, ~1 billion rows, filled with highly granular data.
No dimension tables, just one big fact with ~20 fields.
This data is used in one chart object and, obviously... it chokes.
How would you recommend modifying the ERD, or performance tuning the object?
It's basically a fact with a key of period-subscriber-promotion containing event data.
It was not meant for QV, but will splitting that giant table into 3 dims and one event-based fact help performance?
After all... there will still be ~1 billion events.
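Something like this is what I have in mind (a rough sketch; the field and file names below are made up, the real table has ~20 fields):

// Hypothetical names throughout, for illustration only.
// Slim fact: keep only the key fields and the numeric measures.
Fact:
LOAD
    Period,
    SubscriberID,
    PromotionID,
    EventAmount        // ...plus the other event measures
FROM EventData.qvd (qvd);

// Peel the descriptive attributes off into dimension tables;
// QlikView associates them to the fact on the shared key fields.
Periods:
LOAD DISTINCT
    Period,
    PeriodStartDate
FROM EventData.qvd (qvd);

Subscribers:
LOAD DISTINCT
    SubscriberID,
    SubscriberName,
    SubscriberSegment
FROM EventData.qvd (qvd);

Promotions:
LOAD DISTINCT
    PromotionID,
    PromotionName,
    PromotionType
FROM EventData.qvd (qvd);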
thanks.
It's hard to give detailed advice without knowing your data model and objects, but here is a link to a thread with some general recommendations:
Yeah... I'm talking about 1 BILLION rows... not 30 million...
I need something more powerful.
We are going to implement Hadoop soon as we transition to big data, but I have no idea what kind of QV data models are equipped to handle this much data.
The suggestions in the thread you referenced are good practice for small-to-medium data sets.
On the hardware side, we have servers running 24+ CPUs and 256GB+ RAM...
We have run tests with this amount of data, and although we have shown that it is theoretically possible, you need to be aware of some things that can invalidate the theory...
Also, read http://community.qlik.com/blogs/qlikviewdesignblog/2012/11/20/symbol-tables-and-bit-stuffed-pointers to better understand some of the QlikView internals.
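For example, each field stores its distinct values once in a symbol table, and every row holds only a bit-stuffed pointer into that table, so fewer and shorter distinct values mean less memory per row. Two common ways to exploit that (field names here are made up):

Fact:
LOAD
    // A timestamp with close to a billion distinct values is expensive.
    // Split it into a date part and a time part rounded to the second:
    // two small symbol tables instead of one huge one.
    Date(Floor(EventTimestamp))                as EventDate,
    Time(Frac(Round(EventTimestamp, 1/86400))) as EventTime,

    // Replace a wide composite string key with a sequential integer,
    // which packs into very few bits per row.
    AutoNumber(Period & '|' & SubscriberID & '|' & PromotionID) as EventKey,
    EventAmount
FROM EventData.qvd (qvd);   // hypothetical source file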
Load 5% of the data and see how much memory QlikView uses and what the response times are. Extrapolate to get a feeling for what you will get and what you'll need when you load all data.
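A quick way to do that is the Sample load prefix, which keeps each record with the given probability (the source name below is made up):

// Load roughly 5% of the rows into a test document.
Fact:
Sample 0.05
LOAD *
FROM EventData.qvd (qvd);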
HIC