What is the compression factor of a QVD?
Where do you get the 90% compression ratio from?
As I understand Henric, a QVD stores information in much the same way as the associative data model stores it in RAM, using symbol tables and bit-stuffed pointers. This way, no transformation is needed when loading the data from a QVD file, so the load is as fast as possible.
If you store the data model to file in a QVW, some additional compression may be applied (on top of symbol tables and bit-stuffed pointers), for example if your fact table contains a sequence of very similar records.
So, in summary, you need to define what you are comparing: e.g. a CSV file as input to QlikView versus a QVD containing the same data, or a QVD versus the resident table in RAM.
Besides this, have you looked into the blog post Rob linked in his answer? Together with
Symbol Tables and Bit-Stuffed Pointers
it should give you quite a good insight into QV data storage.
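To make the symbol-table idea concrete, here is a minimal sketch (my own illustration, not Qlik's actual on-disk format): each distinct value is stored once in a symbol table, and every row then holds only a pointer packed into the fewest bits that can address the table.

```python
# Sketch of symbol-table storage with bit-stuffed pointers.
# Distinct values are kept once; rows store only small indices.

def encode_column(values):
    """Return (symbol_table, bit_width, packed_indices_as_int)."""
    symbols = sorted(set(values))                        # symbol table: each distinct value once
    index = {v: i for i, v in enumerate(symbols)}
    bit_width = max(1, (len(symbols) - 1).bit_length())  # bits needed per pointer
    packed = 0
    for row, v in enumerate(values):
        packed |= index[v] << (row * bit_width)          # bit-stuff each pointer
    return symbols, bit_width, packed

def decode_column(symbols, bit_width, packed, n_rows):
    mask = (1 << bit_width) - 1
    return [symbols[(packed >> (row * bit_width)) & mask] for row in range(n_rows)]

col = ["Sweden", "Norway", "Sweden", "Denmark", "Sweden", "Norway"]
syms, width, packed = encode_column(col)
assert decode_column(syms, width, packed, len(col)) == col
print(width)   # 3 distinct values fit in 2 bits per row
```

Decoding is just a shift and a table lookup per row, which is why no transformation step is needed at load time.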
QVDs are not compressed.
Then again, you could zip a QVD so that it would be compressed and have a compression factor. There is no way to tell what that factor would be, though, since it depends on the data values stored in the table that was saved to the QVD.
Well, I believe this strongly depends on your data.
And factor compared to what?
QVDs are optimized not for size, but for loading speed. In fact, the loading speed is really the only advantage these files have... Hence, QVDs are often bigger than the corresponding QVWs.
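If you do want an empirical "compression factor" for a given file, one way is to zip it and compare sizes, as suggested above. A small sketch using only the standard library ("data.qvd" is a placeholder path; the resulting factor depends entirely on your data):

```python
import os
import zipfile

def zip_compression_factor(path):
    """Zip a file and return original_size / compressed_size."""
    zip_path = path + ".zip"
    with zipfile.ZipFile(zip_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        zf.write(path)
    original = os.path.getsize(path)
    compressed = os.path.getsize(zip_path)
    return original / compressed   # e.g. 10.0 means the zip is 10x smaller

# factor = zip_compression_factor("data.qvd")
```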
HIC
Hey,
You said QVD's are optimised not for size but for speed.
So, should I assume that if I create a QVD, its size need not be smaller than the size of the data source from which we are extracting the data? I believed that creating QVDs compresses the data, so the size would be smaller.
Please explain.
Thanks
The qvd files are created using symbol tables and bit-stuffed indexes, but apart from this there is no real compression. No compression algorithm is used: no zip, no lzh. As a result, the qvd will most likely be smaller than the source data, but there is really no guarantee that this will be the case.
HIC
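Some back-of-envelope arithmetic (my own assumed numbers, not from this thread) shows why symbol tables and bit-stuffed indexes alone usually shrink the data even without zip or lzh: consider a column of 1,000,000 rows holding one of 10 country names, averaging 8 bytes each as text.

```python
# Compare plain text storage with symbol-table + bit-stuffed-index storage.
rows, distinct, avg_text_bytes = 1_000_000, 10, 8

csv_bytes = rows * avg_text_bytes                        # every row repeats the full text
bits_per_pointer = max(1, (distinct - 1).bit_length())   # 10 values -> 4 bits per pointer
qvd_like_bytes = distinct * avg_text_bytes + rows * bits_per_pointer // 8

print(csv_bytes, qvd_like_bytes)   # 8000000 vs 500080, roughly 16x smaller
```

The saving comes entirely from not repeating values; a column where every row is unique would see no such reduction, which is why there is no guarantee the QVD ends up smaller than the source.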
Thanks, but
1: Does the QVW compress the data by 90% when loading it from a QVD?
OR
2: Does the data get compressed by 90% when we create the QVD?