I have a very large transaction table in SQL Server. We run a nightly job to extract the data and load it into a QVD (22 GB). The job performs a variety of other extracts as well.
Rather than use this approach, I am considering retrieving the data directly from the SQL table instead of dumping the whole table out and then loading it into memory. My thinking is that a WHERE clause on the SQL statement would probably let SQL Server do the filtering more efficiently. I would also no longer store the data in a QVD, saving disk space, and I hope performance would improve.
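For reference, the kind of filtered direct load I have in mind looks like this (the table, field names, and date filter are just placeholders for illustration):

```
// Pull only the rows we need straight from SQL Server,
// letting the database apply the WHERE clause:
Transactions:
SQL SELECT TransactionID, TransactionDate, Amount
FROM dbo.Transactions
WHERE TransactionDate >= '2013-01-01';
```

The filtering happens server-side, so only the matching rows cross the wire into QlikView's memory.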
Are there any resources/articles that address this issue?
Do you really need the whole 22 GB QVD to be analysed at the same time? I'm not sure you do! I've built big applications with up to 500 million rows. Just because QlikView is in-memory doesn't mean you have to put everything in memory ;-)
Consider an incremental load, and split your QVD by period (for instance, one QVD per month); that way you get flexibility for analysis and better performance in the extract.
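A minimal sketch of that incremental pattern (field and file names like `Transactions`, `TransactionDate`, and the `vLastLoadDate` variable are assumptions, not your actual schema):

```
// Append only new rows from SQL Server to the existing QVD,
// instead of re-extracting the full table every night.
Transactions:
LOAD TransactionID, TransactionDate, Amount
FROM Transactions.qvd (qvd);          // optimized load of history

Concatenate (Transactions)
SQL SELECT TransactionID, TransactionDate, Amount
FROM dbo.Transactions
WHERE TransactionDate >= '$(vLastLoadDate)';  // only the delta

STORE Transactions INTO Transactions.qvd (qvd);
```

To split by period you could then STORE a filtered table per month (e.g. `Transactions_201301.qvd`) in a FOR..NEXT loop, so each app only loads the QVDs for the periods it needs.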
Also think about document chaining: not all users have the same needs, so you can optimize how the data is loaded and aggregated depending on the user typology.
Another option is Direct Discovery; see 11.2 SR5 for the latest improvements. Broadly, you mount an aggregated data set in the QV app and retrieve the detail rows on the fly via direct SQL against your database.
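A hedged sketch of what a Direct Discovery load might look like (again, table and field names are illustrative only):

```
// DIRECT QUERY keeps the measure data in SQL Server;
// only the DIMENSION values are loaded into memory,
// and aggregations over MEASURE fields are pushed to the database.
DIRECT QUERY
DIMENSION CustomerID, Region, TransactionDate
MEASURE Amount
FROM dbo.Transactions;
```

This fits the large-table case well: the 22 GB of detail stays in SQL Server, and QlikView issues aggregate queries on demand as users make selections.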