What about a billion records, or even more?
Btw., the fastest load from Hadoop into QlikView is loading flat files from HDFS. This is unbeatable for speed, but also somewhat inflexible.
After that, a JDBC connection to Hadoop (via Hive 0.8.0) gives more flexibility - like any SQL database - but at reduced performance. I presume it's still much faster than a DataRoket web service connection. And there's no limit on the size of the record sets!
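For illustration, the two load paths above might look roughly like this in a QlikView load script. This is only a sketch: the mount path, DSN name, table, and field names are hypothetical, and it assumes HDFS is exposed as a file path (e.g. via an NFS/FUSE mount) and that a Hive ODBC/JDBC data source is configured on the QlikView machine.

```
// Path 1: flat file straight from HDFS (fastest, least flexible).
// [\\hdfs-mount\...] is a hypothetical mount point exposing HDFS as files.
Sales:
LOAD OrderID,
     CustomerID,
     Amount
FROM [\\hdfs-mount\warehouse\sales_export.csv]
(txt, utf8, embedded labels, delimiter is ',');

// Path 2: SQL through Hive (more flexible, slower).
// 'HiveDSN' is a hypothetical data source pointing at Hive 0.8.0.
ODBC CONNECT TO HiveDSN;

SalesFiltered:
SQL SELECT orderid, customerid, amount
FROM sales
WHERE amount > 1000;
```

The trade-off is visible in the script itself: the flat-file path can only take the file as exported, while the Hive path lets you filter and project before the data ever reaches QlikView.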
I would like to see your case study on loading a billion records into QlikView in a single load.
The case study mentioned above pulled 800M rows of data from 44 BO Universes in under 2 minutes, of which QlikView queried a refined 5M rows for immediate analysis. This was against a production environment, on revenue data, during business operational hours. DataRoket queried and loaded data 10x faster from the BO Universes than InfoView itself.
We have tested on a 2B-row BO environment with the same respective results. So far we have found no limitations on scaling to data volume, number of universes, or users - and all on low-end hardware, with no QlikView performance issues.
Our case study on Hadoop is that we feed data almost 2x faster than the innate Hadoop transfer rate, associating data from 8 separate, disparate data sources - that is, feeding HBase, Cassandra, and Hive. Connecting to Hadoop has the same results: pulling from HDFS faster than Hadoop itself can, again with the ability to associate any data with the Hadoop data, and again on low-end hardware.
Testing and benchmarking soon on high-end hardware to see where our limitations are...