Hello Goran - please see the link below as an example of what you speak of. This is not done with QlikView directly, but with QlikView Expressor and its Extension SDK.
(1) - Example and Sample
(2) - Introduction
Extending QlikView Expressor with the New Extensions SDK
(3) Samples and Tutorials
Hope you find it interesting
Senior Technical Product Marketing Manager
QlikView and QlikView Expressor
Follow me - @mtarallo
mtarallo: Interesting indeed, but this would work only during the reload phase - correct?
Or would the Expressor extension be available also when accessing the QV app via the AJAX client?
The use case I am looking into is for end users to kick off queries to semi-big-data databases (primarily Elasticsearch, but maybe Cassandra or similar systems too) from within the AJAX client of a QV app.
The QV app might also gather some data from ES during reload, so that is interesting too - but without the possibility for end users to start queries from the AJAX client, the whole idea dies.
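To make the use case concrete, here is a rough sketch of what the AJAX-client side of that could look like: a small helper that builds an Elasticsearch query body from a user's selection. The field name, index name, endpoint and row cap below are all made-up placeholders, not anything from a real app.

```javascript
// Hypothetical helper for an AJAX-client extension: build an
// Elasticsearch query body from a user selection. The field name
// and the row cap are illustrative placeholders.
function buildEsQuery(field, value, maxRows) {
  return {
    size: maxRows,                       // cap the result set (e.g. 10-25k rows)
    query: { term: { [field]: value } }  // exact-match filter on the selected value
  };
}

// Inside the extension you would POST this to the cluster, e.g.
// (host, index and field are assumptions):
//   fetch("http://es-host:9200/logs/_search", {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify(buildEsQuery("region", "EMEA", 25000))
//   }).then(function (r) { return r.json(); });
```

The extension would then render the returned hits itself, since (as discussed below) the result cannot be pushed into QV's own data model from the client.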
Yes, currently if you want to inject data back into the data model, that is usually done during the reload phase. There is the possibility to leverage dynamic updates, but it comes with a hefty performance cost.
This is also why you get the performance you do with QV: we index the data and create our symbol/bit pointers for you. To be able to create that on the fly in real time, we would have to re-index the entire data model on every insert to detect whether the injected values already existed in the data model.
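To illustrate why that re-indexing is expensive, here is a toy model of symbol-table indexing: each distinct value is stored once, and every row holds only a small pointer into that table. This is my own simplification for illustration, not QV's actual implementation - but it shows why inserting a value that may or may not be new forces a lookup against (and possibly a rebuild of) the whole index.

```javascript
// Toy model of symbol/pointer indexing (a simplification, not QV internals):
// distinct values live once in a symbol table; each row stores only an index.
function buildIndex(values) {
  var symbols = [];   // distinct values, in first-seen order
  var lookup = {};    // value -> position in symbols
  var pointers = values.map(function (v) {
    if (!(v in lookup)) {       // first time we see this value?
      lookup[v] = symbols.length;
      symbols.push(v);          // grow the symbol table
    }
    return lookup[v];           // row stores a small integer, not the value
  });
  return { symbols: symbols, pointers: pointers };
}
```

Injecting one row at query time would mean running this bookkeeping live against the full model, which is the performance cost described above.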
With all that said, yes, you could 'bring' data into QlikView. The Google Big Query extension is a typical example of calling an external data source, an extremely big one at that, and then visualizing that data in QV. It doesn't leverage the native charts, however. We have discussed and looked into the possibility of server-side extensions that could create hypercubes on the fly and leverage the standard components of QV, which would be _very_ interesting. You never know what might appear on the horizon, so keep your eyes open.
Also be wary of perceived performance: when you put something semi-fast into a blazingly fast application like QV, almost any query time will seem slow in comparison.
Google Big Query demo: Qlik Demos: See QlikView in Action | Demo.Qlik.Com
Ah, it makes sense that data in QV would need to be re-indexed on the fly - that could certainly be a performance penalty.
Still, if the result set is just a few tens of thousands of lines (which is what I envision), that re-indexing might still be ok.
To sum up where I am going with this: using a big-data/semi-big-data source (let's say 10 billion lines) that offers fast query results (less than 10 seconds), with result sets that are 10-25k lines in size.
But if I understand you correctly you are saying that it is today not possible to inject data from an external source into QV's data model? Or is it as easy as just assigning values to Qv.Document.Object.Data.Rows, instead of reading from that object/field?
... and yes - I am eagerly keeping my eyes on the horizon too!
Yeah, it's not possible to do without DD (Direct Discovery) at the moment if you want access to QV's associative logic.
If you were able to load the dimensional values beforehand, you could use that as a base for your extension, similar to how the Google Big Query extension operates, but the data is not persistent, meaning we can't run it through the inference engine to find associations.
But if your underlying data source has adequate query speed, then I guess it's less of a problem if you have to update the data set to reflect changes. Think of it as a traditional drill-out.
Yes, you could, but performance and section access go out the door then.
My best bet would be to utilize the Direct Discovery feature and read straight from a data source.
In theory I guess that would work with a custom connector although I have never tried it.
You should be able to reference the fields that were brought in by DD from an extension.
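For completeness, a Direct Discovery load is declared in the load script roughly as below (table and field names are invented for illustration). The DIMENSION fields are pulled into memory and take part in the associative model, while the MEASURE fields stay in the source database and are queried on demand - which is why an extension can still reference them.

```
// Sketch of a Direct Discovery table load; names are hypothetical.
DIRECT QUERY
    DIMENSION ProductID, Region   // loaded into memory, associative
    MEASURE   SalesAmount, Qty    // left in the source, queried on demand
    FROM SalesTable;
```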