Hi All - just a quick question about using ODBC. I pull in several tables from an ODBC data source. A few of the tables are quite large and I'd rather not pull in excess data. Would anyone be able to share some best practices on how to limit the data, whether it's in the script or some other procedure behind the scenes??
Really appreciate the input!
Thanks! George
As Clever suggested, you might want to look into incremental loads. Or, if you are just concerned with specific years, use something like:
LOAD field1,
     field2,
     ...;
SQL SELECT field1, field2, ...
FROM yoursource
WHERE Year = '2016';
Something like this?
I would limit the data in the script. You can use table joins with WHERE clauses and pull only the columns that are necessary.
It's not just the columns I'm concerned with... for instance, one table has data all the way back to 2013. If I wanted just 2016, would you suggest using the same method to extract? I'm a QlikView rookie learning on the fly.
Please check the Qlik documentation about "Incremental Load"; that's what you need.
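For reference, a bare-bones incremental load looks roughly like this (the table, field, and file names here are just placeholders for illustration, and it assumes a full load to the QVD was done once before):

```
// Find the most recent date already stored in the QVD
MaxDate:
LOAD Max(OrderDate) AS MaxOrderDate
FROM [Orders.qvd] (qvd);

LET vMaxDate = Peek('MaxOrderDate', 0, 'MaxDate');
DROP TABLE MaxDate;

// Pull only the new rows from the database
Orders:
LOAD OrderID, OrderDate, Amount;
SQL SELECT OrderID, OrderDate, Amount
FROM Orders
WHERE OrderDate > $(vMaxDate);

// Append the rows that were already in the QVD, then overwrite it
Concatenate (Orders)
LOAD OrderID, OrderDate, Amount
FROM [Orders.qvd] (qvd);

STORE Orders INTO [Orders.qvd] (qvd);
```

The real script needs to handle updated/deleted rows too if your source allows them; the Qlik documentation covers those variants.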
Dear George,
I would recommend developing the QlikView application in a 2-tier or 3-tier layered architecture.
The first layer should be a DB-fetch layer, where we load the data from the ODBC connection without any transformation in the script and store each table in a QVD.
In this layer we can perform an incremental load on the transaction tables to reduce the load time,
and comment out the fields that are not required in the dashboard.
The second layer should be a transformation layer, where we load the data from the QVDs generated in the DB-fetch layer, prepare the data model from them, and again store each table in a QVD.
The third layer should be the application layer, where we load the optimized QVDs generated in the second layer and do the front-end application development.
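A minimal sketch of the first (DB-fetch) layer might look like this; the data source, table, field, and path names are invented for illustration:

```
// Layer 1: extract only, no transformation
ODBC CONNECT TO MyDataSource;

Customers:
LOAD CustomerID,
     CustomerName; // comment out fields not needed in the dashboard
SQL SELECT CustomerID, CustomerName
FROM Customers;

STORE Customers INTO [QVD\Customers.qvd] (qvd);
DROP TABLE Customers;
```

The transformation and application layers then read only from the QVDs, so the database is hit once per reload cycle.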
Thanks,
Mukram.
The other responders are correct.
In an initial query, you'll want to pull the data you need from your ODBC connection and store it into a QVD.
Then you can use the incremental load process that Clever pointed to, so you only query smaller amounts of data and add them to your QVD.
Then in your apps, pull the data from your QVD. Be sure to use an 'optimized' QVD load to make your reloads fast.
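A QVD load stays 'optimized' as long as the LOAD doesn't rename or transform any fields, and any WHERE clause is limited to a single Exists(). For example (the field and file names are placeholders):

```
// Load the allowed values first, then filter the QVD with Exists().
// This keeps the QVD read optimized.
YearFilter:
LOAD * INLINE [
Year
2016
];

Facts:
LOAD *
FROM [QVD\Facts.qvd] (qvd)
WHERE Exists(Year);

DROP TABLE YearFilter;
```

Adding any calculation, rename, or other WHERE condition drops the load back to a (much slower) standard QVD load, so it can be worth doing transformations in a separate layer.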
Thank you all for your input!!!!!
IMHO, let the db engine do what the db engine is best at and let QV do what QV is best at. It depends on your definition of 'excess data' (is it a rolling last 3 years you want, or a set date?).
I usually create a variable like 'StartOfTime' and have the script calculate it, for instance:
LET StartOfTime = Date(MakeDate(Year(Today() - (3*365)), 01, 01), 'YYYY-MM-DD');
will calculate Jan 1st three years ago (Date() with the 'YYYY-MM-DD' format, rather than Num(), so the value expands in a form the database can parse). Then I use this in the WHERE part of the SQL statement that gets sent to the ODBC database, i.e. trunc(e.start_date_time) >= to_date('$(StartOfTime)','YYYY-MM-DD').
Also, I try to combine the tables and data in the SQL statement rather than pulling in all the rows and columns from all the tables I need and combining them in QV. This can seriously limit the amount of data you are pulling through your network connection, and you have the db engine doing the work for you instead of QlikView during load time.
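For example, a join pushed down to the database might look like this (the table and column names are invented for illustration, and it reuses a StartOfTime variable formatted as 'YYYY-MM-DD'):

```
// Let the database do the join and the filtering;
// only the combined, filtered result set travels over the network
Sales:
LOAD OrderID, CustomerName, Amount, OrderDate;
SQL SELECT o.order_id      AS OrderID,
       c.customer_name AS CustomerName,
       o.amount        AS Amount,
       o.order_date    AS OrderDate
FROM orders o
JOIN customers c ON c.customer_id = o.customer_id
WHERE o.order_date >= to_date('$(StartOfTime)','YYYY-MM-DD');
```

Compared to loading both tables fully and joining in QlikView, this pulls one already-reduced table across the connection.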
HP