Not applicable

Optimization techniques for QlikView

Hi,

I have a large database with about 20 crore records for every sales org, and I have around 3,500 sales org IDs. I'm fetching the data sales org by sales org and building a QVD for each one. Every QVD for a sales org takes 6-8 hours, so overall it takes about 18 days to reload the full application, and sometimes the reload task fails because it runs so long and I have to start it again.

What should I do in this case? How can I optimize it?

1 Reply
Not applicable
Author

I'm not sure what you mean by "20 crores records for every sales org." It may be a term I'm just not familiar with, or it may be a misspelling.

Assuming that "crores" means a gigantic amount of data that takes an enormous amount of time to bring across the network and save to disk, I would suggest you use an incremental load approach if possible. By "if possible" I mean your data has some type of timestamp that says "I was last modified at this date and time," or your data only ever gets added to and you can use some type of primary key ID value that increments. Here is the basic workflow of an incremental load; you can find many more details in QlikView books:

Step 1: Do a full initial load of your data (one time only), record some type of variable that says "the last date and time I read data was X," and store that data into a QVD file.
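A minimal sketch of this step in QlikView script. The table Sales, the fields ID, SalesOrg, Amount and ModifiedDate, the file Sales.qvd and the variable vLastReload are all hypothetical names for illustration; substitute your own.

// One-time full extract from the source database.
Sales:
SQL SELECT ID, SalesOrg, Amount, ModifiedDate
FROM Sales;

// Remember when this extract ran. In practice, persist this value
// (e.g. in a small QVD or text file) so the next reload can read it.
LET vLastReload = Timestamp(Now());

STORE Sales INTO Sales.qvd (qvd);
DROP TABLE Sales;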

Step 2: Select the data from your data source with a WHERE clause that says "only give me the data since the last time I did an extract" (meaning the data that has changed). Now you have in memory the set of "changed" records.
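Continuing with the same hypothetical names, the delta query might look like the following; the exact timestamp literal format depends on your database.

// Pull only the rows modified since the last extract.
Changes:
SQL SELECT ID, SalesOrg, Amount, ModifiedDate
FROM Sales
WHERE ModifiedDate >= '$(vLastReload)';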

Step 3: Concatenate onto that in-memory "incremental data set" all of your existing data from the QVD file. When you do the Concatenate LOAD statement you must add a where clause that says "where not exists (your primary key value)." You do this because, let's say you have an ID value of 1303 that you stored last month. That record has since changed, so 1303 is now in your in-memory set, and you do NOT want to load the old 1303 from your QVD.
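In script form, that concatenation might look like this (same hypothetical names). As a side note, a where clause consisting of a single-field Exists() test is, as far as I know, the only kind that can keep a QVD read optimized, so it is worth checking whether your version preserves that with Not Exists().

// Append the unchanged history from the QVD, skipping any ID
// that is already present in the in-memory Changes table.
Concatenate (Changes)
LOAD ID, SalesOrg, Amount, ModifiedDate
FROM Sales.qvd (qvd)
WHERE NOT Exists(ID);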

Step 4: This is an optional step depending on your environment and the rules surrounding your data. If your database allows records to actually be deleted, you MUST add logic to handle that. Say record 1305 was loaded initially but gets deleted this week. Step 2 pulls the data changes but has no way of knowing that 1305 was removed, so step 3 would happily load the 1305 you had stored in your QVD, because it was there last week. Now you have to figure out a way to remove it. But how? Here is where an INNER JOIN helps your cause. Issue another query that loads just your primary key values, so you get the complete list of IDs currently in the database; 1305 isn't going to be in that list. Then do an inner join from that list to your RESIDENT full set of data. The nature of an INNER JOIN is that a record must exist on both sides, so 1305 is removed from your master set of data.
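A sketch of the deletion handling, again with the hypothetical Sales names:

// Fetch the full current list of primary keys from the source and
// inner join it onto the combined in-memory table; any ID that no
// longer exists in the database (e.g. 1305) is dropped.
Inner Join (Changes)
SQL SELECT ID
FROM Sales;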

Step 5: Save your "incrementally enhanced" data set back to the QVD file. You have now added the changes since last time and removed any data that is no longer in the database (if that was a possibility).
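And the closing step in script, writing the result back and moving the bookmark forward for the next run:

// Overwrite the QVD with the merged data set and record the new
// high-water mark (persist vLastReload as noted in step 1).
STORE Changes INTO Sales.qvd (qvd);
LET vLastReload = Timestamp(Now());
DROP TABLE Changes;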

This is only a rough sketch for you. If you want to pursue an incremental load, I might suggest the book Mastering QlikView, which covers this topic in depth and contains a wealth of other great ideas, or Qlik Sense for Beginners, which is where I first read the GENIUS idea of using an inner join to remove the old values; that had been my stumbling block in some data sets.

Best of luck