stevelord
Specialist

are inline loads less demanding than mapping loads?

Hi, I was wondering whether inline loads are less demanding than mapping loads/ApplyMaps from a server memory/processing-time perspective.  The difference is probably immaterial on a smaller scale, but I have to do one or the other on a pretty large scale for a particular task now (in terms of the number of mapping or inline tables I have to write in).

I have a task to build a QlikView document that translates 75 columns of number values into 'plain speak' on files that have 120 columns and half a million rows.  Technically it's a simple task of mapping 1=good, 2=bad, 3=ugly, though the actual messages are much more varied than that from column to column.

At most I will be making either:

-75 mapping loads, using ApplyMap() on the related fields in the table, or

-75 inline loads with the untranslated field name/values and the translated field/values, letting QlikView connect those on common field names.

I think I favor the inline load approach because I can keep a pure table of the source data at the start and let QlikView connect the inline tables on the common fields itself, and I wouldn't have to write 75 extra ApplyMap() expressions/fields above and beyond writing the tables into the script.  But I will go the ApplyMap way if it's easier on the server and has a quicker processing time.  (Either way, I will be checking how many of the fields can use the same mapping translations, just to cut down the number of tables I make.)
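Roughly what I mean by the inline approach, with made-up field names and a couple of sample rows (the real files obviously have far more columns):

// Hypothetical stand-in for the real source file, loaded untouched
SourceData:
Load * Inline [
MEMBERID, RATING
1001, 1
1002, 3
];

// One small inline table per coded field; QlikView associates it with
// SourceData automatically because they share the RATING field name
RatingText:
Load * Inline [
RATING, psRATING
1, Good
2, Bad
3, Ugly
];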

Advice welcome, thanks!

7 Replies
Gysbert_Wassenaar (Accepted Solution)

By using mapping tables and applymap you end up with a cleaner and smaller data model that will perform better. You can drop all the key fields when you've added the descriptive values to the target table(s). The mapping tables will be discarded at the end of the script automatically.
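For example, something along these lines (table and field names are just for illustration):

// Mapping table: exactly two columns; discarded automatically at the end of the script
GoodBadUglyMap:
Mapping Load * Inline [
Code, Description
1, Good
2, Bad
3, Ugly
];

// Translate the code while loading; the third parameter is the
// default returned for values not found in the map
Data:
Load
    MEMBERID,
    ApplyMap('GoodBadUglyMap', RATING, 'Unknown') as RATING
Inline [
MEMBERID, RATING
1001, 1
1002, 3
];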


talk is cheap, supply exceeds demand
stevelord
Specialist
Author

Funny story: I took the column that explains what the numbers mean for each field, removed duplicates from it, and got it down to 39 possible mapping tables to make.  The Mapping Load/ApplyMap approach may be more suitable if I just make 39 mapping load tables and apply those to the 75 fields - so I could apply a single mapping table to the five fields it is appropriate for.  The inline approach would need me to have 75 inline tables where the first field is the same as the corresponding field in the source data.  So... mapping load is less script writing for me, but still might be more demanding on the server with the 75 ApplyMap() functions.  Advice still welcome.
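Roughly what I'm picturing for the reuse, with made-up field names:

// One mapping table shared by every field that uses the same code scheme
SharedMap:
Mapping Load * Inline [
Code, Description
1, Good
2, Bad
3, Ugly
];

Data:
Load
    ApplyMap('SharedMap', FIELD1) as psFIELD1,
    ApplyMap('SharedMap', FIELD2) as psFIELD2,
    ApplyMap('SharedMap', FIELD3) as psFIELD3
Inline [
FIELD1, FIELD2, FIELD3
1, 2, 3
3, 1, 2
];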

Anonymous
Not applicable

I am a big advocate of ApplyMap() and have always found it very efficient.

stevelord
Specialist
Author

Thanks, so ApplyMap is winning in terms of less stuff for me to write, going easier on the CPU, and a cleaner data model.

Last funny story: the translation information I was given was written in such a non-uniform way that I found myself just going down the list of fields in the script and transcribing the translations as dual if statements.  It seemed more efficient to write the stuff straight into the load rather than spend time rewriting it enough to be usable in mapping loads.  (I had originally given something like a 20-hour LOE to cover cleaning up the translation tables, and now I might clear it in a couple of hours by just mentally cleaning up translations as I write them into the script...)

Would 75 dual if statements be any worse than 75 ApplyMaps?

Anonymous
Not applicable

The data will be loading from a disc somewhere, be it a database, a QVD, etc.

The disc latency will probably be the bottleneck. 

Your Duals / ApplyMaps will be done against data which has already been loaded into RAM, so they will be quick, and they will multi-thread as well.  Hence they will most likely be a lot quicker than the disc latency bottleneck.

Old saying:  The convoy goes as fast as the slowest ship.

[May not be the case for a resident load though]

stevelord
Specialist
Author

Thanks, no resident loads, just loading the file.  My dual if statements are written into the script directly below the field they relate to.

Load GENDER,
     if(GENDER=1, Dual('Male',1),
     if(GENDER=2, Dual('Female',2))) as psGENDER
From Table;

(On an older version/SR of QlikView, I found that fields made with regular if statements, while needing slightly less typing, didn't function properly in listboxes, and that dual if statements solved that.  Not sure if that's still the case.  Many/most of the actual fields translated have 4-7 values, and I like how the dual if statement can be used to control how they are listed and such too.)
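For comparison, I think the ApplyMap version of the same translation would be roughly this (untested sketch, same placeholder 'Table' as above):

GenderMap:
Mapping Load Code, Dual(Description, Code) as Description Inline [
Code, Description
1, Male
2, Female
];

Load GENDER,
     ApplyMap('GenderMap', GENDER) as psGENDER
From Table;

The Dual() lives in the mapping table instead of in the nested ifs, so the translated field should still sort and display in listboxes the way the dual-if version does.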

stevelord
Specialist
Author

And I recognize my typing is pretty bad right now.  I think I'm short on sleep or something, but at least I'm not seeing as many typos in my script as in these community posts.