Let me preface this by saying that I'm new to QV.
I want to join the SAP tables BSEG and FAGLFLEXA (stored in separate QVDs) on Document Number (BELNR) in QV. This is definitely possible (tested with a limited row count), but it takes QV over an hour to join the collective 130M rows. Can I make an AutoNumberHash128 of each BELNR field to expedite the join? It doesn't seem to help much.

Another hurdle I'm facing: there are often several debits and credits associated with each accounting document number (BELNR), so the document # is repeated inside the QVD. I'm aware that QV stores each distinct value only once in RAM. Will AutoNumberHashing the document #'s defeat that and create a memory reference for every instance? My .qvw is already blown up like a balloon. Is there a function that creates a single unique hash per unique document # in the QVD? Any suggestions would be much appreciated. I've been stuck here for 20 minutes now:
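For reference, the script is roughly this (a sketch from memory; the QVD paths and extra field lists are placeholders):

```
// Current approach (sketch); paths and field lists are placeholders.
BSEG:
LOAD AutoNumberHash128(BELNR) AS %DocKey,
     BELNR
     // ... plus the other BSEG fields needed ...
FROM BSEG.qvd (qvd);

LEFT JOIN (BSEG)
LOAD AutoNumberHash128(BELNR) AS %DocKey
     // ... plus the FAGLFLEXA fields needed ...
FROM FAGLFLEXA.qvd (qvd);

// Note: any expression in the LOAD (including AutoNumberHash128)
// disables the optimized QVD read, which by itself slows the load down.
```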
The join hasn't even commenced. I'm assuming QV is still working out exactly where to perform the joins.
I should have mentioned that I'm concatenating fields [(CC.BSEG & '_' & PC.BSEG & '_' & CE.FAGLFLEXA) AS ....] to match a mapping file needed for the creation of a financial statements report. As far as I know, it's absolutely necessary to join the tables in order to combine fields from two different tables in a concatenation like this.
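In other words, after the join the key is built something like this (a sketch; CC, PC and CE stand in for the real SAP field names):

```
// After BSEG and FAGLFLEXA are joined, build the key that matches
// the CC_PC_CE column in the mapping file (field names are placeholders).
Result:
LOAD *,
     CC & '_' & PC & '_' & CE AS %MapKey
RESIDENT BSEG;
DROP TABLE BSEG;
```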
You can use a few columns from FAGLFLEXA as a lookup (mapping) table during LOAD. But that depends on how many concatenations you need, and how complex they are.
Big data sets always lead to compromises...
I'll give that a try and see if I can't figure out how to make that work. Essentially, the three fields [(CC.BSEG & '_' & PC.BSEG & '_' & CE.FAGLFLEXA) AS ....] match a column [....] in the mapping file. I'm pulling in the three fields from BSEG and FAGLFLEXA. The exact same data for the three fields is in the mapping file in the format CC_PC_CE. This is the only concatenation I need to be able to link the mapping and the pulled fields. I'll look into mapping loads and mark your response as correct if I can figure out how to get this to work. Thanks a lot for responding.
This is my first attempt ever at using MAPPING LOAD. The error on reload says a 2-column input is required. I'm not quite sure what I'm missing. Please help me make it go away.
I don't know where my head was last night, but thinking about your case a little longer made me realize that there may be a "simpler" solution.
BSEG is a transaction table. It doesn't change, it only grows. Why reload those 64 million lines over and over again? Wouldn't it be better to load BSEG incrementally (just the new records) and JOIN the extra field(s) onto the increment, before adding the result to a backup BSEG copy that you keep in a second-level QVD?
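The pattern looks roughly like this (a sketch; the variable, the change-date field CPUDT, the CE field name and the QVD names are all assumptions):

```
// 1. Load only the new BSEG records since the last reload.
BSEG_New:
LOAD *
FROM BSEG.qvd (qvd)
WHERE CPUDT >= '$(vLastReloadDate)';

// 2. Join the extra field(s) onto the small increment only.
LEFT JOIN (BSEG_New)
LOAD BELNR,
     CE          // the FAGLFLEXA field(s) you need; placeholder name
FROM FAGLFLEXA.qvd (qvd);

// 3. Append the history from the second-level QVD and store it back.
//    NOT Exists() guards against rows that overlap the cut-off date.
CONCATENATE (BSEG_New)
LOAD *
FROM BSEG_Level2.qvd (qvd)
WHERE NOT Exists(BELNR);

STORE BSEG_New INTO BSEG_Level2.qvd (qvd);
```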
A mapping table should have at least 2 columns (columns after the first two are ignored). The first column serves as the lookup column: ApplyMap() looks for a specific value in this first column and, if found, returns the corresponding value from the second column.
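For your case, the mapping table would be built from FAGLFLEXA, something like this (a sketch; the cost-element field name is an assumption):

```
// Two columns: BELNR is looked up, the second column is returned.
Lookup:
MAPPING LOAD
    BELNR,    // column 1: the value ApplyMap() searches for
    CE        // column 2: the value returned on a match
FROM FAGLFLEXA.qvd (qvd);
```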
I would suggest you do something like this (pseudocode):
BSEG.PCTR & '_' & BSEG.KOSTL & '_' & applymap('Lookup', BSEG.BELNR) AS Concatenate
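Spelled out as a full load, still a sketch, this assumes a mapping table named 'Lookup' (BELNR in the first column, cost element in the second) was loaded earlier with MAPPING LOAD:

```
// 'Lookup' maps BELNR -> cost element (loaded via MAPPING LOAD).
BSEG:
LOAD *,
     PCTR & '_' & KOSTL & '_' & ApplyMap('Lookup', BELNR) AS Concatenate
FROM BSEG.qvd (qvd);
```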
Very close to making this work. There are some blanks coming in on our cost elements, which shouldn't be the case. I don't think anything with a cost center is allowed to lack a cost element in SAP, but I'm not sure.
Note: the screenshot (data came out in the correct format, i.e. _ _ _ ) is grayed out because I commented out the 'Concatenate' field from the mapping spreadsheet mentioned above in order to see just the table data coming in from the mapping load (below).
I'm not sure if this is an issue in the script (maybe the NULLs in cost element need to be hard-coded to spaces too, or it could be an issue with the look-ups), or if this is an SAP data issue, etc. Also, the 'Document Number' field is linking BSEG and FAGLFLEXA. This could be causing an issue with the script calls, but I really hope not. I think I'll mark your answer as correct as soon as I get this together, since it's really close to working and you put me on the right path. Thanks a lot for your help, Peter.
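One way to tell the cases apart (a sketch): ApplyMap() takes an optional third argument that is returned when the key is not found in the mapping table, so failed look-ups can be tagged instead of coming through blank.

```
// '<no CE>' marks keys that ApplyMap could not find in 'Lookup';
// genuine SAP blanks will still come through as empty cost elements.
BSEG:
LOAD *,
     PCTR & '_' & KOSTL & '_'
          & ApplyMap('Lookup', BELNR, '<no CE>') AS Concatenate
FROM BSEG.qvd (qvd);
```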