I'm hoping there's some Data Mapper experts out there...
I'm mapping an EDI 834 file to a SQL database table. Part of the EDI structure includes a "Sub-loop" that can repeat up to 6 times. In my case, it may repeat up to 2 times... and sometimes just once. My problem is that it's creating two rows and I want it to create one. On the right, I'm mapping to "Effective Date" and "Term Date" based on "DTP01" (see screenshots below).
Here's the EDI Loops in the Data Mapper, on the left. DTP can loop up to 6 times, depending on the Qualifier.
In the EDI file, it looks like this. "Plan A" may be listed twice with a different date range, and multiple rows for each HD segment are OK. However, I want the two DTP segments on a single row instead of two separate rows.
HD*030**HLT*Plan A*~
DTP*348*D8*20200101~
DTP*349*D8*20201231~
HD*030**HLT*Plan A*~
DTP*348*D8*20210101~
DTP*349*D8*20211231~
HD*030**HLT*Plan B*~
DTP*348*D8*20220101~
DTP*349*D8*20221231~
My end result comes out like this:
But instead of 6 rows I only want 3 rows. I thought about using tAggregateRow, but I don't have enough unique data values to consolidate. Maybe I could add a sequence counter that resets on the HD segment...? Not sure how to do that in the Data Mapper.
I'm guessing there's a way to modify the structure on the right so that they end up in one row instead of two... I've tried a few variations but haven't gotten it to work.
If anyone has any ideas that would be appreciated! Thx
The tAggregateRow is probably the easiest way to fix this. Do you not have any unique key for each record in your EDI 834 file to use just to enable this?
I tried that, but the Data Mapper duplicates every value in each row, so I lose any kind of unique key when it spits it out. The only difference between row 1 and row 2 is the effective date and term date, as shown in my screen shot. If the effective date was repeated, I could key off of that. But it's null in the second row.
The tAggregateRow will still work. You do not need just one completely unique value; you can use multiple columns to achieve this as well. Unless all of your rows are exactly the same, you should be able to aggregate to solve this.
However, I am aware that I may be misunderstanding what you mean here. So could you give me an example of the data (nothing private, provide edited data if necessary) in a table format. Can you identify how the multiple rows should be condensed to single rows as well?
I'm not sure how to paste in a table format - below is an image of the rows coming out of the tHMap. Further below is the text -
a copy/paste of the results as they come out of the tHMap. The header row is included. If you copy/paste it into Notepad++ or something similar it will be easier to read.
As you can see, the tHMap is duplicating everything except what's in the dates loop. The only thing that's different is the effective and term dates.
Application_Sender_Code|Application_Receiver_Code|Group_Control_Number|Transaction_Set_Control_Number|Transaction_Set_Reference_Number|Insurance_Company_Code|Relationship_Code|Subscriber_ID|Last_Name|First_Name|SSN|Home_Phone|Address1|Address2|City|State|Zip_Code|Birth_Date|Gender|Marital_Status|Plan_Code|Effective_Date|Term_Date|Employment_Status_Code|Member_Policy_Number|
DENISES|TCS HEALTHCARE|100000002|0003|000000001|WHP|18|834Test-001|Name001|Test834|834Test-001|8341010001|1001 834th Street|Unit 101|AUBURN|CA|99555|1981-01-01T00:00:00|M|M|Plan A|2020-01-01T00:00:00|null|WORK_PT|Plan A|
DENISES|TCS HEALTHCARE|100000002|0003|000000001|WHP|18|834Test-001|Name001|Test834|834Test-001|8341010001|1001 834th Street|Unit 101|AUBURN|CA|99555|1981-01-01T00:00:00|M|M|Plan A|null|2020-12-31T00:00:00|WORK_PT|Plan A|
DENISES|TCS HEALTHCARE|100000002|0003|000000001|WHP|18|834Test-001|Name001|Test834|834Test-001|8341010001|1001 834th Street|Unit 101|AUBURN|CA|99555|1981-01-01T00:00:00|M|M|Plan A|2021-01-01T00:00:00|null|WORK_PT|Plan A|
DENISES|TCS HEALTHCARE|100000002|0003|000000001|WHP|18|834Test-001|Name001|Test834|834Test-001|8341010001|1001 834th Street|Unit 101|AUBURN|CA|99555|1981-01-01T00:00:00|M|M|Plan A|null|2021-12-31T00:00:00|WORK_PT|Plan A|
DENISES|TCS HEALTHCARE|100000002|0003|000000001|WHP|18|834Test-001|Name001|Test834|834Test-001|8341010001|1001 834th Street|Unit 101|AUBURN|CA|99555|1981-01-01T00:00:00|M|M|Plan B|2022-01-01T00:00:00|null|WORK_PT|Plan A|
DENISES|TCS HEALTHCARE|100000002|0003|000000001|WHP|18|834Test-001|Name001|Test834|834Test-001|8341010001|1001 834th Street|Unit 101|AUBURN|CA|99555|1981-01-01T00:00:00|M|M|Plan B|null|2022-12-31T00:00:00|WORK_PT|Plan A|
Is there anything else you can extract from the file to identify what should be a single record? From looking at the data you have in the table, there are actually only 2 rows required to include all of the data. By that I mean that even if the rows are different, they essentially will all show only 2 records that are not identical at the end of the process of removing the nulls.
Sorry for the delay - was away over the weekend and busy this week! Below are all of the records in the file. All of the information I'm getting is for a single transaction, or a single record. The only part that repeats is the HD segment, and that "converts" the one line into three lines because HD is repeated 3 times. Within each HD segment, the DTP segment repeats - once for a "start date" and once for an "end date". That turns the 3 lines into 6 lines, but "start date" is populated on one line and "end date" is populated on the next line.
I don't know if this helps or perhaps I'm misunderstanding what you're thinking. 🙂
ISA*00* *00* *30*366441012 *ZZ*TCS HEALTHCARE *200801*1500*^*00501*100000001*0*T*:~
GS*BE*DENISES*TCS HEALTHCARE*20200801*1500*100000002*X*005010X220A1~
ST*834*0003*005010X220A1~
BGN*00*000000001*20200801*1500*PD***2~
N1*P5*Wonderful Health Plan*94*WHP~
N1*IN*Wonderful Health Plan*94*WHP~
INS*Y*18*030*XN*A***WORK_PT**N**~
REF*0F*834Test-001~
REF*1L*Plan A~
NM1*IL*1*Name001*Test834*E*MR*JR*34*834Test-001~
PER*IP**HP*8341010001*CP*8342020001*WP*8343030001~
N3*1001 834th Street*Unit 101~
N4*AUBURN*CA*99555****~
DMG*D8*19810101*M*M*7*1**~
HD*030**HLT*Plan A*~
DTP*348*D8*20200101~
DTP*349*D8*20201231~
HD*030**HLT*Plan A*~
DTP*348*D8*20210101~
DTP*349*D8*20211231~
HD*030**HLT*Plan B*~
DTP*348*D8*20220101~
DTP*349*D8*20221231~
SE*17*0003~
GE*1*100000002~
IEA*1*100000001~
OK, I think I see the problem. There is a way around this though IF the data is being returned in order. By that, I mean like this.....
Record 1 Effective Date
Record 1 Term Date
Record 2 Effective Date
Record 2 Term Date
Record 3 Effective Date
Record 3 Term Date
If that is the case, then you can use a tMap and a couple of tMap variables to create a key for your data to use with the tAggregateRow.
Create tMap variables called "record" and "record_key". "record" will be the first variable and will have code like this...
routines.Numeric.sequence("rec", 0, 1)
"record_key" will be the second variable and will have code like this....
Var.record - (Var.record % 2)
I knocked an example together and it looks like this...
What the "record" variable does is count each row that enters the tMap. The "record_key" takes the current "record" number and uses the modulo operation to find the remainder when you divide the number by 2 and subtracts that from the original value. You will end up with values like this....
0
0
2
2
4
4
You can then use these values with the tAggregateRow to group your 6 rows into 3.
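Outside of Talend, the counter-and-modulo keying can be sketched in Python (illustrative only; `record` and `record_key` mirror the tMap variables above, the dictionary grouping stands in for tAggregateRow, and the row data is made up to match the thread's example):

```python
from itertools import count

# Six tHMap output rows: effective date on one row, term date on the next.
rows = [
    {"plan": "Plan A", "eff": "2020-01-01", "term": None},
    {"plan": "Plan A", "eff": None, "term": "2020-12-31"},
    {"plan": "Plan A", "eff": "2021-01-01", "term": None},
    {"plan": "Plan A", "eff": None, "term": "2021-12-31"},
    {"plan": "Plan B", "eff": "2022-01-01", "term": None},
    {"plan": "Plan B", "eff": None, "term": "2022-12-31"},
]

counter = count(0)  # plays the role of routines.Numeric.sequence("rec", 0, 1)
groups = {}
for row in rows:
    record = next(counter)
    record_key = record - (record % 2)  # Var.record - (Var.record % 2)
    merged = groups.setdefault(record_key, {"plan": row["plan"], "eff": None, "term": None})
    # Keep the first non-null value, the way an aggregation would.
    merged["eff"] = merged["eff"] or row["eff"]
    merged["term"] = merged["term"] or row["term"]

for key, merged in groups.items():
    print(key, merged)
```

The six rows collapse into three keyed groups (0, 2, 4), each carrying both the effective and the term date.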
Ahhh, very clever. I was actually trying to do something like that using SQL and the modulo operator. Where I got stuck is when there is no term date provided... (in this scenario the term date is open-ended). For example:
Record 1 Effective Date
Record 1 Term Date
Record 2 Effective Date
Record 3 Effective Date
Record 3 Term Date
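To make that failure mode concrete, here is a quick Python illustration of the five-row sequence above (made-up labels, same modulo key):

```python
# Record 2 has no term-date row, so the pairwise keys slip out of alignment.
rows = ["R1 eff", "R1 term", "R2 eff", "R3 eff", "R3 term"]
keys = [i - (i % 2) for i in range(len(rows))]
print(list(zip(keys, rows)))
# "R2 eff" and "R3 eff" end up sharing key 2, so record 2's effective
# date would be merged with record 3's effective date.
```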
What I ended up doing is using the LEAD() function in sql and joining to the same table. I'm guessing tMemorize might help to do something equivalent. Here's my SQL:
UPDATE my_table
SET end_date = a.next_end_date
FROM (
    SELECT id, start_date, end_date,
           LEAD(end_date) OVER (ORDER BY id) AS next_end_date
    FROM my_table
) AS a
INNER JOIN my_table ON a.id = my_table.id
WHERE a.start_date <> '';

DELETE FROM my_table
WHERE start_date = '';
This worked whether there was an end_date record or not.
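For what it's worth, the same LEAD()-then-delete logic can be sketched in Python as a forward look-ahead pass (hypothetical rows; `start_date`/`end_date` mirror the SQL columns, with `None` standing in for the empty values, including an open-ended record with no term row):

```python
# Mirror the SQL: for each row with a start_date, pull end_date from the
# following row (LEAD(end_date) OVER (ORDER BY id)), then drop term-only rows.
rows = [
    {"start_date": "2020-01-01", "end_date": None},
    {"start_date": None, "end_date": "2020-12-31"},
    {"start_date": "2021-01-01", "end_date": None},  # open-ended: no term row follows
    {"start_date": "2022-01-01", "end_date": None},
    {"start_date": None, "end_date": "2022-12-31"},
]

for i, row in enumerate(rows):
    if row["start_date"] is not None and i + 1 < len(rows):
        row["end_date"] = rows[i + 1]["end_date"]  # the LEAD() step

result = [r for r in rows if r["start_date"] is not None]  # the DELETE step
for r in result:
    print(r)
```

The open-ended record simply keeps a null end date, which matches the behavior described above. A tMemorize-style look-ahead in Talend would follow the same shape.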
Thank you for your help in troubleshooting - I'll try your idea as well and see what happens when there is no end date record.
Not a problem. Sometimes a different pair of eyes on a problem can really help. I hope that this works. I hadn't actually considered the "no end date" scenario. I'll be interested to see how it turns out.