Hello
I have two files: the first with a unique claim number (one record per claim) and a cost, and the second with non-unique claim numbers (possibly many records per claim) and a code. When I load them into QlikView and sum the cost by claim number, the values are multiplied by the number of duplicate rows in the second file. For example:
status:
load * inline [
claim, cost
A, 100
B, 150
C, 120
D, 90
];
status2:
LEFT JOIN load * inline [
claim, rule
A, 1
B, 1
B, 2
C, 1
D, 2
D, 3];
In this example if you create a table to sum cost by claim B comes out as 300 and D as 180.
How do I resolve this?
Many thanks!
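To make the behaviour concrete: after the LEFT JOIN, the two tables collapse into a single internal table in which each claim's cost row is repeated once per matching rule, so a straight Sum(cost) double-counts. A sketch of the joined result, using the field names from the example above:

```
// Internal table after LEFT JOIN of status2 onto status:
// claim, cost, rule
// A,     100,  1
// B,     150,  1   <- cost 150 is stored twice,
// B,     150,  2   <- so Sum(cost) for B gives 300
// C,     120,  1
// D,      90,  2   <- likewise 90 twice, giving 180
// D,      90,  3
```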
Hi,
Since the two tables get joined internally on claim, the rows are multiplied by the number of times the claim occurs in Status2.
I am not sure about it, but try using Aggr(Sum(cost), claim).
Regards,
Amar
Hi, why are you joining the tables? If you replace the join with an ApplyMap() function, the result will be the same (B 300, D 180).
But if you don't join the tables, the result will be:
| claim | Sum(cost) |
| --- | --- |
| A | 100 |
| B | 150 |
| C | 120 |
| D | 90 |
H.
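A sketch of the "don't join" approach Hector describes, reusing the script from the question: drop the LEFT JOIN so status2 loads as its own table. QlikView then associates the two tables automatically on the shared claim field, each cost is stored only once, and a chart with dimension claim and expression Sum(cost) returns 150 for B and 90 for D, matching Hector's table.

```
status:
LOAD * INLINE [
claim, cost
A, 100
B, 150
C, 120
D, 90
];

// No LEFT JOIN: status2 remains a separate table,
// associated to status through the shared field "claim".
status2:
LOAD * INLINE [
claim, rule
A, 1
B, 1
B, 2
C, 1
D, 2
D, 3
];
```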
Hi Hector
What I currently do in my code is load my master file and then LEFT JOIN all the other tables. The resulting tables have synthetic keys. I don't really understand whether this is a good or bad thing. What is the best way to organise your files with a one-to-many association?
Thanks.
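On the synthetic-key question: synthetic keys appear when two tables share more than one field name, because QlikView then builds a composite key behind the scenes. One common way to organise one-to-many data, sketched below with hypothetical field names, is to keep exactly one shared key field per association and rename every other common field on load:

```
// Hypothetical sketch: one row per claim in "claims",
// many rows per claim in "rules". Only "claim" is shared;
// other fields are renamed so no synthetic key is created.
claims:
LOAD
    claim,
    cost AS claim_cost
INLINE [
claim, cost
A, 100
B, 150
];

rules:
LOAD
    claim,
    rule AS rule_code
INLINE [
claim, rule
A, 1
B, 1
B, 2
];
```

With a single shared field per pair of tables, the association stays one-to-many without joining, and sums over the one-row-per-claim table are not inflated.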