Try this technique in your load script
(you will obviously need to adapt it to your tables):
// create sample data:
costTable:
LOAD * INLINE [
item, cost_date, cost
A, 2012-01-01, 12
A, 2013-01-01, 15
B, 2011-06-14, 13
B, 2012-09-10, 10
B, 2013-03-01, 11
];
// ------------------------------
// Find the max date value for each item and create a combined field
_maxCostDatePerItem:
LOAD item&'@'&Date(max(cost_date)) As items_max_cost_date
Resident costTable
Group By item;
// Get the data for the rows that match the combined field (that is what the Exists() function does)
currentCosts:
LOAD item, cost, cost_date As cost_effective_since_date
Resident costTable
Where Exists(items_max_cost_date, item&'@'&cost_date);
// Clean up any tables you don't need anymore (optional)
DROP Tables _maxCostDatePerItem, costTable;
example_findvalue_forMax.qvw 145.2 K
// ---- create sample data:
costTable:
load item
    ,date#(cost_date,'YYYY-MM-DD') as cost_date
    ,cost
;
load * inline [
item, cost_date, cost
A, 2012-01-01, 12
A, 2013-01-01, 15
B, 2011-06-14, 13
B, 2012-09-10, 10
B, 2013-03-01, 11
];
// ---- right join with maximum cost_date values
right join load item, max(cost_date) as cost_date
resident costTable group by item;
// ---- done
Your idea seemed very good - simple, and it would give me a new table in which the key field ITEM_Nr should be unique and which I could thus join to another table to keep my data model compact.
Unfortunately, however, it does not work: after writing this new code (adapted, of course) and reloading the script, that table has the same number of records it had before, and my key field is still not unique...
There is probably only one bit missing - the script progress dialog shows me that there are actually two LOADs for that table, the first with approx. 20k lines, the second with just 3k lines - but after the reload has finished, I can see that the table still has those 20k lines...
What is missing to complete that operation? I cannot DROP the table so easily since there is no new table...
Thanks a lot!
Ah - looking closely at your code confirms what our external consultant just told me: all fields in that table must either be aggregated or appear in the GROUP BY clause, right?
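For reference, a minimal sketch of that rule with made-up table and field names: in an aggregating LOAD, every field must either be wrapped in an aggregation function or be listed in the GROUP BY clause - nothing else may appear in the field list.

// Hypothetical example: item_nr and category pass through (so they are
// in the GROUP BY clause); cost_date and cost are aggregated.
aggTable:
LOAD item_nr,
     category,
     max(cost_date) as max_cost_date,
     sum(cost) as total_cost
Resident sourceTable
Group By item_nr, category;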
That is another particular that's not mentioned in the book.
Is there any feedback possibility? There should be...
I made it now - almost.
I had to adapt my code to that general SQL principle, so I used just three fields and included two of them in the GROUP BY clause.
I did that in a RESIDENT load and dropped the original table afterward.
Now my new table has just a few thousand records instead of 20k.
However, the key field is still not unique: for some reason there are still two records for certain item numbers - and when I build that new field into my chart, I don't get any value.
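One possible cause, sketched here with made-up data: if the second field you put into the GROUP BY clause is not constant per item number, the grouped LOAD returns one row per combination of the two fields, so the item number alone is still not unique.

// Hypothetical illustration: item A occurs with two different
// category values, so grouping by both fields yields two rows for A.
sourceTable:
LOAD * INLINE [
item_nr, category, cost
A, X, 12
A, Y, 15
B, X, 13
];
aggTable:
LOAD item_nr, category, sum(cost) as total_cost
Resident sourceTable
Group By item_nr, category;
// To make item_nr unique, group by item_nr alone and aggregate
// (or drop) every other field, e.g. MaxString(category).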