The issue I'm currently experiencing stems from my need to keep a history of my data: I want to save each data pull from my Snowflake connection locally. For that I need the date and time of each pull; both are added as columns to the pulled table in the data load editor's SQL script.
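The extract step described above can be sketched roughly like this in the data load editor (only DATUM_EXTRACT and UHRZEIT_EXTRACT come from this thread; the table and source names are hypothetical):

```
// Stamp every pull with the load date and time so it can be historized
Extract:
LOAD *,
    Date(Today(), 'DD.MM.YYYY') as DATUM_EXTRACT,
    Time(Now(), 'hh:mm:ss')     as UHRZEIT_EXTRACT;
SQL SELECT * FROM MY_SNOWFLAKE_TABLE;  // hypothetical source table
```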
Currently, I have created a master KPI called TIMESTAMP_INDEX, which ranks all data loads by their date and time, with the smallest index referencing the newest data load.
The code for this master KPI is:

Aggr(
    Rank(
        Timestamp#(DATUM_EXTRACT & ' ' & UHRZEIT_EXTRACT, 'DD.MM.YYYY hh:mm:ss'),
        1, 1
    ),
    DATUM_EXTRACT & ' ' & UHRZEIT_EXTRACT
)
The issue is that I can't seem to filter my KPI to the latest data load; it just sums all past data loads into one. A simple set-analysis filter does not do the trick:
Sum({ < TIMESTAMP_INDEX = {"=$(=Max(TIMESTAMP_INDEX))"} > } PLAETZE_GESAMT)
The visual on the left is based on the code above, while the visual on the right features the correct numbers of the day, based only on the newest date (if there are multiple data pulls on that latest date, this also leads to wrong numbers and therefore needs to be replaced).
Does anybody have insights on how to solve this issue? - Thanks in advance.
I wouldn't do it this way; instead I would evaluate the essential information within the data model using a real timestamp. Splitting a timestamp into separate date and time fields is recommended in scenarios where (nearly) every record of a large fact table contains a unique timestamp, which leads to a lot of distinct field values. But if there aren't many millions/billions of different timestamps, only a few thousand, the benefits of splitting may be smaller than the disadvantage of having to combine the dates and times again later.
This means I would store a real timestamp and then add to it an appropriate dimension table derived from the system table of the timestamp field, maybe something like:
t1:
load
    Timestamp(fieldvalue('Timestamp', recno())) as Timestamp,
    recno() as RecNo
autogenerate fieldvaluecount('Timestamp');

t2:
load *, rowno() as RowNo
resident t1
order by Timestamp desc;

drop tables t1;
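With that dimension table in place, filtering to the newest load becomes a plain field selection in set analysis. A sketch, using the measure field from this thread (RowNo = 1 is the newest load because t2 is ordered by Timestamp descending):

```
Sum({< RowNo = {1} >} PLAETZE_GESAMT)
```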
Hello!
To filter your data to show only the latest pull, change your set analysis to use Min(TIMESTAMP_INDEX) instead of Max(TIMESTAMP_INDEX), since your TIMESTAMP_INDEX KPI assigns the smallest rank to the newest data load. The corrected code is: Sum({ < TIMESTAMP_INDEX = {"=$(=Min(TIMESTAMP_INDEX))"} > } PLAETZE_GESAMT)
Hello Mary,
you are right of course. I switched back and forth between Min and Max, but neither changed the result of the KPI:
Sum({ < TIMESTAMP_INDEX = {"=$(=Min(TIMESTAMP_INDEX))"} > } PLAETZE_GESAMT)
The real issue seems to be that my current code does not catch TIMESTAMP_INDEX at all.
-> At the bottom of this picture you can see that inside the brackets, where usually a value would appear as a preview for the filter, there is only this "=-". Ideally there should be a 1 within those brackets, as that is the smallest TIMESTAMP_INDEX. I don't know what's causing this.
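(Side note for anyone debugging a case like this, an assumption on my part rather than something shown in the thread: set-analysis modifiers can only target fields in the data model, so if TIMESTAMP_INDEX exists only as a master measure and not as a field, Min(TIMESTAMP_INDEX) has nothing to aggregate and the dollar expansion stays empty, which would match the "=-" preview. Putting the expansion alone into a KPI or text object shows what it actually evaluates to:

```
// Shows the expanded value of the inner dollar expansion
='Expanded: ' & '$(=Min(TIMESTAMP_INDEX))'
```
)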
Help would be greatly appreciated.
Hello Marcus,
The filter can now display the current KPI numbers.
Thank you very much - this has solved my problem perfectly!
Cordially Joe