Try this:
Frac(Round(Timestamp#(external_timestamp_field, 'YYYY-MM-DD hh:mm:ss.fff'), 1/(24*60*60)))
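In QlikView, a timestamp is a serial number where 1.0 equals one day, so rounding to the nearest second means rounding to a step of 1/86400, and Frac() then keeps only the time-of-day part. A rough Python analogue of that arithmetic (the helper names `round_to_step` and `frac` are mine, not QlikView functions; Python's `round` uses banker's rounding at exact halves, which is close enough for illustration):

```python
# Python analogue of QlikView serial-date arithmetic:
# a timestamp is days-since-epoch as a float, so one second = 1/86400 of a day.
STEP = 1 / (24 * 60 * 60)  # one second as a fraction of a day

def round_to_step(x, step):
    """Round x to the nearest multiple of step (like QlikView Round(x, step))."""
    return round(x / step) * step

def frac(x):
    """Fractional part (like QlikView Frac()): the time-of-day portion."""
    return x - int(x)

# Serial day 43831 plus 12:00:00.400 -> time-of-day rounds to 12:00:00 (= 0.5 of a day)
ts = 43831 + (12 * 3600 + 0.4) / 86400
time_of_day = frac(round_to_step(ts, STEP))
```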
I often test things like this by putting a test expression in a text object and observing the result:
=Time(Frac(Round(Now(), 1/(24*60*60))), 'hh:mm:ss.fff')
& chr(10) &
Frac(Round(Now(), 1/(24*60*60)))
Warning: do not leave an object with Now() in an expression after you have finished testing, otherwise it will just tick over unattended, continuously chewing up CPU.
Best Regards, Bill
I think both your formulae look good. But I am not sure I like the thought of aggregating a number that already has had the integer part cut off, and then is rounded...
If you sum a large number of timestamps, you will accumulate all rounding errors. And if they sum up to more than 24 hours, what timestamp will you get then?
If you average the timestamps, you will get strange results when one timestamp is just before midnight and one is just after midnight: they will average to lunchtime...
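To see the midnight wrap-around problem concretely, here is a small Python sketch (times expressed as seconds since midnight; purely illustrative):

```python
# Two time-of-day values: one just before midnight, one just after.
# Averaging the raw values lands near midday, not near midnight.
just_before = 23 * 3600 + 59 * 60 + 50   # 23:59:50 as seconds since midnight
just_after = 10                          # 00:00:10 as seconds since midnight
avg_seconds = (just_before + just_after) / 2
hours = avg_seconds / 3600               # 12.0 -> "lunchtime", not midnight
```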
A comment: forcing format codes or not should make no difference to performance or evaluation. I would use formatting, since it makes errors easier to spot.
Uh oh... any time a question makes it onto your radar, it must be bad!
Regarding averaging times, that was mostly driven by desperation. I'm still struggling to parse millisecond-level times. I had tried to round down to the nearest second (destroying sub-second granularity). I was averaging data points within each second as a way to account for the multiple, millisecond-level data points. This was a poor choice to begin with.
My question above is still open... I eventually gave up on treating the times as numbers. I force them all to be text, and work with forced-text time values down to the millisecond. At the very end, I'm going to try to parse them back into "real" times. I know this is bad, but it's the only way I can get things to work.
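For what it's worth, parsing an 'hh:mm:ss.fff' text value back into an exact integer millisecond count sidesteps the floating-point issues entirely; a Python sketch (the function name `text_time_to_ms` is mine):

```python
def text_time_to_ms(t):
    """Parse 'hh:mm:ss.fff' (or 'hh:mm:ss') text into integer milliseconds since midnight."""
    hms, _, frac_part = t.partition('.')
    h, m, s = (int(p) for p in hms.split(':'))
    ms = int(frac_part.ljust(3, '0')) if frac_part else 0
    return ((h * 60 + m) * 60 + s) * 1000 + ms

print(text_time_to_ms('23:59:59.999'))  # 86399999
```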
If you still need the milliseconds in the timestamp, then SET the timestamp format as below; it keeps the milliseconds by default.
SET TimestampFormat='D/M/YYYY hh:mm:ss fff';
Even if you can't use the above format, the field still contains the milliseconds; you just have to use the Timestamp() function to display them.
If you want to round down to the nearest second, use the Floor function like below:
Floor(TimestampField, 1/(24*60*60))
This statement truncates the milliseconds down to the nearest whole second.
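The difference between Floor() and Round() at this step size, sketched in Python (the helper name `floor_to_step` is mine; QlikView's Floor(x, step) snaps down to the largest multiple of step not exceeding x):

```python
import math

SECOND = 1 / 86400  # one second as a fraction of a day

def floor_to_step(x, step):
    """Like QlikView Floor(x, step): largest multiple of step <= x."""
    return math.floor(x / step) * step

# 00:00:01.900 as a day fraction: Floor keeps 00:00:01, Round would give 00:00:02
t = 1.9 / 86400
assert floor_to_step(t, SECOND) == SECOND  # truncated down to 1 second
assert round(t / SECOND) == 2              # rounding would land on 2 seconds
```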
So... I had the luxury of solving this problem by running away from it. I was able to get back to the source of the data and change how it was delivering my timestamps. Essentially, my source file now has a MillisecondOfDay field that ranges from 0 (00:00:00.000) to 86399999 (23:59:59.999).
Had I not been able to solve that problem upstream of QV, I planned to build up MillisecondOfDay by isolating the hour, minute, and second from the timestamp and multiplying each by the appropriate factor.
Finally, I work with my MillisecondOfDay throughout, and only turn that back into a time (by dividing it by 86400000) at the last moment necessary. Whenever I do that, I add in a Floor(n,1/86400000) just to be safe.
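The appeal of the MillisecondOfDay approach is that everything stays in exact integer arithmetic until the final conversion; a Python sketch of both directions (function names are mine, and I've omitted the Floor(n, 1/86400000) guard mentioned above, which only matters once floating-point error has accumulated):

```python
MS_PER_DAY = 24 * 60 * 60 * 1000  # 86_400_000

def ms_of_day(h, m, s, ms):
    """Build MillisecondOfDay from hour/minute/second/millisecond components."""
    return ((h * 60 + m) * 60 + s) * 1000 + ms

def to_serial_time(ms):
    """Final conversion back to a day-fraction serial time, done as late as possible."""
    return ms / MS_PER_DAY

assert ms_of_day(23, 59, 59, 999) == 86_399_999   # matches the field's stated maximum
assert to_serial_time(ms_of_day(12, 0, 0, 0)) == 0.5  # noon is half a day
```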
Is there a special reason to keep your data at the millisecond granularity? Do you really need this fine granularity?
Having fact data with field values with a finer granularity than needed for your analysis will increase your data size, here is some more information about that:
Yes, I do need to preserve that millisecond detail... at least for a portion of the time I'm working with the data in QV. I just posted a question to the forum that goes into exactly that. That question is here: better way to get time-weighted average than join + peek + groupby?
If you have any input on the specifics, I definitely need the help! In the interest of message board clarity, please use the linked question above if you can speak to that question.
As always, thank you for your continued help! (You've responded to me in several other threads.)