Hi,
I have an application with a very large number of rows, so to improve performance I need to split the original DateTime field into separate Date and Time fields. Before the split I was able to use actions/triggers to filter rows between vStartDateTime and vEndDateTime, and the filter condition for the "Select in Field" action could be stated as:
='>=' & vStartDateTime & '<=' & vEndDateTime
vStartDateTime and vEndDateTime could be changed through input fields etc.
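(For illustration, with hypothetical values vStartDateTime = 2014-01-01 12:00:00 and vEndDateTime = 2014-01-02 13:00:00, the expression evaluates to the search string

>=2014-01-01 12:00:00<=2014-01-02 13:00:00

which QlikView applies as a single range selection in the DateTime field.)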
My problem is how to filter the data on Date and Time when the fields are separate. I have tried set analysis with the "Select in Field" action, but with no success. Using two separate filter actions for Date and Time excludes the times in between the dates: a selection meant to cover 2014-01-01 12:00 to 2014-01-02 13:00 instead returns only the data between 12:00 and 13:00 on each of 2014-01-01 and 2014-01-02.
Any suggestions?
Hi Jonas,
well, I'd say this is a situation where that efficiency-enhancing measure is just not compatible with the functionality of your qvw.
Simply put, if you have datetime stamps at 12:00 on one day and 13:00 on another, then you cannot do without the 'HH' part of the original field. So you'll have to re-concatenate the fields again, as precisely as necessary (I don't suppose seconds will be necessary?).
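Just as an illustration of what that re-concatenation might look like (field names DateField and TimeField are made up, not from your app; QlikView stores dates as day serials and times as day fractions, so plain addition rebuilds the stamp):

Load
    *,
    // dates are day serials, times are day fractions, so adding them
    // rebuilds the original stamp; Floor(..., 1/(24*60)) cuts it to minutes
    Timestamp(Floor(DateField + TimeField, 1/(24*60))) as DateTimeMinute
From ... ;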
That said, I don't know whether splitting the fields in the LOAD and then re-concatenating them will actually enhance your performance ...
Best regards,
DataNibbler
Thanks for the input!
Based on your suggestion I could include a "filtering key" based on part of the original timestamp, e.g.
Load
    ..
    // Floor(datetime, 1/24) truncates the stamp to whole hours;
    // AutoNumber() then maps each distinct hour to a compact integer key
    AutoNumber(Floor(datetime, 1/24)) as %DateHourKey
    ..
From etc..
In my case I have a great number of rows per day, so this would greatly reduce the number of unique values to filter through when the user makes a selection: from the original 100M rows/year down to 365*24 = 8760 distinct values/year, or more if I choose to allow filtering precision down to seconds.
This is a way to get around the problem, but it would require expanding the data model, and that is not something I wish to do unless absolutely necessary.
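One caveat worth noting: AutoNumber() assigns integers in load order, not in chronological order, so a '>=' & ... & '<=' range search on %DateHourKey only lines up with time if the rows happen to be loaded sorted. A sketch of a variant that keeps the original search condition working unchanged (DateHour is my own field name, and it assumes the variables are snapped to whole hours as well):

Load
    ..
    // keep the hour-truncated stamp as a dual timestamp instead of an
    // AutoNumber integer, so ordinary range searches still apply
    Timestamp(Floor(datetime, 1/24)) as DateHour
    ..
From etc..

with the same "Select in Field" condition as before, now aimed at DateHour:

='>=' & vStartDateTime & '<=' & vEndDateTime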