I am experiencing an issue with the where clause in a resident load. Qlik Sense does not seem to take into account what is written in the where clause. Here is an example:
, Count(DISTINCT H_ID) as H_DKey_7days
WHERE (H_DayNumber >= 0) AND (H_Days < -7)
GROUP BY H_DATE, H_Weeks, H_Group, H_Employee;

, Avg(H_DKey_7days) as H_DKey_7days_Avg
GROUP BY H_Weeks, H_Group, H_Employee;
LOAD * Resident History7Days;

DROP TABLE History7Days;
After this, in the result I see that H_DKey_7days has not taken the where clause into account (WHERE (H_DayNumber >= 0) AND (H_Days < -7)).
I tried writing the where clause in the following ways, and neither worked:
WHERE H_DayNumber>= 0 And H_Days < - 7
WHERE H_DayNumber>= '0' And H_Days < '- 7'
Any ideas what I might be doing wrong?
I tried that and it still doesn't work. I also made sure that the fields contain numeric values, since I used the Num() function in the main table...
Appreciate more thoughts on the matter...
Num() is a formatting function; it won't convert a value if it's a string, it just returns NULL in that case. To interpret a string as a number you need Num#() with the proper format pattern.
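A minimal sketch of the difference (the inline table and the format patterns are my own illustration, not from your script):

// Num#() interprets text as a number; Num() only formats a value
// that already has a numeric representation.
NumTest:
LOAD
    RawValue,
    Num#(RawValue, '0') as Interpreted,                // text -> number
    Num(Num#(RawValue, '0'), '#,##0.0') as Formatted   // number -> formatted value
INLINE [
RawValue
-7
0
15
];

If H_Days or H_DayNumber were loaded as pure text, comparisons like H_Days < -7 could behave unexpectedly, which is why the distinction matters here.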
Did you try your conditions separately and/or with different values? Maybe the queried values don't exist, or at least not in the combination required by the AND between them.
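For example, something like this could show how many rows each condition matches on its own (just a sketch, assuming History7Days is still loaded at that point in the script):

// Count matches for each condition separately before combining them.
CheckDayNumber:
LOAD Count(*) as MatchesDayNumber
Resident History7Days
WHERE H_DayNumber >= 0;

CheckDays:
LOAD Count(*) as MatchesDays
Resident History7Days
WHERE H_Days < -7;

If one of the counts is zero, the combined AND condition can never return rows.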
Thank you for the idea, Marcus. I tried it and it still doesn't work properly, but I am starting to think that the issue is not the where clause...
I am attaching the script so that you can check it. Let me know if you want me to clarify anything.
Thank you all!
I think I would load the data without this where-condition. If the dataset is rather large and the reload takes too long for repeated testing, I would filter on something else instead, maybe on some values of H_Group. Then I would look at the data in a table box and some list boxes to see what the data look like and whether the associations are what I expect.
This doesn't have to be done against the whole script: you can comment out parts or use exit script; to stop the load at a certain point and then check the results. With such an approach you will often find unexpected data or results that lead to consequential errors. If it's more complex, it is sometimes also helpful to load the original data logic and to keep the intermediate load steps by qualifying them, like:
QUALIFY *;
x:
LOAD * Resident Table;
UNQUALIFY *;
so that you can check each record and transformation step through the whole ETL process.
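A minimal sketch of the exit script; approach (where to place the breakpoint is up to you; the source table name is hypothetical):

History7Days:
LOAD * Resident SomeEarlierTable;   // hypothetical earlier step

Exit Script;   // reload stops here; inspect History7Days in a table box

// ...the rest of the script (aggregations, DROP TABLE, etc.) is not executed.

Moving the Exit Script; line step by step down the script lets you narrow down exactly where the data stops looking the way you expect.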