Hi All,
I'm having a strange problem in my app recently. A master measure sometimes returns the correct result and sometimes shows a number five times bigger than the correct one.
Here is my formula:
I tested each argument, and the error seems to be in wildmatch(aggr(concat(distinct buyer_email, ';'), project_id), '*' & subfield(OSUser(), '=', 3) & '*') > 0. However, even when the total is wrong, the per-project view still shows the correct individual result for the project where highly_confidential = 1.
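For illustration, a measure of this kind might be structured roughly as follows; this is a hypothetical sketch, amount is a placeholder field, and the actual formula may differ:

    // Hypothetical sketch; amount is a placeholder field and the real
    // formula may differ. The wildmatch() test gates the aggregation:
    Sum(
        If(
            highly_confidential = 0
            or WildMatch(
                Aggr(Concat(DISTINCT buyer_email, ';'), project_id),
                '*' & SubField(OSUser(), '=', 3) & '*'
            ) > 0,
            amount
        )
    )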
Please let me know how I can fix this. Thank you very much!
I think the data model and/or the data set isn't suitable for your intended approach; the associations between the tables don't fit it.
Beside this, you should move the authorization itself into the data model, because everything in the UI is only a kind of usability layer and not suitable for highly confidential data. More background on the matter is here: Section Access - Qlik Community - 1493681
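A minimal sketch of what that can look like in the script; the user names and the reduction field PROJECT_ID are assumptions, not taken from this thread:

    Section Access;
    LOAD * INLINE [
    ACCESS, USERID, PROJECT_ID
    ADMIN, INTERNAL\SA_SCHEDULER, *
    USER, DOMAIN\ALICE, 101
    USER, DOMAIN\BOB, 102
    ];
    Section Application;
    // The data model must contain a matching, upper-cased reduction field:
    Facts:
    LOAD
        project_id,
        Upper(project_id) AS PROJECT_ID,
        buyer_email,
        highly_confidential
    FROM [lib://data/facts.qvd] (qvd);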
I am 100% with @marcus_sommer on this. Front-end security is just security through obscurity. Users with a modicum of know-how can generally get around it.
Thank you, @marcus_sommer and @Or. This is a great idea. I was not aware of this approach until now. However, modifying the data model will take time. Do you have any suggestions for a quick fix?
A quick fix is rather unlikely. There are various possible causes for duplications. The simplest (and officially recommended) way is a star-schema data model, replacing any joins with mappings to extend the fact table.
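As a sketch of that idea, a mapping can replace a join like this; table names, field names, and paths are assumptions:

    // A mapping table may hold only two fields: key and value
    BuyerMap:
    MAPPING LOAD
        project_id,
        buyer_email
    FROM [lib://data/buyers.qvd] (qvd);

    // Unlike a join on a non-unique key, ApplyMap() returns only the
    // first match, so the fact table keeps its original row count
    Facts:
    LOAD
        *,
        ApplyMap('BuyerMap', project_id, Null()) AS buyer_email
    FROM [lib://data/facts.qvd] (qvd);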
Independent from that is Section Access, which needs time to comprehend the logic and syntax as well as the necessary precautions. Most important is to keep enough backups without Section Access, and to start the learning and development not with the original application but with a small dummy app, adding the needed complexity step by step.
These efforts are no waste of time but a very worthwhile investment in your knowledge, which will later save much more time.
Hi, it works when I change all the if-conditions to set analysis. However, I'm not sure where the duplication comes from or why set analysis works. So if anyone has the same issue and has more time to research it, please look at the Section Access approach recommended by Marcus.
An if-condition is evaluated at row level, while set analysis is evaluated at column level and works like a selection. Depending on the specific view against the existing data model and data set, there may be various differences, especially in regard to NULL handling.
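For example, the two evaluation modes of the same measure could look like this; amount is again a placeholder field:

    // Row level: the if() is evaluated per record; rows where the
    // condition is NULL or false are silently dropped from the sum
    Sum(If(highly_confidential = 1, amount))

    // Column level: the set expression acts like a selection that is
    // applied once, before the aggregation runs
    Sum({< highly_confidential = {1} >} amount)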
IMO it's more of a coincidence that your view now returns the expected results than that the if-condition was the wrong approach and set analysis the right one. Duplication of data points very clearly to real duplicates in the raw data (or ones created afterwards by a join), or to an unsuitable table association (missing key values on either side and/or a key not reflecting the needed granularity, causing Cartesian results).
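One way to check for such duplicates is a small counting load in the script; the table and field names are assumptions:

    // Count records per key to spot real duplicates in the raw data;
    // any rows_per_project > 1 deserves a closer look
    DupCheck:
    LOAD
        project_id,
        Count(project_id) AS rows_per_project
    RESIDENT Facts
    GROUP BY project_id;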
Charts are usually not a good way to detect the causes of unexpected results; table boxes with the relevant fields (those from the object's dimensions and expressions) are better. Reduce the subset of data with a few selections to the records underlying the unexpected results. Often the issues are then quite obvious; if not, the table box may be missing a unique key field. If none exists in the data set, one could be created per recno() and/or rowno(), maybe within each related table, and then be added to the table box.
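A sketch of adding such record keys during the load; names are assumptions:

    // Give every source row a unique key so a table box can expose
    // each underlying record individually
    Facts:
    LOAD
        RecNo() AS fact_recno,   // row number in the data source
        RowNo() AS fact_rowno,   // row number in the resulting table
        *
    FROM [lib://data/facts.qvd] (qvd);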