ngl
Contributor II

Duplication when using if and wildmatch

Hi All,

I'm having a strange problem in my app recently. A master measure sometimes returns the true result and sometimes shows a number five times larger than the true result.

Here is my formula:

Sum(
    If(
        archive = $(v_archive) and onboarding_project = $(v_onboarding)
            and (highly_confidential = 0 or highly_confidential = 'false'
                 or WildMatch(Aggr(Concat(distinct buyer_email, ';'), project_id),
                              '*' & SubField(OSUser(), '=', 3) & '*') > 0),
        tcv_eur
    )
)
 
What I want is to show the total value, but if a project is highly confidential, only its related buyers should be able to see it.
 
There is a big data model, but two tables are relevant to this formula:
- The project table contains the columns project_id, archive, onboarding_project, highly_confidential, tcv_eur, etc. Its primary key is project_id.
- The buyer table contains project_id, buyer_email, etc. One project_id can have multiple buyers (but not always 5), and one buyer can be involved in multiple projects.
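
For reference, this is roughly how the two tables described above might be loaded (source paths are hypothetical, only the field lists come from the post):

```qlik
// Project table: one row per project (primary key: project_id)
Project:
LOAD project_id,
     archive,
     onboarding_project,
     highly_confidential,
     tcv_eur
FROM [lib://data/project.qvd] (qvd);

// Buyer table: one row per (project_id, buyer_email) pair,
// so one project can associate with several buyer rows
Buyer:
LOAD project_id,
     buyer_email
FROM [lib://data/buyer.qvd] (qvd);
```

Note that the association on project_id is 1:n, so any chart expression that ends up iterating over buyer-level rows instead of project-level rows will count tcv_eur once per buyer.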
 

I tested each argument, and the error seems to be in WildMatch(Aggr(Concat(distinct buyer_email, ';'), project_id), '*' & SubField(OSUser(), '=', 3) & '*') > 0. However, even when the total is wrong, the per-project view still shows the correct individual result for the project where highly_confidential = 1.

Please let me know how I can fix this. Thank you very much!


6 Replies
marcus_sommer

I think the data model and/or the data set isn't suitable for your intended approach, or the associations between the tables don't fit.

Besides this, you should move the authorization itself into the data model, because anything done purely in the UI is a usability feature, not real security, and is not suitable for highly confidential data. More background on the matter is here: Section Access - Qlik Community - 1493681
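
For context, a minimal Section Access sketch (users and the reduction field are hypothetical; see the linked topic for the full rules, and always keep a backup copy of the app without Section Access):

```qlik
Section Access;
LOAD * INLINE [
ACCESS, USERID,                 PROJECT_ID
ADMIN,  INTERNAL\sa_repository, *
USER,   DOMAIN\buyer1,          P1
USER,   DOMAIN\buyer2,          P2
];
Section Application;

// PROJECT_ID must also exist (upper-case) as a field in the data model,
// so the data reduction can restrict each user to their own projects.
```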

Or
MVP

I am 100% with @marcus_sommer on this. Front-end security is just security through obscurity; users with a modicum of know-how can generally get around it.

ngl
Contributor II
Author

Thank you, @marcus_sommer and @Or. This is a great idea. I was not aware of this approach until now. However, modifying the data model will take time. Do you have any suggestions for a quick fix?

marcus_sommer

A quick fix is rather unlikely; there are various possible causes for duplication. The simplest (and officially recommended) approach is a star-schema data model, replacing any joins with mappings to extend the fact table.
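
A sketch of the mapping approach described above, using the tables from the original post (source paths hypothetical):

```qlik
// Aggregate the buyer emails per project into a mapping table
// (a mapping table has exactly two columns: key, value)
BuyerMap:
Mapping LOAD project_id,
     Concat(distinct buyer_email, ';') as buyer_list
FROM [lib://data/buyer.qvd] (qvd)
Group By project_id;

// Extend the fact table via ApplyMap instead of joining the buyer
// table, so the project grain (one row per project_id) is preserved
Project:
LOAD project_id,
     archive,
     onboarding_project,
     highly_confidential,
     tcv_eur,
     ApplyMap('BuyerMap', project_id, '') as buyer_list
FROM [lib://data/project.qvd] (qvd);
```

With buyer_list on the fact row itself, the WildMatch() check no longer needs Aggr() over the buyer table, and Sum(tcv_eur) cannot be multiplied by the number of buyers.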

Independent of that is Section Access, which needs time to comprehend, both the logic and syntax and the necessary precautions. Most important is to keep enough BACKUPs without Section Access, and to start the learning and development not with the original application but with a small dummy app, adding the needed complexity step by step.

These efforts are not a waste of time but a very worthwhile investment in your knowledge, which will later save much more time.

ngl
Contributor II
Author (Accepted Solution)

Hi, it works when I change all the if conditions to set analysis. However, I'm not sure where the duplication comes from or why set analysis works, so if anyone has the same issue and more time to research, please look at the Section Access approach recommended by Marcus.
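
A hedged sketch of one possible set-analysis form of the measure (untested; it assumes the variables and fields from the original post, and uses a search expression on project_id to reproduce the confidentiality check):

```qlik
Sum({<
    archive = {$(v_archive)},
    onboarding_project = {$(v_onboarding)},
    project_id = {"=highly_confidential = 0
        or highly_confidential = 'false'
        or WildMatch(Concat(distinct buyer_email, ';'),
                     '*' & SubField(OSUser(), '=', 3) & '*') > 0"}
>} tcv_eur)
```

Because the set modifier acts like a selection on project_id before Sum() runs, each project's tcv_eur is counted once, regardless of how many buyer rows are associated with it.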

marcus_sommer

An if-condition is evaluated at row level, while set analysis works at column level, effectively like a selection. Depending on the specific view against the existing data model and data set there may be various differences, especially in regard to NULL handling.

IMO it's more a coincidence that your view now returns the expected results than that the if-condition was the wrong approach and set analysis the right one. Duplicated data points very clearly to real duplicates in the raw data, to duplicates created afterwards by a join, or to an unsuitable table association (missing key values on either side and/or a granularity mismatch causing Cartesian results).

Charts are usually not a good way to detect the causes of unexpected results. Better are table boxes with the relevant fields (those from the object's dimensions/expressions); then reduce the sub-set of data with selections to the records underlying the unexpected results. Often the issue is then quite obvious. If not, the table box may be missing a unique key field; if none exists in the data set, one could be created per RecNo() and/or RowNo(), maybe within each related table, and then added to the table box.
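
The record-key idea above can be sketched like this (table and field names from the post; source path hypothetical):

```qlik
// Add surrogate keys per source record so a table box can expose
// exactly which raw rows feed an unexpected total
Buyer:
LOAD RecNo() as buyer_recno,   // position of the record in the source
     RowNo() as buyer_rowno,   // position in the resulting internal table
     project_id,
     buyer_email
FROM [lib://data/buyer.qvd] (qvd);
```

If buyer_recno and buyer_rowno diverge, or either shows more rows per project_id than expected, the duplication was introduced during the load rather than in the chart expression.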