datanibbler
Champion

Trouble synchronizing the figures displayed in different objects ...

Hi,

I have, in an app I am currently building, two objects:

- A straight_table chart and

- a gauge_chart

and I am trying to synchronize them so that they display, as exactly as possible, the same figures.

Logical, isn't it? Sounds easy, too - well, so one should think ...

It is not. At first, the figures were quite far apart. I have by now removed one calculated dimension from the chart that the gauge did not have, and unticked "Suppress NULL values" for another dimension, and the figures came closer.

To do that, I calculated a flag_field in one of my tables to tell whether or not a specific record in that table is linked to any record in the other table. I used the EXISTS() function to check whether the keyfield used to link the two tables exists in a copy of the keyfield loaded from the other table. That seems to work quite well.
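In script terms, it is roughly this (a minimal sketch - TableA, TableB, KeyField, Flag_Linked and the file names are just placeholders for my real ones):

    // the other table is loaded first, keeping a copy of its keyfield
    TableB:
    LOAD KeyField as KeyField_Copy
    FROM [other_table.qvd] (qvd);

    // the flag is then set while loading the main table:
    // 1 if the key exists among the copied key values, else 0
    TableA:
    LOAD *,
         IF(EXISTS(KeyField_Copy, KeyField), 1, 0) as Flag_Linked
    FROM [main_table.qvd] (qvd);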

Now, the figures are quite close - the chart shows a final ratio of 71.33 % and the gauge shows 70.2 %, which isn't bad - but still not ideal if there is any way to fix it. That can't be rounding errors - well, several rounding errors could stack up to that difference ...

Can you tell me what that could be due to?

Thanks a lot!

Best regards,

DataNibbler

41 Replies
datanibbler
Champion
Author

Now I have replicated the whole thing in Excel - I have exported the entire table_chart and then thrown out all those line_items which have no valid usage_rate at all because they don't figure in our database (they might be used acc. to our database, but as we don't have them in our file, they don't have a capacity - so strictly speaking, that would be a division by zero, which QlikView just displays as a > - <).

=> When I do that, the result is exactly the same as what my table_chart displays - but the gauge shows a significantly different usage_rate.

=> So the relevant question now is: what enters into the calculation of the gauge_chart that does not enter into the calculation of the table?

P.S.: Another strange thing is also that, if I remember correctly, the last time round it was the gauge that showed the exact same figures as a "simple" recalculation in Excel, not the table_chart.

marcus_sommer

Hi DataNibbler,

in a normal chart with dimensions, the $-sign expansion from the set analysis is calculated once, before the chart, and then applied to each row respectively dimension-value. In this case you could try it without the $-sign expansion, like:

< Art_Lager = {"=if(match(Art_Lager, 'nonexistent'), false(), true())"} >

but if this doesn't work you will need another approach. Differences between normal charts and gauges are most often caused by the dimensions, and you will probably need TOTAL qualifiers and/or aggr() within your expressions, and sometimes also an error-handler to catch NULL or other unwanted values.
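For example, in this direction (only a sketch - Boxes, Capacity and YourDimension stand for your real fields):

    // ratio of the totals, ignoring the chart dimension:
    sum(TOTAL Boxes) / sum(TOTAL Capacity)

    // unweighted average of the per-dimension-value ratios:
    avg(aggr(sum(Boxes) / sum(Capacity), YourDimension))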

- Marcus

datanibbler
Champion
Author

Hi Marcus,

well, now I have removed all dimensions but one - the shelf_nr. from the database of shelves that we have built up.

With this dimension, the "Suppress NULL values" is ticked.

In the gauge_chart, I have a set_expression to essentially do the same - I have made a field in the script that is 1 if a shelf (as named in the real database) is listed in our new database and 0 otherwise - I have a set_expression querying this for the value 1.

=> This done, the sum of shelf-capacity that is displayed in my table_chart is equal to that used in my gauge_chart (I have replicated the formulas used in the gauge_chart in a textbox), and the nr. of tracks (boxes) on these shelves is also equal.

<=> Still, the average of the usage_rate in the table_chart (the usage_rate is calculated in every single line of the table_chart) is markedly different from what the gauge displays (and the gauge simply displays what I could calculate myself by just dividing the nr._of_boxes by the sum_of_shelf_capacity), so I guess the gauge shows the correct value.

Still, any single item that I select in the table_chart is calculated correctly and then the figures match those in the gauge_chart.

=> Can it be that there are such large rounding errors? I will do some experimenting with just adding more decimal_places to the calculation in every single line and see if that brings the figures somewhat closer ...
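To spell out the two calculations I am comparing (made-up numbers, and shelf_nr / nr_of_boxes / shelf_capacity standing in for my real fields):

    // table_chart style: unweighted average of the per-line ratios
    avg(aggr(sum(nr_of_boxes) / sum(shelf_capacity), shelf_nr))

    // gauge style: ratio of the totals
    sum(nr_of_boxes) / sum(shelf_capacity)

    // made-up example: shelf A: 10 boxes / 100 capacity = 10 %
    //                  shelf B: 45 boxes /  50 capacity = 90 %
    // average of ratios: (10 % + 90 %) / 2      = 50 %
    // ratio of totals:   (10 + 45) / (100 + 50) = 36.7 %

So if the shelf capacities differ, the two aggregates need not agree even when every single line is calculated correctly.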

marcus_sommer

Rounding errors are rather unlikely to be involved - they usually make only quite small differences. I think you will need aggr-functions, maybe nested ones, maybe with TOTAL qualifiers and/or a NODISTINCT extension:

avg(TOTAL aggr(avg(aggr( NODISTINCT ...

you might need to play (systematically) a little bit with them.
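A filled-in variant as a starting point (purely illustrative - shelf_nr, Boxes and Capacity are placeholders for your fields):

    // unweighted average of the per-shelf ratios, calculated once
    // over everything regardless of the chart dimensions
    avg(TOTAL aggr(NODISTINCT sum(Boxes) / sum(Capacity), shelf_nr))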

- Marcus

datanibbler
Champion
Author

Hi Marcus,

thanks for the help!

But - if there were something wrong with the dimensions (such that I would need to apply some aggr() functions or the like, with which I don't have much experience either) - wouldn't that error necessarily show up in every single line, rather than just in the average (as I said, I can select any single item and the figures match exactly)?

(the rounding really wasn't it, expanding the nr. of decimals didn't bring the figures any closer)

I really cannot see anything in this whole business. It's a complete mystery to me.

What is also strange: there is now only one dimension left - the shelf_line from the database of shelves we are just building. I have ticked the "Suppress NULL" checkbox, so any boxes in our real database that are booked to a place not in our shelf_database are not displayed.

<=> In the gauge, I have that set_expression I have already posted, making use of a field that I calculate in the script. The field is like this (within a preceding LOAD):

    >> IF(EXISTS(Lagerreihe_vorhanden, Regalreihe), 1, 0) as Lagerreihe_in_Excel_DB_vorhanden <<

(the field >> Lagerreihe_vorhanden << is in a table that was loaded earlier, and the field >> Regalreihe << is the rather complex calculated key that I build in the actual LOAD - that is why this is in a preceding LOAD).

I will experiment a bit with this - maybe having this in a preceding LOAD is not right. Strangely, with this expression the gauge shows the figure that one could calculate from the total SUM and COUNT - but when I remove it, it shows something completely different - again different from what the table_chart shows, so I then have three different figures to deal with ...
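For reference, the structure is roughly this (simplified - the file name and the key calculation shown here are only placeholders for my real ones):

    MyTable:
    // preceding LOAD: works on the output of the LOAD below,
    // so the calculated key Regalreihe already exists here
    LOAD *,
         IF(EXISTS(Lagerreihe_vorhanden, Regalreihe), 1, 0) as Lagerreihe_in_Excel_DB_vorhanden;
    // actual LOAD: builds the complex key (placeholder expression)
    LOAD *,
         Lagerort & '|' & Regalnummer as Regalreihe
    FROM [shelf_db.xlsx] (ooxml, embedded labels);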

If that fails, I will try making a fake database with only about 10 or 20 items so I can recalculate it all manually to check it.

Best regards,

DataNibbler

P.S.: I just realize there is an error in that field - quite a nr. of items that are actually present in our database are grey when I select the value >1< in that field. I have to look into why that happens. That should be quite unconnected with my other issue, as I guess it affects both objects in the same way - but then, I cannot judge this, as it is a complete mystery to me ...

marcus_sommer

Hi DataNibbler,

if you are struggling in this way, it could be that the reason lies in your datamodel (keys don't fit) or in the data-quality. I would check this within a tablebox with a unique record-identifier (recno/rowno), and after that I would build a table-chart - at first with many details, nearly on record-level, and then consolidate them by reducing the details step by step, respectively extending the expressions with TOTAL to ignore the detail-levels.

This will take some effort, but it's better to understand the data properly, and it is most often faster than trial and error on data which might be different than expected.

- Marcus

datanibbler
Champion
Author

Hi Marcus,

yes, I guess I have to dive into the very basis of this app once again. As I said, the key is quite complex, so it might well be that there is some error: in the new database we have a mix of shelf-rows with many so-called BINs and other areas with just one, while in the real database we have only the BINs - so the key has to be built differently depending on the type of storage_area.

For now, I have noticed that for those items that do not appear in the real database, that EXISTS() field does not work - selecting the value >1< in that field leaves such an item grey even in the new database. So the gauge, because of the set_expression, does not consider those items at all, while in the table_chart - well, I see them appear with a value of 0, but it might be that I only see that, and behind it is something different that somehow messes up the figures.

datanibbler
Champion
Author

Hi Marcus,

I have now checked all my keys - it's a bit difficult because all the storage_areas which, acc. to the system, are not in use do not figure in the real database - so even if they are in the Excel_DB, the key does not match anything. But most items match; I've tried a few from every type of storage_area there is in my Excel_DB.

I'm still not much wiser - but I try to approach the issue step by step:

I've realized that the flag_field I created with Applymap() to show whether a storage_area is in our Excel_database does not quite work (same as the field I used to have with EXISTS()): the other table is from our working database, and many storage_areas that are not in use (acc. to the system) do not figure in that table - so for all of these, the flagfield will say "No" because the Applymap() cannot return a positive value, even when the storage_area does figure in our Excel_database, simply because there is nothing there to match.

=> So a set_expression querying that flagfield for the value >1< will at the same time dismiss all storage_areas which, acc. to the system, are not being used.

=> The set_expression I have in the gauge does thus not have the same effect as the "Suppress NULL" checkbox in the table_chart.
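The failure mode, as far as I can lay my own construction open, follows roughly this pattern (a sketch - UsedAreas, StorageArea, Flag_matched and the file names are placeholders, not my real names):

    // the mapping source is the transaction_table of the real database -
    // storage_areas that are not in use never appear in it
    UsedAreas:
    MAPPING LOAD StorageArea, 1 as Flag
    FROM [transactions.qvd] (qvd);

    // for every area missing from the mapping, Applymap() falls back
    // to the default 0 - i.e. for all unused areas, whether or not
    // they figure in the Excel_DB
    ExcelDB:
    LOAD *,
         ApplyMap('UsedAreas', StorageArea, 0) as Flag_matched
    FROM [excel_db.xlsx] (ooxml, embedded labels);

So a set_expression on Flag_matched = 1 silently drops every unused area along with the genuinely unmatched ones.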

Unless you know a better way, I guess I'll work with a trigger to just select all selectable values in the Excel_DB - that way I'll have all storage_areas listed there, used or not.

I will now kind of start over and shrink a copy of the Excel_DB down to few records and see what I see - since every single item seems to match (between the table and the gauge) and the discrepancy is just in the top_line_average, that error must come in somewhere.

marcus_sommer

Hi DataNibbler,

I could imagine that the approach of linking these data is rather inappropriate, as in the common case that you have sales without matching targets, and the reverse, like in this example: Re: Budget Achievement. However you turn it, something will be missing. That means you need a different approach, like concatenating those data (and creating your flags there) or using a bridge-table, but this is more complicated, and the bridge needs all distinct values.
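A rough sketch of the concatenate variant (generic names, only to show the pattern):

    // load both sources into one table and flag the origin of each record
    Areas:
    LOAD StorageArea, 1 as Flag_Real, 0 as Flag_Excel
    FROM [transactions.qvd] (qvd);

    CONCATENATE (Areas)
    LOAD StorageArea, 0 as Flag_Real, 1 as Flag_Excel
    FROM [excel_db.xlsx] (ooxml, embedded labels);

In such a concatenated table no key can fail to match, because there is no link anymore - every record carries its own flags.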

- Marcus

datanibbler
Champion
Author

Hi Marcus,

I'm fed up with this. I cannot see anything in this matter, since every single item seems to be right - i.e., it matches between the table_chart and the gauge - but the average over them all is wrong: the table_chart and the gauge show different values.

Also, I cannot use either EXISTS() or Applymap() to determine whether a shelf or area is in our new database because, when it is not in use, it's simply not in the transaction_table of our real database - so the output of both these functions will be the same in case a) (it's used, but not in our database) and case b) (it's not used, whether or not it's in our database).

I will start over again now, I guess, with a new approach to building the link between our new database and the transaction_table. And I will try a bridge_table that has all the areas - I could use it to determine whether an area is in our new database. There is such a table, only it is very irregularly maintained.
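If I understand the bridge idea right, it would be something like this (placeholder names again):

    // collect the storage_areas from both sources ...
    Tmp:
    LOAD StorageArea FROM [transactions.qvd] (qvd);
    CONCATENATE (Tmp)
    LOAD StorageArea FROM [excel_db.xlsx] (ooxml, embedded labels);

    // ... and keep each area exactly once in the bridge
    Bridge:
    NoConcatenate LOAD DISTINCT StorageArea RESIDENT Tmp;
    DROP TABLE Tmp;

With every area guaranteed to be in the bridge, a flag checked against the Excel_DB alone should finally behave the same way in the table_chart and in the gauge.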

Let's see ...