MarcoLuk
Contributor

Decimal Multiplication with set expression gives 0

I am currently working on a table with some calculations in Qlik Sense.

Imagine I have columns a, b and c, where c = a * b. I want to do a YoY comparison, so the overall formula is a1*b1 - a2*b2 = c1 - c2. I have a [Calendar Year] selection to set the year to be compared, and I used a set expression to get the data from one year before [Calendar Year] (i.e. [Calendar Year] - 1). However, I found that using the set expression to get the previous year's data causes what looks like a precision problem in the multiplication. Below is an example.
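Roughly, the previous-year part looks like this (a simplified sketch; the full expression, which uses variables instead of a hard-coded year shift, is posted further down in this thread):

// Simplified sketch: shift the selected [Calendar Year] back by one year.
// Only() returns the single selected year; the dollar expansion turns it
// into a literal value for the set modifier.
sum({<[Calendar Year] = {$(=Only([Calendar Year]) - 1)}>}
    alt([Rev], 0) * alt([Rate], 0))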

For example, when I want to compare [Calendar Year] 2024 with 2023, the 2023 data retrieved via the set expression gives 0 directly, as below:

[screenshot: MarcoLuk_0-1727159623755.png]

Yet if I set [Calendar Year] to 2023, number c is calculated correctly.

[screenshot: MarcoLuk_1-1727159684002.png]

So it does not seem to be a decimal-precision problem after all. Can anyone offer some help with this issue? Thank you.

----------------------------

a = table measure 1, b = table measure 2, c = (table measure 1) / (table measure 2).

9 Replies
BernardBernoulli
Contributor III

Hello, could you provide the set expression you wrote?

MarcoLuk
Contributor
Author

Thank you for the reply. Below is the expression for number c for the previous year, [Calendar Year] - 1:

sum(aggr(
    // Start of Number a calculation
    alt(
        alt(
            sum(
                {<$(vSetAnalysisCompareUnit),[SlnGp]={'X','Y'}>+<$(vSetAnalysisCompareCustom),[SlnGp]={'X','Y'}>}
                alt([Rev], 0) * alt([Rate], 0)
            )
            /
            sum(
                {<$(vSetAnalysisCompareUnit),[SlnGp]={'X','Y'}>+<$(vSetAnalysisCompareCustom),[SlnGp]={'X','Y'}>}
                alt([Wgt], 0)
            ), 0
        )
        /
        alt(
            sum(
                {<$(vSetAnalysisCompareUnit),[Solution]={'Z'},[SlnCat]= >+<$(vSetAnalysisCompareCustom),[Solution]={'Z'},[SlnCat]= >}
                alt([Rev], 0) * alt([Rate], 0)
            )
            /
            sum(
                {<$(vSetAnalysisCompareUnit),[Solution]={'Z'},[SlnCat]= >+<$(vSetAnalysisCompareCustom),[Solution]={'Z'},[SlnCat]= >}
                alt([Wgt], 0)
            ), 0
        ), 0
    )
    // End of Number a calculation

    *

    // Start of Number b calculation
    sum(
        {<$(vSetAnalysisCompareUnit),[SlnGp]={'X','Y'}>+<$(vSetAnalysisCompareCustom),[SlnGp]={'X','Y'}>}
        alt([Wgt], 0)
    )
    /
    sum(TOTAL
        {<$(vSetAnalysisCompareUnit),[SlnGp]={'X','Y'}>+<$(vSetAnalysisCompareCustom),[SlnGp]={'X','Y'}>}
        alt([Wgt], 0)
    )
    // End of Number b calculation
    , [Lane]
))

marcus_sommer

Every expression part may need its own set statement to adjust/override the selection state. This means your expression may then look like:

sum({ set } aggr({ set } sum({ set } Field), Dim))
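As a hedged illustration of that pattern (the year shift and the [Rev]/[Lane] names are placeholders, not your exact variables): the inner sum() inside the aggr() needs its own set modifier, because otherwise it is evaluated against the current selection and returns nothing for the prior year.

sum({<[Calendar Year] = {$(=Only([Calendar Year]) - 1)}>}
    aggr(
        // without this inner set, the aggr() rows are built from the
        // currently selected year and the outer sum() gets no data
        sum({<[Calendar Year] = {$(=Only([Calendar Year]) - 1)}>} [Rev]),
        [Lane]
    )
)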

MarcoLuk
Contributor
Author

@marcus_sommer Thank you for the response. Is there a way to solve the precision problem without a set statement? I don't have access to modify the data load editor.

marcus_sommer

There is no precision problem. The zero result is caused by the selection state, which you proved yourself above: changing the selection produced the right result.

IMO your expression approach has reached the limits of what is suitable and sensible with regard to developing and maintaining the logic and keeping it practically usable. There are no hard technical restrictions, but all essential complexity belongs in the data model and shouldn't be done within the UI. Therefore I suggest considering a re-design of the data model, using a star schema with a single fact table which contains all measure fields as well as the wanted calculations, any kind of NULL handling, and data-quality measures.

This doesn't mean the final results must be pre-aggregated/calculated within the data model; rather, simplify things so that within the UI you can use approaches like:

sum({ set } aggr({ set } sum({ set } Field), Dim))

It is also helpful to transfer most of the period logic (YTD, MTD, YoY, accumulations, ...) into a master calendar and/or as-of tables, as sketched below.
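A minimal sketch of such an as-of table (assuming a Calendar table with a [Calendar Year] field already exists; all other names are illustrative, not from your model):

// Each [AsOf Year] links to itself ('Current') and to the prior year
// ('Previous'), so YoY needs no year arithmetic in the expressions.
AsOfYear:
LOAD DISTINCT
    [Calendar Year]     as [AsOf Year],
    [Calendar Year],
    'Current'           as PeriodType
RESIDENT Calendar;

CONCATENATE (AsOfYear)
LOAD DISTINCT
    [Calendar Year] + 1 as [AsOf Year],
    [Calendar Year],
    'Previous'          as PeriodType
RESIDENT Calendar;

With a selection on [AsOf Year], the previous-year value becomes simply sum({<PeriodType = {'Previous'}>} [Rev]).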

Besides this, I would simplify the testing by using just sum({ Set } Field) - without any multiplication/division and without the alt() and aggr() stuff. To check the logic and the behaviour of the results against the selection state, this simplified method is enough - and afterwards the working parts are combined again.
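For example, each building block of the expression above could be validated as its own measure before recombining (a sketch reusing the variables and fields from this thread):

// Check each input on its own against the selection state first:
sum({<$(vSetAnalysisCompareUnit), [SlnGp] = {'X','Y'}>} [Rev])
sum({<$(vSetAnalysisCompareUnit), [SlnGp] = {'X','Y'}>} [Wgt])
// ...then re-introduce the ratios, alt() and aggr() step by step.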

MarcoLuk
Contributor
Author

Thank you for the reply again. Very constructive - I will ask for access so I can avoid doing the long calculation in the UI.

The selection state is proven correct, because other fields with similar YoY comparisons work fine.

Let me clarify my question, as it was a bit rough and vague.

I have two scenarios being tested:

1) The current year, [Calendar Year], where [Calendar Year] is adjustable and so is the current year.

2) The last year, [Calendar Year] - 1.

The formula a * b = c works properly when I set [Calendar Year] to 2023, which gives:

[screenshot: MarcoLuk_0-1727248297283.png]

However, when I set [Calendar Year] to 2024, the YoY expression calculates the 2023 data. It calculates 'number a' and 'number b' correctly, but some of the 'number c' values cannot be calculated and 0 is displayed directly in the table.

[screenshot: MarcoLuk_1-1727248597417.png]

I did several checks on NULL values and the selection state, and established that the problem was not caused by them. I might try offloading the complexity of the script now. Thank you.

marcus_sommer

Before going further, I would check the real existing data and their associations by using a table box with all relevant fields. In this case that would be at least a + b + [Calendar Year] + the further dimensions which are used within the charts and as selections for this view. Depending on your data set you may need more granular information within the table box, for example the Date and/or an (order or similar) ID and/or even a recno() + rowno() from the table loads.
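If the load script can be touched at all, adding such row identifiers is a one-liner per table (a sketch; the table and source names are placeholders):

Facts:
LOAD
    RecNo() as FactRecNo,  // row position in the data source
    RowNo() as FactRowNo,  // row position in the resulting table
    *
FROM [lib://Data/Facts.qvd] (qvd);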

Then selecting the various specific use cases should show whether all data and their associations exist as they should.

I assume that some of them are not correct, or not as expected, and that this is the starting point for improving the data quality, adjusting the data model and/or changing the object/expression design. In your case I could imagine that the dimensional context of the object and/or the aggr() isn't suitable, and that some parts run into zero or NULL and then force the entire result to it.

MarcoLuk
Contributor
Author

Thank you for your reply again. I may freeze this topic for a while, as I want to adopt a cleaner script and make use of set statements.


---------------------------------------------------


The zero and NULL conditions are not the cause of the 0s occurring in the calculation, as I treat them as 0 in the summation. The aggr() is only used in the last step, for the summation of all 'number c' values, and the $<>+$<> set union is checked and error-free, as it is used by other components and works well.

This is what I get when [Calendar Year] is 2020, for example:

[screenshot: MarcoLuk_1-1727251841564.png]


And this is the data returned by the last-year expression when I set [Calendar Year] to 2021:

[screenshot: MarcoLuk_2-1727251975076.png]

 

The tiny difference is caused by the 0s mentioned above, as shown below.

1) When [Calendar Year] = 2020:

[screenshot: MarcoLuk_1-1727255244028.png]

2) When [Calendar Year] = 2021, the last-year expression with $<>+$<> gives 0:

[screenshot: MarcoLuk_0-1727255194600.png]

 

marcus_sommer

Both of the above-mentioned reasons (a not properly addressed selection state, and/or data-quality issues respectively an unsuitable dimensional context) seem to me the most likely causes.

Besides the various measures to simplify and/or adjust the data model / UI object, you may also do a bit of trial & error by replacing the alt() defaults of zero, step by step, with real numbers - like 1, 10, 100 and/or negative ones, and/or whatever produces uniquely identifiable numbers - to detect the parts which may not return a native result. See the sketch below.
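For example (a sketch of one input from the expression above, with deliberately conspicuous sentinel defaults in place of 0):

// If -9999 or -1111 shows up in the chart, that branch fell back to
// its alt() default instead of returning a native value:
sum({<$(vSetAnalysisCompareUnit), [SlnGp] = {'X','Y'}>}
    alt([Rev], -9999) * alt([Rate], -1111))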