30 Replies Latest reply: May 17, 2011 7:42 PM by Michael Terenzi

# Differing Answers with and without SUM

Hi,

Although an expert with Excel, I am relatively new to the Qlikview community - but loving the power and speed of the application. However, I am having some strange results that I do not understand . . . and they are throwing the dashboard way off course.

I am selecting several records and based on the number and the volume of a product, I can calculate the volume of product at a particular location - and there could be several products at the same location. But the results are not as I expected - and potentially point to me not understanding the use of SUM exactly.

If I use the following formula: Sum([Stock_Ledger_Detail QTYS]*[Product_Details VOLUME]) I get the incorrect answers for the product volume:

I exported the above data to Excel and manually multiplied the Stock Ledger Quantities by the Product Volume to show the right answers:

| Location_Details Data Source | Site_Details SITE_CODE | Location_Details LOC_STAT | Location_Details LOC_CODE | Stock_Ledger_Detail PROD_CODE | Stock_Ledger_Detail QTYS | Product_Details VOLUME | Available Volume | Product Volume | Correct Answer | Factor Bigger |
|---|---|---|---|---|---|---|---|---|---|---|
| GACware (Unite) | B2A | Mixed | VF01220 | 273988 | 28 | 0.006084 | 1.1000 | 0.3407 | 0.170352 | 2 |
| GACware (Unite) | B2A | Mixed | VF01220 | 273988 | 30 | 0.006084 | 1.1000 | 0.7301 | 0.18252 | 4 |
| GACware (Unite) | B2A | Mixed | VF01220 | 273988 | 40 | 0.006084 | 1.1000 | 0.4867 | 0.24336 | 2 |
| GACware (Unite) | B2A | Mixed | Total |  |  |  | 1.1000 | 1.5575 | 0.596232 | 2.612245 |

Using SUM, the answers calculated by Qlikview are larger by a factor of either 2 or 4?

If I use the same as above, but drop the SUM portion, then the product volumes are correct, but I lose the sub-totals:

Can someone tell me why SUM behaves like this? And yet the Available volume calculation, also using SUM, is smart enough to know that there is a single location and that the volume is not the sum of the three records, but is 1.1CBM?

I am lost and need some enlightenment . . . please.

Thanks
Ian

• ###### Differing Answers with and without SUM

Mmmm, more and more confusing . . . but I had a small pie chart that calculated the used volume, and although removing SUM from my equation corrected the calculated volume per line, the total of the three lines still showed the incorrect 1.56CBM.

So somehow Qlikview's summing process is either being misused by me or I do not understand its application.

The correct sum for the shown location is as shown in Excel: 0.596232CBM. But in both cases when the location is selected, the chart is wrong . . .

Aaaargh, silly me - okay, worked this one out - the chart is using the formula with SUM to total the location and I was only changing the expression in the table. Doh! Blonde post . . . but decided to post anyway to show I am at least thinking through this problem :-)

Appreciate any help out there.

Regards
Ian

• ###### Differing Answers with and without SUM

And now . . . when I go and remove the SUM from the formula on the pie chart, I get the message that "there is no data to display".

So clearly I need some expert guidance . . . and at the moment it is not available in Dubai . . . the support engineer is in India getting married :-)

Thanks
Ian

• ###### AW:Re: Differing Answers with and without SUM

Hi, Ian,

can you upload an example? That helps us to find the problem! Thanks

• ###### AW:Re: Differing Answers with and without SUM

Hello Ian,

best wishes to your support engineer. I suppose he is luckier than you (at least for the moment).

To give you the needed guidance, the best thing would be a little example app, perhaps with anonymized (what a word) data and a textbox with your demands.

Regards, Roland

• ###### AW:Re: Differing Answers with and without SUM

Uum, not quite sure how to upload an example . . . I tried by cutting and pasting the image outputs from the Pivot table.

Let me know what I should do to upload an example . . . the dataset is large . . . it is currently 9MB with hundreds of thousands of records . . . and clearly I am in the learning phase.

The data I am struggling with is sucked out of a Sybase DB from about 4 different tables. I have other charts using about 40,000 lines from a single Excel worksheet working just fine.

I am pretty sure it is just me not knowing Qlikview well enough!

• ###### AW:Re: AW:Re: Differing Answers with and without SUM

Hi, Ian,

there is an option under File to save only selected data, so you can first select some data, then save the document. If you want, there is also a menu option to scramble data under Document Properties.

And then look at the forum's attachment option to upload the small file

• ###### SV:AW:Re: AW:Re: Differing Answers with and without SUM

1. Save a new copy of your file so you don't corrupt your own work

2. Make suitable selections to exemplify your problem, and to minimize the data/file size

3. Reduce the data; File > Reduce Data > Keep Possible values

4. Scramble data if it is sensitive; Settings > Document Properties > Scrambling

5. Save the reduced and scrambled file.

6. Attach it to this thread

• ###### SV:AW:Re: AW:Re: Differing Answers with and without SUM

Okay, done. The interesting data is on the Location Metrics tab - currently I am not using SUM in the Location Volume by Site and Status pivot table - the calculated volume per line is now correct, but the SUM is not working, as can be seen from the pie chart. If the SUM expression is uncommented and the other formula is commented out, you will see the change that is confusing me.

You guys are brilliant - I look forward to learning exactly where my logic is flawed as I do not like illogical things . . . and this seems to be completely illogical.

I look forward to your response.

Thanks
Ian

• ###### AW:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

Hi, Ian,

at first, you have THREE Synthetic tables!!!! That's wrong, QV does not like them! Combine fields in two tables to create a new key, so that the synthetic tables disappear!

second: is the Product_Details VOLUME always the same?

• ###### AW:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

Hi Martina,

Okay, this is great feedback - please remember this is my first application I am building and I have had no training . . . so I am as newbie as anyone gets. :-(

How do I combine fields to create a new key? So for instance, $Syn 1 Table is a combination of SITE_CODE and LOC_CODE, as this is what guarantees uniqueness. Can you briefly explain how to get rid of this $Syn 1 table? There were 7 synthetic tables when the consultant left, so I was pretty chuffed to tidy everything up and get it to 3 :-)

Secondly, what seems to change is if you select the properties for <Location Volume by Site and Status> pivot table and then select the second expression <Product Volume>. There should be two versions of the calculation, one using sum and one without using sum. The calculation should be the sum of all quantities of the possibly various products multiplied by their individual product volume to give a total volume. I get this with no sum, but then no subtotals and the pie chart stops working. If I use the version that sums everything, I get the incorrect field values inflated by 2x or 4x the expected value. But the pie chart works - even if it is with the wrong total?

The same thing is happening when the individual quantities are summed - the total is wrong? Should be 98 if you look at the pivot table, but the pie chart gives it as 128? Which looks like the value of 30 may have been counted twice.

As I said, this is my first application and the last time I was hands-on with SQL and DBs was at university over 25 years back ;-) So gentleness for the old dog is requested.

Thanks
Ian

• ###### AW:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

I changed the formula to include the SUM and then exported the pivot table to Excel. In the yellow highlighted portion on the right, I have manually calculated the expected volumes and highlighted the difference. Worksheet is attached.

Cheers
Ian

• ###### AW:Re: AW:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

Hi,

your script could look like this:

```
Location_Details:
LOAD
SITE_CODE & '/' & LOC_CODE As KeySideLocCode,
SITE_CODE As [Key Site_Details SITE_CODE],
LOC_CODE As [Key Location_Details LOC_CODE],
SITE_CODE As [Key Stock_Ledger_Detail SITE_CODE],
LOC_CODE As [Key Stock_Ledger_Detail LOC_CODE],
SITE_CODE As [Location_Details SITE_CODE],
LOC_CODE As [Location_Details LOC_CODE],
// A constant is imported in [Site_Details Data Source] to help troubleshoot if the same location is imported from different WMS's
'GACware (Unite)' As [Location_Details Data Source],
LOC_DESC As [Location_Details LOC_DESC],
If(LOC_STAT ='M','Mixed',If(LOC_STAT='E','Empty',If(LOC_STAT ='F','Full',If(LOC_STAT ='R','Reserved',If(LOC_STAT ='P','Pickslot',If(LOC_STAT ='B','Bad',If(LOC_STAT ='','Bulk'))))))) As [Location_Details LOC_STAT],
AISLE As [Location_Details AISLE],
COLUMN As [Location_Details COLUMN],
HEIGHT As [Location_Details HEIGHT];
SQL SELECT *
// FROM TESTDB.dbo.LOCATION WHERE LOC_STAT<>'B';
FROM GACWAREDB.dbo.LOCATION WHERE LOC_STAT<>'B';

DISCONNECT;

// ODBC CONNECT TO GACWARE_DRP (XUserId is TALLXUFMObaON, XPassword is IRcKDYFMTbcOXXJOETQA);
ODBC CONNECT TO GACWARE_LIVE (XUserId is RPRPXUFMObaAN, XPassword is aIfUNYFMTbcOXXJOEDZB);

Concatenate
//Location_Details:
LOAD
SITE_CODE & '/' & LOC_CODE As KeySideLocCode,
SITE_CODE As [Key Site_Details SITE_CODE],
LOC_CODE As [Key Location_Details LOC_CODE],
SITE_CODE As [Key Stock_Ledger_Detail SITE_CODE],
LOC_CODE As [Key Stock_Ledger_Detail LOC_CODE],
SITE_CODE As [Location_Details SITE_CODE],
LOC_CODE As [Location_Details LOC_CODE],
// A constant is imported in [Site_Details Data Source] to help troubleshoot if the same location is imported from different WMS's
'GACware (Dubai)' As [Location_Details Data Source],
LOC_DESC As [Location_Details LOC_DESC],
If(LOC_STAT ='M','Mixed',If(LOC_STAT='E','Empty',If(LOC_STAT ='F','Full',If(LOC_STAT ='R','Reserved',If(LOC_STAT ='P','Pickslot',If(LOC_STAT ='B','Bad',If(LOC_STAT ='','Bulk'))))))) As [Location_Details LOC_STAT],
AISLE As [Location_Details AISLE],
COLUMN As [Location_Details COLUMN],
HEIGHT As [Location_Details HEIGHT];
SQL SELECT *
FROM GACWAREDB.dbo.LOCATION WHERE LOC_STAT<>'B';

DISCONNECT;

// ODBC CONNECT TO TEST_GACWARE (XUserId is NYfWZUFMObaCDYAGQZCB, XPassword is KOLMIYFMTbcOXXJOEDcB);
ODBC CONNECT TO GACWAREunite (XUserId is LFRLWUFMObaCDYAGQBOA, XPassword is aUdRdDNKVTcMWYYNHLMGHGFGGLZOXYEd);

Stock_Ledger_Detail:
LOAD
SITE_CODE & '/' & LOC_CODE As KeySideLocCode,
// SITE_CODE As [Key Stock_Ledger_Detail SITE_CODE],
// LOC_CODE As [Key Stock_Ledger_Detail LOC_CODE],
PRIN_CODE As [Key Principal_Master PRIN_CODE],
PROD_CODE As [Key Product_Details PROD_CODE],
SITE_CODE As [Stock_Ledger_Detail SITE_CODE],
LOC_CODE As [Stock_Ledger_Detail LOC_CODE],
PRIN_CODE As [Stock_Ledger_Detail PRIN_CODE],
PROD_CODE As [Stock_Ledger_Detail PROD_CODE],
QTYS As [Stock_Ledger_Detail QTYS];
SQL SELECT *
// FROM TESTDB.dbo.STKLED;
FROM GACWAREDB.dbo.STKLED;

DISCONNECT;

// ODBC CONNECT TO GACWARE_DRP (XUserId is TALLXUFMObaON, XPassword is IRcKDYFMTbcOXXJOETQA);
ODBC CONNECT TO GACWARE_LIVE (XUserId is RPRPXUFMObaAN, XPassword is aIfUNYFMTbcOXXJOEDZB);

Concatenate
//Stock_Ledger_Detail:
LOAD
SITE_CODE & '/' & LOC_CODE As KeySideLocCode,
// SITE_CODE As [Key Stock_Ledger_Detail SITE_CODE],
// LOC_CODE As [Key Stock_Ledger_Detail LOC_CODE],
PRIN_CODE As [Key Principal_Master PRIN_CODE],
PROD_CODE As [Key Product_Details PROD_CODE],
SITE_CODE As [Stock_Ledger_Detail SITE_CODE],
LOC_CODE As [Stock_Ledger_Detail LOC_CODE],
PRIN_CODE As [Stock_Ledger_Detail PRIN_CODE],
PROD_CODE As [Stock_Ledger_Detail PROD_CODE],
QTYS As [Stock_Ledger_Detail QTYS];
SQL SELECT *
FROM GACWAREDB.dbo.STKLED;

DISCONNECT;
```

• ###### AW:Re: AW:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

The suggested change to the linking key removed the Synthetic table . . . however, the volumes are still out by factors of 2x and in some cases 4x? I am baffled and desperately need someone to point out what must be a common mistake I have made somewhere?

Come on QV community . . . someone has seen my mistake before?

Cheers
Ian

• ###### AW:Re: AW:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

Hi

I made a count and you can see that you have 2 Product_Details VOLUME values in every row, and in some rows you have 2 Stock_Ledger_Detail QTYS.

Maybe that can help you to solve the problem with data structure.

Regards

Anders
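Anders's count is the classic symptom: if the product dimension table holds the same PROD_CODE twice (here, once per source system), every matching stock-ledger row is doubled in the association, and Sum() doubles with it. A minimal plain-Python sketch of that join behaviour, with invented toy values (the real tables and figures differ):

```python
# Toy data: field names mirror the thread, the values are made up.
stock_ledger = [            # fact rows: (PROD_CODE, QTYS)
    ("VF01220", 28),
    ("VF01220", 30),
    ("VF01220", 40),
]
product_details = [         # dimension rows: (PROD_CODE, VOLUME)
    ("VF01220", 0.006084),  # record loaded from one system
    ("VF01220", 0.006084),  # the same product loaded again from the other system
]

# A join on PROD_CODE emits one output row per matching pair, so every
# fact row is repeated once per duplicate dimension record.
joined = [
    (qty, vol)
    for prod_f, qty in stock_ledger
    for prod_d, vol in product_details
    if prod_f == prod_d
]

inflated = sum(qty * vol for qty, vol in joined)           # twice the truth
correct = sum(qty * 0.006084 for _, qty in stock_ledger)   # 98 * 0.006084
```

With one duplicated dimension record, every sum comes out exactly 2x too large; a row duplicated on the fact side as well would push individual lines to 4x, which would be consistent with the mixed 2x/4x factors in the exported table.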

• ###### AW:Re: AW:Re: AW:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

Hi, Ian,

why do you have more data than before? Upload an example again!

• ###### AW:Re: AW:Re: AW:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

Hi Martina / Anders,

The reason there is more data, was that prior to reducing it, I selected three locations as opposed to only one. The reason for this was that I wanted to give the community a little more data to look at and try and see some trends. I have reduced this to a dataset that includes 4 locations.

I am not sure if I have made progress, but something that Anders pointed out may be a clue to why this is working as it is. The company is currently in the throes of an application migration. It is not a complete application change, but more a migration from a version of the product that was developed specifically to manage one location to a group version of the application that is used in all other locations around the world.

This means that for things like the product details, including the product code and product volume, I could be importing the exact same record from each application, as generally I am importing from each database (Dubai and Group version) and then merging in a QV table. This theoretically implies I could have two identical records in terms of PRIN_CODE and VOLUME . . .

. . .and having applied a constant which I read in when reading records from the Dubai database and a different one when reading in from the Group database . . . this is no longer theoretical . . . it is exactly what is happening. The company cannot purge the data at the application level, as migrated customers need to have their old transactional data maintained for several years in the old system for legal reasons.

This explains the double count for volume as these records exist in both systems - for entirely valid reasons. But so far I haven't worked out why the QTYS is counted as 2 in certain cases.

The migration project is going to be going on for several years I guess . . . so I need to work out a way around these two small hurdles?

One way that would help, I think, is the following: I read a manually maintained Excel spreadsheet into a QV table called Principal_Master. This includes the principal code and the WMS (Warehouse Management System). If the principal has been migrated, their WMS will be shown as GACware (Unite), and if not it will be GACware (Dubai). So I know which principal is being managed by which system. However, how can I limit the import from the two databases based on the WMS value in this Principal_Master table, as this is not available at the DB level? In other words, only import data related to GACware (Unite) if the PRIN_CODE has a corresponding WMS showing GACware (Unite)?
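The intended filter can be sketched outside QlikView as a simple lookup: keep a row from a given source only when Principal_Master says that principal is managed by that source. (In QlikView script, a common approach is to load Principal_Master first and restrict the database loads with a Where Exists(...) clause on a combined key; the Python below is just an illustration of the set logic, with invented names and data.)

```python
# Hypothetical mapping taken from the Principal_Master spreadsheet:
# PRIN_CODE -> WMS currently responsible for that principal.
principal_wms = {
    "P001": "GACware (Unite)",
    "P002": "GACware (Dubai)",
}

def keep_rows(rows, source_wms):
    """Keep only rows whose principal is managed by the system being read."""
    return [r for r in rows if principal_wms.get(r["PRIN_CODE"]) == source_wms]

# Rows read from the Unite database; P002 has not migrated and should be dropped.
unite_rows = [
    {"PRIN_CODE": "P001", "QTYS": 10},
    {"PRIN_CODE": "P002", "QTYS": 5},
]
filtered = keep_rows(unite_rows, "GACware (Unite)")
```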

I am continuing to dig around and try and work out the best way of correctly modelling the data . . . will post an update here as soon as I discover anything else. In the meantime, any additional ideas? I have attached a reduced file so you can see the script and advise if you have any ideas.

Thanks
Ian

• ###### AW:Re: AW:Re: AW:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

Okay All QV Community Superstars,

Progress at last . . . I took our small sample and reviewed it against the physical database tables and the data held in them . . . and the current QV worksheet is spot on. What is being modelled here is a warehouse with hundreds of thousands of locations and millions of potential products. Products are put away into locations and picked on behalf of our customers.

What eventually happens, depending on what products have been picked, is that you get multiple lines in the database describing largely the same products. So when we had a line showing three (3) items left of a particular product in a particular location, it was quite feasible for there to be multiple lines of the same product but with different amounts left - which is what we had seen in the worksheet. Where COUNT was showing more than one - for example, a COUNT of 3 against a product quantity of, say, 19 - it was essentially describing THREE (3) lines in the database for the 19 items of the product remaining in the location.

So the counts and the product volume seem correct, although I have now exported a large pivot table out of the database for one of the stock controllers to do some data quality control work on.

Another issue is that they are in the midst of a large customer migration, so over 40,000 locations are briefly live in both systems. This is temporary while they transfer stock across from the old system to the new system - and it will be shut down as soon as the migration is completed.

Two reasonably simple questions:

1. Is there any way in a pivot table to see a count of lines for any particular column?

2. Is there any way to limit the display of data based on the results of an expression? For example, when we counted QTYS, could we have displayed only those greater than 1?

So very much appreciate your continued support.

Regards
Ian

• ###### AW:Re: AW:Re: AW:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

People,

I also need to know if it is possible to do the following:

I have an expression that calculates volume utilisation by essentially taking the volume of a product in a location and dividing it by the total available volume of the location and multiply it by 100. Effectively percentage utilisation.

When the pivot table calculates a subtotal, how do I tell it to sum the product volume in all locations and divide by the sum of available volume of all locations? Instead of summing the records above?

From this example spreadsheet, QV correctly calculates the 4.5cbm as it can see that the first two lines are for the same location and it does not double count them. The product count and the used volume are also correctly counted, but the Utilisation which should be 66.89 ((3.01/4.5)*100) is incorrectly calculated as the sum of the lines above to give 200.67.

Any ideas?

Thanks
Ian

• ###### AW:Re: AW:Re: AW:Re: AW:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

Hi, Ian,

there is a slightly strange pair of functions: Dimensionality() for rows and SecondaryDimensionality() for columns. They return a number that tells you which partial-sum level the current cell is on. You can then wrap your expression in an if-statement like:

If(Dimensionality() = 0, Sum(Field) / Sum(TOTAL Field), Sum(Field))

or similar. For percentages you can also use the Show in Percent option on the Number tab!
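The point of the Dimensionality() switch is that a percentage subtotal has to be recomputed as a ratio of sums, not as a sum of per-row ratios. A small plain-Python sketch with invented numbers makes the difference visible:

```python
# Each row: used and available volume for one location line (made-up values).
rows = [
    {"used": 3.01, "available": 4.5},
    {"used": 1.00, "available": 2.0},
]

# Summing the per-row percentages gives a meaningless subtotal (> 100% here).
sum_of_ratios = sum(r["used"] / r["available"] * 100 for r in rows)

# What the pivot subtotal should show: total used over total available.
ratio_of_sums = (
    sum(r["used"] for r in rows) / sum(r["available"] for r in rows) * 100
)
```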

• ###### Differing Answers with and without SUM

I am a little confused.  Are you still having the problem with results that are either 2x or 4x what they should be when using the sum function?

What version of Qlikview are you using?

Do you use Publisher and see these incorrect results through the plugin, or from the desktop product?

I have support working on a case due to a similar problem that I am experiencing.  We are using V9 SR6, Publisher and the plugin.

We occasionally get 4x summed calculated values in an IF statement in a chart in our production environment when viewing the app from the accesspoint.  We get correct values when viewing the deployed app, at the same time, with the desktop product.

When this occurs, we stop and start the server service and then the accesspoint provides the correct result.  This problem is sporadic and sometimes we go 10 days before it occurs, and sometimes we go a few days.

• ###### SV:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

The aim for the data model should be to have one key per table, to get simple links between tables.

Just as Martina exemplified, an easy way to resolve the synthetic key is to create a new key field where the multiple keys are concatenated into one single value.

Looking at your model I would also recommend that you try to organise the tables into a more star-like schema, instead of the chain you have now. That means having one central fact table to which you connect attribute tables. This will make QlikView calculations quicker, and also make it easier for you as a developer/designer to interpret the data relations.

It is also highly encouraged that you take the time to look at QlikView's online training material, as it covers a lot of the fundamental topics needed to become a QlikView pro. For working with scripts, the Developer training is what will benefit you the most.

http://qlik.com/training

• ###### SV:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

Thanks Martina and Toni,

Okay, I can see how combining the keys will get rid of the synthetic key, but how do I link back to, say, the Principal table so I have access to the more user-friendly Principal Name as opposed to just the Principal Code?

Secondly, the star data model sounds far more intuitive - I have struggled for several days just to get something like that linked correctly (or so I thought). Do you have any suggestions for the changes required to turn my chain into your star? :-)

Thirdly, regardless of the changes, can anyone see the differences that seem to show up in using SUM and not using it? Or do you think this is a consequence of the synthetic keys?

You cannot believe how helpful you people have been - I have done some of the online training . . . clearly a little classroom training is needed, but it is not offered in the UAE.

Regards
Ian

• ###### SV:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

Okay,

Further to my last mail, I have done as suggested by Martina and appear to have got rid of the synthetic keys . . . not sure I have completely got my head around it just yet . . . but closer.

I saved a slightly larger dataset this time - 3 locations - and changed the expression to include the sum of all volumes used in the location. I then exported the pivot table to Excel, added two columns to show what the volume x quantity should be, and highlighted the difference.

I feel like I have progressed . . . even if it is still not yielding the correct answer. So . . . next steps to narrow down the problem?

Thanks peeps. Much appreciated.

Regards
Ian

• ###### SV:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

And the Excel sheet as I seem to only be able to add one file and not multiple ones.

Thanks
Ian

• ###### SV:Re: SV:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

I am quite convinced that the root cause of your problem is in the data structure. In this context you must keep in mind that QV bases its calculations on the data set corresponding to the current selections. That data set is the set of current combinations of values, given the selections and the fields used in the object, so if you have one field in table A and another in table D, the relation is spread over tables A-B-C-D. In theory there is no problem with this type of data relation.

The problem begins in the scenario where your data does not have a logical connection across all the tables. In these cases QV will find a way around it, thereby creating every possible combination of rows in the data set. The typical symptom is that your SUM operations return larger numbers, as there are more rows in the data set. By using a star schema you are unlikely to end up in this situation, as your tables are closely related to each other. You can of course have several stars in your schema and tie them together into a snowflake schema.
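The "all possible combinations" effect is easy to reproduce outside QlikView: with no shared key between two tables, the only data set the engine can build is the full cross product, so row counts (and therefore sums) scale by the size of the unmatched table. A toy sketch with invented values:

```python
from itertools import product

qtys = [28, 30, 40]            # 3 fact rows
volumes = [0.006084] * 4       # 4 dimension rows with no usable link

# Every fact row pairs with every dimension row: 3 * 4 = 12 combinations.
combos = list(product(qtys, volumes))

inflated = sum(q * v for q, v in combos)
correct = sum(q * 0.006084 for q in qtys)  # what a proper 1:1 link would give
```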

Unfortunately I do not have the possibility to dig deeper into your application, this is usually something directed to QlikView consultants in order to get help on the way forward with app development.

Find more details about consulting at qlik.com > Services > Consulting

• ###### SV:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

Hi Ian

I seem to be having the same issue when it comes to averaging some data.

I think it's my data structure too. Did you get a resolution for your query?

• ###### SV:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

Nope, not yet . . . I think I will need to re-look at how I am linking everything together . . . clearly something is not right, as my answers even on simple datasets are out by factors of 2x and 4x?

I know there is a straight forward explanation somewhere . . . just wish someone would enlighten me :-)

• ###### Re: SV:Re: SV:AW:Re: AW:Re: Differing Answers with and without SUM

Toni Kautto wrote:

The aim for the data model should be to have one key per table, to get simple links between tables.

Just as Martina exemplified, an easy way to resolve the synthetic key is to create a new key field where the multiple keys are concatenated into one single value.

Brenner.Martina wrote:

at first, you have THREE Synthetic tables!!!! That's wrong, QV does not like them! Combine fields in two tables to create a new key, so that the synthetic tables disappear!

(Edit:  Ack!  I didn't notice there were two pages and that I was responding to old posts.)

I haven't read the thread, so I don't know what sort of data structure Ian needs to solve his problem.  His synthetic keys may be completely inappropriate and causing problems.  He might well need to remove them.

But if so, replacing them 1:1 with manually-created composite keys is worse than useless.  Yes, I know the reference manual tells you to do this.  It's wrong.  Yes, I know your trainer taught you to do this.  Your trainer was wrong.  The synthetic key IS a composite key.  It already IS what you are replacing it with.  The only difference is that with a synthetic key, QlikView does all the hard work for you, saving you trouble, and allowing your load to go faster.  (Also, you cannot refer directly to the synthetic key table or ID - they're purely internal, but you typically don't need to refer to either, so this typically isn't a drawback.)

Why are synthetic keys considered so bad, then?  Because they often appear in bad data models, and this is usually how they are first encountered.  But it is the bad data model causing the problem, not the synthetic key.  And replacing the synthetic key 1:1 with a composite key does nothing to fix the data model, so whatever problems you have will persist.  If you have such problems, you have to fix the fundamental problem in the data model, not just replace a synthetic key with a composite key.

All the synthetic key does is link two or more tables by two or more fields.  There is nothing strictly wrong with linking two or more tables by two or more fields.  It is only a problem if that's a bad way to model your data.

There's more information in the below thread.  I'm linking to the last post, where I fixed the formatting on the first post that got badly corrupted on the move to the new forum.  Basically, start there, then go back to page 1, ignore the first post (it's unreadable now), and read from there.

http://community.qlik.com/message/116838

• ###### Differing Answers with and without SUM

Okay, I also had a pie chart that calculated the number of pieces in a particular location above my pivot table. The three records clearly show amounts of 28, 30 and 40. Which in my book should equal 98 . . . but the sum formula used in the pie chart:
Sum([Stock_Ledger_Detail QTYS]) shows 128 . . . which I would suspect is 28+(30+30)+40?

Why are the formulae all working so strangely . . . is something not linked correctly? I am now truly stumped.

Thanks
Ian

• ###### Re: Differing Answers with and without SUM

Hi Ian,

I think what you are running into here is how you are handling duplicate records in a pivot table. For more information, open QlikView and hit F1, then search 'aggr'>Nested Aggregations and Related Issues.

I've modified the qvw just a little and created a Straight table which shows the duplicates and sums them correctly, and also a modified pivot table with some adjustments in the expression...I'm not sure what you are trying to do or how you want to sum across your dimensions, but I think this will set you in the right direction.
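The Aggr-style adjustment mentioned above boils down to a two-stage aggregation: first collapse the duplicated rows to one value per underlying line, then sum the collapsed values (in QlikView terms, something along the lines of Sum(Aggr(Only(QTYS), LineKey))). A plain-Python sketch with invented keys and values:

```python
# Rows as the chart sees them: the 30-qty line appears twice because it is
# matched by two source records (hypothetical line keys).
rows = [
    ("line1", 28),
    ("line2", 30),
    ("line2", 30),  # duplicate of the same underlying line
    ("line3", 40),
]

naive_total = sum(q for _, q in rows)  # 128, the wrong pie-chart value

# Stage 1: reduce to one value per line key; duplicates collapse away.
per_line = {}
for key, q in rows:
    per_line[key] = q

# Stage 2: sum the collapsed values.
deduped_total = sum(per_line.values())  # 98, the expected answer
```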

Cheers!