My initial guess is that your issue is related to the data set containing NULL values, or to the data model containing synthetic keys, meaning that your chart object shows rows that do not really exist in your data. Your description sounds a bit strange though, so please attach a sample QVW to make this easier to evaluate.
In case the app contains sensitive data, please use the scrambling function to make fields unreadable:
Settings > Document Settings > Scrambling
Also feel free to remove objects (such as company logos) that are not relevant to this problem, to make the app even more anonymous.
Toni, I don't have any NULL values in any of the key fields of the table in question, nor any synthetic keys in the data model.
Basically I have 3 tables, each with an inputfield.
This is what I'm doing in the script:
'0' as input_price1,
I then have 3 pivot tables with an expression that takes the corresponding inputfield value if it differs from '0'.
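Roughly, each of the three tables is set up like this in the script (everything except input_price1 is an illustrative placeholder, not my actual names):

    INPUTFIELD input_price1;    // must be declared before the field is loaded

    Prices1:
    LOAD
        ProductKey,             // placeholder key field
        '0' as input_price1     // '0' as the default value after each reload
    FROM prices1.qvd (qvd);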
The problem I'm having is only with one of the tables/inputfields.
I created a list box with the field input_price1, and after reload all values are '0', which is correct. So I save the document, close it, and when I open it again the values change to 0, 1, 2, 3, 4, ... up to a bit more than the number of records.
How is that even possible?
I managed to find a temporary workaround: storing the table that contains the problematic inputfield into a QVD, dropping the table, and then reloading it from the QVD later in the script.
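In script terms the workaround looks roughly like this (MyTable and the QVD path are placeholders for the actual names):

    STORE MyTable INTO MyTable_tmp.qvd (qvd);
    DROP TABLE MyTable;

    // ... later in the script: a plain 'from' load, not a resident load
    MyTable:
    LOAD * FROM MyTable_tmp.qvd (qvd);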
It must be a problem with the links between the tables, or with the pointers.
I have forwarded this to Support and will update later.
Reply from Support:
Thank you for bringing this problem to our attention. We have logged it
with our Issue Analysis team - ID 39541. If approved as a bug, the
issue will hopefully be fixed in the next Service Release.
I will advise you if there are any further developments in this regard.
Please accept our apologies for any inconvenience.
All I can do now is wait.
The paragraph below was taken from the QV10 SR3 release notes:
"Inputfields are not compatible with joins or resident loads.
The reason for this is that an inputfield must be read one time and one time only. Once
the inputfield is read it is added to an inputfield table and this table cannot be
tampered with. Joins affect the table and for that reason inputfields are disabled when
combined with joins. Resident works slightly different but it also affects the inputfield
table and it is disabled for the same reason."
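To make that concrete, either of the following script patterns will silently demote an inputfield to a normal field (all names here are illustrative):

    INPUTFIELD MyInput;
    Data:
    LOAD Key, '0' as MyInput FROM source.qvd (qvd);

    // pattern 1: a join into the table carrying the inputfield
    LEFT JOIN (Data)
    LOAD Key, Extra FROM other.qvd (qvd);

    // pattern 2: a resident load of that table
    Copy:
    LOAD * RESIDENT Data;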
I tried adding the inputfield only AFTER all joins and resident loads that use the table, but that didn't work.
The inputfield was still converted to a normal field, so I don't get it.
We determined in the Developer II training class for version 10 that SR3 is causing the problem with INPUTFIELDs. Several of the students had the same problem and others did not; after researching, we found that the problem occurred only with SR3. The other students had SR1 and SR2. Hope this helps resolve this problem.
Another method might be to create an in-memory table with just a key field and the input fields using a 'from' load, then rely on the automatic association to use the input fields as before.
It means creating an extra table but it would at least put an explicit table in your data model that shows what is going on with the input fields.
This seems to work in my test environment. I'd be interested to know if anyone else is using inputfields in this way.
Yes, exactly. I'm using the same method as suggested recently by support (the table name Data comes from my script; the other names are illustrative):

    Data:
    LOAD RowNo() as InputKey, * FROM ...;   // main table gets a sequential row key

    INPUTFIELD MyInputField;
    InputTable:
    LOAD RowNo() as InputKey,
         '0' as MyInputField                // sets 0 as default value in my case
    AUTOGENERATE peek('InputKey', -1, 'Data');