Stravan
Contributor III

Load fields depending on whether they have values

Hi everybody, I need help with the following question: in my script below I want to load the fields Injuryposition1-6 only if they contain values. For example, if field 6 has no data, I do not want to load it. How should I do that in the script?

I have many tables that contain all of these fields whether they hold values or not, and I do not want to show the ones that are empty. I do not want to change the table box to a chart or a straight table. If there is some way to display fields in a table box only when they have values, that would also solve my problem.

PERSON:
Load Distinct
Num(Accident_nr) as Person_OlycksID,
XSweref as Person_X_sweref,
YSweref as Person_Y_sweref,
Injuryposition1 as Person_Injuryposition1,
InjuryPosition2 as Person_Injuryposition2,
InjuryPosition3 as Person_Injuryposition3,
InjuryPosition4 as Person_Injuryposition4,
InjuryPosition5 as Person_Injuryposition5,
InjuryPosition6 as Person_Injuryposition6
FROM UOS.qvd (qvd);

22 Replies
marcus_sommer

I am not sure whether I understand your question, but my previously mentioned loop routine might need some adjustments so that only the fields from this table are dropped and not all of those fields from the existing data model. This means:

for i = 1 to nooffields('table')
   let vField = fieldname($(i), 'table');
   if wildmatch('$(vField)', '*my fields*') then
      if fieldvaluecount('$(vField)') then
         trace $(vField) has values;
      else
         trace $(vField) has been removed;
         drop fields [$(vField)] from [table];
      end if
   end if
next

Another sometimes important point could be to change the order of the various load statements - the result may differ depending on whether all tables are loaded first and the cleaning happens afterwards, or the cleaning is applied directly after each load. A sketch of the second variant is shown below.
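A minimal sketch of that second variant, assuming the loop is wrapped in a subroutine (the name CleanEmptyFields, the wildcard pattern and the commented second load are only illustrative). Note that this variation also walks the fields backwards so that dropping a field does not shift the positions of the fields not yet visited:

Sub CleanEmptyFields(vTable)
   // walk the fields of one table from last to first, so a drop does not renumber the rest
   For i = NoOfFields('$(vTable)') to 1 step -1
      Let vField = FieldName($(i), '$(vTable)');
      If WildMatch('$(vField)', '*Injuryposition*') Then
         If FieldValueCount('$(vField)') = 0 Then
            Trace $(vField) has been removed;
            Drop Field [$(vField)] From [$(vTable)];
         End If
      End If
   Next
End Sub

PERSON:
Load Distinct * From UOS.qvd (qvd);
Call CleanEmptyFields('PERSON');          // clean directly after this load ...

// OTHER_TABLE:
// Load * From OtherTable.qvd (qvd);
// Call CleanEmptyFields('OTHER_TABLE');  // ... and again after each further load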

Beside this I strongly suggest reconsidering the whole approach of keeping such large crosstables within the data model and using them (nearly) unchanged within UI tables. The effort to handle them in the data model and the UI is often enormous, especially compared to classical stream tables created with a crosstable statement. If you really need to export such large table constructs, then create them within the script and store them as csv files.
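A hedged sketch of what such a crosstable transformation could look like for the PERSON load above (field names are taken from your script, the temporary table name is illustrative):

TMP_PERSON:
CrossTable(InjuryPositionNo, InjuryPosition, 3)
Load Distinct
    Num(Accident_nr) as Person_OlycksID,
    XSweref as Person_X_sweref,
    YSweref as Person_Y_sweref,
    Injuryposition1,
    InjuryPosition2,
    InjuryPosition3,
    InjuryPosition4,
    InjuryPosition5,
    InjuryPosition6
From UOS.qvd (qvd);

PERSON:
NoConcatenate
Load * Resident TMP_PERSON
Where Len(Trim(InjuryPosition)) > 0;   // keep only the positions that actually hold a value

Drop Table TMP_PERSON;

With this structure an empty Injuryposition simply has no record, so nothing needs to be dropped afterwards.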

If you want to stay with your logic, you should use straight tables with measures so that you can use scrolling, conditions, formatting and so on - but if performance is important, nothing will be faster than a table box.
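If you go the straight-table route, one hedged possibility (assuming your Qlik version offers conditional show/hide of columns) is to give each Injuryposition column a show condition such as:

=FieldValueCount('Person_Injuryposition6') > 0

so that a column only appears when its field holds at least one value.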

- Marcus

Stravan
Contributor III
Author

Hello and thank you once again for the answer.
I meant that the code deletes other fields from the same table (I tested it on one table): it starts with the field that has no values, Injuryposition_5, but then also removes Injuryposition_7, Injuryposition_9, etc., while Injuryposition_6, Injuryposition_8, etc. remain.
marcus_sommer

It's difficult to say what differs from your expectation without knowing the data and the order in which the script statements are executed.

Please check the table name and the field-name query used within the routine. Further, take a closer look at the order in which the statements run. The fieldvaluecount() check within the routine checks the values of a field across the whole data model - not of a field within a certain table.
If you needed to check it within a table, it would require loading the whole table and checking with a count() - with more than 100 fields and a large number of records such an approach would need a lot of resources. Hence the suggestion to use fieldvaluecount(), which only queries the symbol tables. But it requires a certain load order and/or additional measures, such as qualifying the loads and renaming the fields again - see the sketch below.
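A hedged sketch of that qualify idea (the renaming at the end is just an example): with qualified loads the field names become table-specific, so fieldvaluecount() effectively checks only this one table:

Qualify *;
PERSON:
Load Distinct * From UOS.qvd (qvd);
Unqualify *;

// the fields are now called PERSON.Accident_nr, PERSON.InjuryPosition6, ...
// so FieldValueCount('PERSON.InjuryPosition6') refers to this table only;
// after dropping the empty ones the remaining fields could be renamed back, e.g.:
// Rename Field [PERSON.Injuryposition1] to [Injuryposition1];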

I think it would be very helpful, or even necessary, to trace the order of the various load statements and their results by using the debugger and/or the script log and/or TRACE statements within your script. Without understanding what is happening there it will be quite hard to detect any mistakes.
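For example, hypothetical TRACE markers around each load make the execution order and the row counts visible in the script log:

Trace >>> loading PERSON from UOS.qvd;
PERSON:
Load Distinct * From UOS.qvd (qvd);
Trace >>> PERSON loaded with $(=NoOfRows('PERSON')) rows;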

I might be repeating my suggestions from above, but by transforming the crosstables into normal data tables you could avoid many of these efforts ...

- Marcus