Anonymous
Not applicable

How can I append a trailer record to an output data file?

I am able to create my output file, and I need a few columns from that output to be aggregated and written as a trailer record.

 

I am aggregating those few columns and appending the result to the existing output file, but I am getting errors for all the other columns that I already populated when creating the data file.

 

Please advise.

4 Replies
Anonymous
Not applicable
Author

An easy way to do this is to load your data as normal, add a tAggregateRow after the file output, carry out the summing, and store the result in a tHashOutput. Then, in the next subjob, read from a tHashInput and write to the file using another file output component. Set the second file output component to append, and the result will be added to the bottom. As a quick demo, the layout will look similar to this: [screenshot]

In the demo above I generate random data with the tRowGenerator, load it into a CSV file, aggregate a numeric column, and store the result in the tHashOutput. In the second subjob that stored row is read back and simply appended to the end of the file.
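Outside of Talend, the same two-pass idea can be sketched in plain Python. This is a minimal illustration only, with made-up file and column names; the detail rows stand in for the main flow, and the second `open` in append mode plays the role of the second file output component set to "Append":

```python
import csv

# Stand-in for the main data flow (in the demo this comes from tRowGenerator).
rows = [
    {"id": 1, "name": "a", "amount": 10.0},
    {"id": 2, "name": "b", "amount": 32.0},
]

# Subjob 1: write the detail rows and keep the aggregate in memory
# (the equivalent of the first file output + tAggregateRow -> tHashOutput).
with open("out.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for r in rows:
        writer.writerow([r["id"], r["name"], r["amount"]])
total = sum(r["amount"] for r in rows)

# Subjob 2: reopen the same file in append mode and add the trailer record
# (the equivalent of tHashInput -> second file output set to "Append").
with open("out.csv", "a", newline="") as f:
    csv.writer(f).writerow(["TRAILER", len(rows), total])
```

Note that the trailer row deliberately has its own, shorter layout (marker, row count, total); it does not reuse the detail row's columns, which is the point of the tHash step.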

Anonymous
Not applicable
Author

I am still getting the same error for the rest of the columns, which I populated in the data file but do not want in the trailer record, like below:

 

Detail Message: CM_ADDRESS2 cannot be resolved or is not a field

 

Same error for rest of all data fields.

manodwhb
Champion II

@satishinfa, can you show your job design and indicate on which component you are getting this error?

Anonymous
Not applicable
Author

You need to create a different schema from the tAggregateRow component onward. Look at its output schema and include ONLY the columns you want in the trailer. Then, if the aggregate is over the whole data set, do not add a group-by column; just use the appropriate function(s) for the output columns. Your tHash component will then hold one row with the correct schema. After that it is just a case of connecting it to the file output in a different subjob (as I demonstrated above).
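The "CM_ADDRESS2 cannot be resolved" error above is a schema mismatch: the trailer flow is still carrying detail columns that no longer exist after aggregation. A hypothetical sketch of the point (the column names are taken from the error message and invented for illustration):

```python
# The detail schema has columns like CM_ADDRESS2, but after aggregating
# over the whole data set only the summed column(s) survive, so the
# trailer row must use its own reduced schema.
detail_schema = ["CM_ID", "CM_NAME", "CM_ADDRESS2", "CM_AMOUNT"]
trailer_schema = ["CM_AMOUNT_TOTAL"]  # only the aggregated column remains

detail_rows = [
    ("1", "a", "addr", 10.0),
    ("2", "b", "addr", 32.0),
]

# No group-by key: the aggregate collapses the whole set into one row
# whose fields match trailer_schema, not detail_schema.
trailer_row = (sum(r[3] for r in detail_rows),)

assert len(trailer_row) == len(trailer_schema)
```

Referencing `CM_ADDRESS2` in the trailer branch is like indexing `trailer_row` with a `detail_schema` column: the field simply is not there, which is what the Talend error reports.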