We receive a daily file encoded as UTF-8 with a BOM, which causes our Talend ETL job to always miss the first row of the file.
Sample Data in File:
P,1234,$10
Q,1235,$20
R,1236,$15
Our actual flow is:
tFileList ==> tFileInputDelimited ==> tReplicate ==> tFilterRow ==> tMSSqlSCD
tFileInputDelimited itself reads all the rows, but once we add tFilterRow it always misses the first row of each file.
The condition for tFilterRow is column0 Equals "P"
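One workaround at the filter level: tFilterRow's advanced mode accepts a Java boolean expression, so stripping the BOM character (U+FEFF, which is what the three BOM bytes decode to in UTF-8) before comparing would let the first row match like any other. A sketch, assuming column0 holds the raw value; the class and method name here are ours, only the condition inside matters:

```java
public class BomFilterDemo {
    // Mirrors a hypothetical tFilterRow advanced-mode condition:
    // remove the BOM character (U+FEFF) and surrounding whitespace
    // before comparing against "P".
    static boolean matchesP(String column0) {
        return column0.replace("\uFEFF", "").trim().equals("P");
    }

    public static void main(String[] args) {
        // The first row of the file carries the BOM when decoded as UTF-8.
        System.out.println(matchesP("\uFEFFP")); // true
        System.out.println(matchesP("Q"));       // false
    }
}
```

In tFilterRow itself, only the body of `matchesP` would be used as the advanced condition.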
When we added a tLogRow, we saw a few special characters prefixed to the first row of every file, e.g. ???P (the three BOM bytes rendered as placeholders).
When we opened the CSV files in Notepad++, it also showed the encoding as UTF-8-BOM.
The Advanced settings of tFileInputDelimited only offer a plain UTF-8 option.
Please let us know how we can process a UTF-8-BOM file in a Talend job.
Thanks & Regards
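Since tFileInputDelimited has no UTF-8-BOM option, a common approach is to strip the BOM from the file before the component reads it, e.g. in a tJava step at the start of the job. A minimal sketch in plain Java; the class and method names are ours, not part of Talend:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;

public class BomStripper {
    // The UTF-8 BOM is the three bytes 0xEF 0xBB 0xBF; Notepad++ reports
    // the encoding as "UTF-8-BOM" when a file starts with them.
    private static final byte[] UTF8_BOM = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF};

    // Rewrites the file in place without the BOM, so a reader configured
    // for plain UTF-8 (like tFileInputDelimited) sees clean data.
    public static void stripBom(Path file) throws IOException {
        byte[] bytes = Files.readAllBytes(file);
        if (bytes.length >= 3
                && bytes[0] == UTF8_BOM[0]
                && bytes[1] == UTF8_BOM[1]
                && bytes[2] == UTF8_BOM[2]) {
            Files.write(file, Arrays.copyOfRange(bytes, 3, bytes.length));
        }
    }

    public static void main(String[] args) throws IOException {
        stripBom(Paths.get(args[0]));
    }
}
```

In a Talend job this could run once per file inside the tFileList iteration, before tFileInputDelimited opens it.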
@Andries Can you please share the details of the local file you are trying to upload (size, number of rows/columns)?
I also reproduced the error with a dataset (10k rows / 32 columns), but I can still open it and view my data.
Can you open and view your dataset?
@Andries Can you please update the Info.plist file (in the /Applications/Talend Data Preparation Free Desktop.app/Contents folder, or right-click the Talend Data Preparation icon and select "Show Package Contents").
Then add the following entry (as in the attached screenshot):
<string>-Dhystrix.command.default.execution.timeout.enabled=false</string>
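For orientation, in a typical Java app bundle the JVM options live in an array inside Info.plist, so the new entry would sit alongside the existing `<string>` options. This fragment is illustrative only; the exact key name depends on how the app is packaged, so match what the attached screenshot and your actual file show:

```xml
<key>JVMOptions</key>
<array>
    <!-- existing options ... -->
    <string>-Dhystrix.command.default.execution.timeout.enabled=false</string>
</array>
```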