
UTF-8 BOM Encoded File Processing
We receive a daily file encoded as UTF-8 with BOM, and because of this our Talend ETL job always misses the first row of the file.
Sample Data in File:
P, 1234, $10
Q,1235,$20
R, 1236, $15
Our actual flow is:
tFileList ==> tFileInputDelimited ==> tReplicate ==> tFilterRow ==> tMSSqlSCD
tFileInputDelimited itself reads all rows, but once we add tFilterRow, the first row of every file is always missed.
The condition for tFilterRow is column0 Equals "P"
When we added a tLogRow we found a few special characters prefixed to the first row of every file, for example ???P.
When we opened the CSV files in Notepad++ we saw that they are encoded as UTF-8-BOM.
In the Advanced settings of tFileInputDelimited we only have an option for UTF-8.
Please let us know how we can process a UTF-8-BOM file in a Talend job.
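To make the symptom concrete, this is what we suspect happens on the first row: the BOM survives into the first field, so the filter's string comparison fails. The following is only a hypothetical standalone sketch of ours, not Talend's generated code:

// Hypothetical illustration: the BOM character ends up inside column0 of the first row,
// so an exact comparison against "P" no longer matches.
public class BomFilterDemo {
    public static void main(String[] args) {
        String firstRowColumn0 = "\uFEFFP"; // first row of the file: BOM + real value
        String otherRowColumn0 = "P";       // every following row

        // tFilterRow condition: column0 Equals "P"
        System.out.println(firstRowColumn0.equals("P")); // false -> first row is dropped
        System.out.println(otherRowColumn0.equals("P")); // true  -> row is kept
    }
}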
Thanks & Regards

Hello,
So far, the Talend tFileInputDelimited component uses "UTF-8" without BOM. There is a "Custom" option in the Encoding section.
Could you please try it to see if it works?
Best regards
Sabrina

I have tried the Custom encoding type with "UTF-BOM", but it didn't work.
I have even tried "UTF-8-BOM", and that didn't work either.
Please provide a workable solution.
Awaiting your kind response.

Hi,
We are still not able to process UTF-8 BOM files. When we run a job over 10 files, it skips the first row of every file each time. We are waiting for the Talend team to respond to our issue.

Hi,
Talend uses "UTF-8" without BOM. A UTF-8 BOM encoded file contains a three-byte pattern (0xEF 0xBB 0xBF) in the prolog, which is probably not parsed correctly by the tFileInputDelimited component.
Have you already tried the tChangeFileEncoding component to see if it works?
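If that does not help, another option is to strip the three BOM bytes yourself before tFileInputDelimited reads the file, for example from a tJava component or a routine at the start of the job. This is only a rough sketch under that assumption; the class name and file paths below are placeholders, not part of any Talend component:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Sketch: copy a file, skipping a leading UTF-8 BOM (0xEF 0xBB 0xBF) if one is present.
public class StripUtf8Bom {
    public static void stripBom(String inPath, String outPath) throws IOException {
        try (FileInputStream in = new FileInputStream(inPath);
             FileOutputStream out = new FileOutputStream(outPath)) {
            byte[] head = new byte[3];
            int n = in.read(head);
            boolean hasBom = n == 3
                    && (head[0] & 0xFF) == 0xEF
                    && (head[1] & 0xFF) == 0xBB
                    && (head[2] & 0xFF) == 0xBF;
            if (!hasBom && n > 0) {
                out.write(head, 0, n); // no BOM found: keep the bytes we already read
            }
            byte[] buf = new byte[8192];
            int len;
            while ((len = in.read(buf)) > 0) {
                out.write(buf, 0, len);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Placeholder paths -- in a real job these would come from context variables.
        stripBom("C:/data/in/daily.csv", "C:/data/work/daily_nobom.csv");
    }
}

The cleaned copy can then be passed to tFileInputDelimited with the plain UTF-8 encoding.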
Best regards
Sabrina

Hi
tChangeFileEncoding only turns the "<U+FEFF>" from UTF-8-BOM into "?" in the first header of the file, which doesn't help; I need to remove the first 4 characters. I use a dynamic schema to load the CSV file into the DB, and the DB load component reads the header line to get the column names, so the extra "<U+FEFF>" makes the DB load component fail. Is there any way to deal with this?
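A stop-gap I am considering (only a sketch, assuming it would be called from a tJava component before the DB load; the class, method and paths are made up) is to rewrite the file once with the leading U+FEFF dropped, so the header line the load component reads is clean:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Reader;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

// Sketch: rewrite a UTF-8 file, dropping a leading U+FEFF so the header line is clean.
public class RemoveLeadingFeff {
    public static void removeFeff(String inPath, String outPath) throws IOException {
        try (Reader r = new InputStreamReader(new FileInputStream(inPath), StandardCharsets.UTF_8);
             Writer w = new OutputStreamWriter(new FileOutputStream(outPath), StandardCharsets.UTF_8)) {
            int first = r.read();
            if (first != -1 && first != '\uFEFF') {
                w.write(first); // first character is real data, keep it
            }
            char[] buf = new char[8192];
            int len;
            while ((len = r.read(buf)) > 0) {
                w.write(buf, 0, len);
            }
        }
    }
}

It could be called with something like RemoveLeadingFeff.removeFeff(context.inputFile, context.cleanFile), where both context variables are placeholders for your own.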
Thanks,
Bin

Same problem here; nothing from Talend? We need to deal with UTF-8 XML files with a BOM.
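For now we are experimenting with skipping the BOM bytes ourselves before the XML parser sees the stream. This is a rough sketch only (plain Java, e.g. for a routine; the class and method names are made up):

import java.io.IOException;
import java.io.InputStream;
import java.io.PushbackInputStream;

// Sketch: wrap an XML input stream and skip a UTF-8 BOM before handing it to the parser.
public class BomSkipper {
    public static InputStream skipUtf8Bom(InputStream in) throws IOException {
        PushbackInputStream pb = new PushbackInputStream(in, 3);
        byte[] head = new byte[3];
        int n = pb.read(head, 0, 3);
        boolean hasBom = n == 3
                && (head[0] & 0xFF) == 0xEF
                && (head[1] & 0xFF) == 0xBB
                && (head[2] & 0xFF) == 0xBF;
        if (!hasBom && n > 0) {
            pb.unread(head, 0, n); // not a BOM: push the bytes back for the parser
        }
        return pb;
    }
}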
