Hi,
When writing output files in Talend 7.3 Big Data Spark Batch jobs using tFileOutputDelimited with the delimiter "Ç" and encoding ISO-8859-15, the delimiter Ç either gets appended at the end of the line or is missing entirely, instead of appearing between the columns. As a result, all columns are read back as a single string column.
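The symptom described above is consistent with a charset mismatch between the component that writes the file and the one that reads it back. As a minimal sketch outside Talend (plain Java, with an assumed mismatch of ISO-8859-15 on write vs. UTF-8 on read, since UTF-8 is a common cluster default), this shows how the Ç delimiter becomes unmatchable and the whole record collapses into one column:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class DelimiterEncodingDemo {
    public static void main(String[] args) {
        String line = "xyz\u00C7xyz\u00C7xyz";            // Ç-delimited record, 3 columns
        Charset iso = Charset.forName("ISO-8859-15");

        // Write with ISO-8859-15: Ç is the single byte 0xC7.
        byte[] written = line.getBytes(iso);

        // Read back with the wrong charset (UTF-8): 0xC7 is a malformed
        // sequence in UTF-8, so it decodes to U+FFFD and Ç is gone.
        String readBack = new String(written, StandardCharsets.UTF_8);
        String[] cols = readBack.split("\u00C7");
        System.out.println(cols.length);  // 1 -> everything lands in one column

        // Reading with the same charset used for writing restores all columns.
        String[] ok = new String(written, iso).split("\u00C7");
        System.out.println(ok.length);    // 3
    }
}
```

The same mechanism applies regardless of which two charsets are mismatched; the point is that a single-byte delimiter outside ASCII survives a round trip only when writer and reader agree on the encoding.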
Please help me find a solution.
Thanks,
Jishnu
Hello,
Could you please elaborate on your case with an example showing input and expected output values?
Thank you.
Best regards
Sabrina
Example input: xyzÇxyzÇxyzÇxyz
After removing the fourth column from the input, the result is saved to a file, read back, and displayed with tLogRow.
Expected output:
| tLogRow_1 |
|=------------+----------+---------=|
|newColumn |newColumn1|newColumn2|
|=------------+----------+---------=|
|xyz | xyz | xyz |
'-------------+----------+----------'
Actual output :
| tLogRow_1 |
|=------------+----------+---------=|
|newColumn |newColumn1|newColumn2|
|=------------+----------+---------=|
|xyzÇxyzÇxyz| | |
'-------------+----------+----------'
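The reproduction above (split on Ç, drop the fourth column, write the result back, re-read it) can be sketched outside Talend; the column handling here is illustrative, not taken from the actual job, and it assumes the same ISO-8859-15 charset is used on both sides, which is what produces the expected three-column output:

```java
import java.nio.charset.Charset;
import java.util.Arrays;

public class DropFourthColumn {
    public static void main(String[] args) {
        Charset cs = Charset.forName("ISO-8859-15");
        String input = "xyz\u00C7xyz\u00C7xyz\u00C7xyz";  // four Ç-separated columns

        // Drop the fourth column, as in the job description above.
        String[] cols = input.split("\u00C7");
        String trimmed = String.join("\u00C7", Arrays.copyOf(cols, 3));

        // Simulate writing and re-reading with the SAME charset.
        byte[] onDisk = trimmed.getBytes(cs);
        String[] readBack = new String(onDisk, cs).split("\u00C7");

        System.out.println(readBack.length + " columns: " + Arrays.toString(readBack));
        // 3 columns: [xyz, xyz, xyz] -> matches the expected tLogRow output
    }
}
```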
Hi @Jishnu K K , from which source does the first input come? A database, a file, etc.?
Could you check the encoding of the original source to give us more information?
Send me love and kudos
A file with ISO-8859-15 encoding.
First, you can add -Dfile.encoding="ISO-8859-15" to your JVM settings: Run tab --> Advanced settings --> Specific JVM parameters.
Then check that all reading and writing components use ISO-8859-15,
because Ç is just the result of ç being badly encoded.
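The second piece of advice, making every reader and writer agree on ISO-8859-15, amounts to passing the charset explicitly instead of relying on the JVM default. A minimal sketch (plain java.io, temp file name is illustrative) of an explicit ISO-8859-15 write/read round trip:

```java
import java.io.*;
import java.nio.charset.Charset;

public class Iso885915RoundTrip {
    public static void main(String[] args) throws IOException {
        // Same charset on both sides, stated explicitly rather than
        // inherited from -Dfile.encoding.
        Charset cs = Charset.forName("ISO-8859-15");
        File f = File.createTempFile("demo", ".csv");

        try (Writer w = new OutputStreamWriter(new FileOutputStream(f), cs)) {
            w.write("xyz\u00C7xyz\u00C7xyz\n");           // Ç-delimited record
        }

        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(new FileInputStream(f), cs))) {
            String[] colsRead = r.readLine().split("\u00C7");
            System.out.println(colsRead.length);          // 3 columns, as expected
        }
        f.delete();
    }
}
```

Setting -Dfile.encoding only changes the default; components that let you pick an encoding (such as the Encoding field on tFileOutputDelimited) should still be set to ISO-8859-15 explicitly so the job does not depend on the JVM it happens to run on.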