Hello,
I receive a .csv file from my client. In one of the fields (varchar), the data values themselves contain a comma, for example:
Input data:
oje, ich wachse! (544069130)
Expected output data:
oje ich wachse! (544069130)
Currently the value is being split into two fields:
Field1: oje,
Field2: ich wachse! (544069130)
which is not intended. This happens because it is a comma-delimited file. Can you please suggest how to resolve this issue? I tried multiple functions in tMap, but it is not working.
You can use the Java function replaceAll(). In the tMap, put something like:
input_field.input_value.replaceAll(",", " ")
However, this will throw a NullPointerException if input_field.input_value is null, so wrap it in a null check first; the check below also maps empty strings to null:
(!Relational.ISNULL(input_field.input_value) && !input_field.input_value.equals("") ? input_field.input_value.replaceAll(",", " ") : null)
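If you need this in several mappings, you could also move the logic into a custom routine instead of repeating the ternary in every tMap expression. A minimal sketch, assuming a hypothetical routine named StringCleanup (not an existing Talend routine):

public class StringCleanup {
    // Returns the input with every comma replaced by a space,
    // or null when the input is null or empty.
    public static String stripCommas(String value) {
        if (value == null || value.isEmpty()) {
            return null;
        }
        return value.replaceAll(",", " ");
    }
}

In the tMap expression you would then simply call StringCleanup.stripCommas(input_field.input_value).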
The solution is to read the whole line at once and extract the fields later. Before extracting the fields, you need to replace the misleading comma based on some rule.
It can be hard to define rules for which comma is a delimiter and which comma is part of the content. By far the best way is to improve the source of the data and use enclosures (like ") to encapsulate the field content. Hopefully you have a chance to influence the creation of this csv file.
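To illustrate the "read the whole line first, split by rule later" idea, here is a rough Java sketch of one possible rule (for example inside a tJavaRow fed by a full-row read). The three-column layout and the assumption that only the last column may contain commas are made up purely for the example:

// Example line: id,date,title where only 'title' may contain commas.
String line = "42,2024-01-01,oje, ich wachse! (544069130)";

// Limit the split to 3 fields, so only the first 2 commas act as
// delimiters and everything after them stays in the last field.
String[] fields = line.split(",", 3);

String id = fields[0];    // "42"
String date = fields[1];  // "2024-01-01"
String title = fields[2]; // "oje, ich wachse! (544069130)"

If the file instead uses text enclosures (e.g. "..."), commas inside quotes can be ignored with a regex split such as line.split(",(?=(?:[^\"]*\"[^\"]*\")*[^\"]*$)"), but letting the input component handle the enclosure is usually the cleaner option.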
I got the solution to this issue. Thanks for your responses, guys. There is an option in the tFileInputDelimited component to skip the commas, and it worked for me.
OK, but next time please post an example of the whole line here. Obviously we had a completely wrong idea of what your data looks like.