Hi,
I need to parse a pipe-delimited file with the structure below and load it into a database using Talend.
Col1|Col2|Col3
"abc|111"|100|"zzz"
"xyz|222"|200|"yyy"
I am using a dynamic schema because there are multiple files with different schemas (but all files follow the structure above).
Issue: it is not able to parse the entire content inside double quotes (abc|111) as a single field.
Please help.
Thanks
Maybe something like this?
Load Col1,Col2,Col3
From file.txt (unicode, txt, delimiter is '|', msq, embedded labels)
Update: please disregard my suggestion; it is a Qlik Sense answer. I didn't notice until now that "Design and Development" is a subcategory within the Talend section of the community.
Hello
Check the 'CSV Options' checkbox if you are using the tFileInputDelimited component to read the files.
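For context, what that option enables is CSV-style parsing, where double quotes act as text enclosures, so a pipe inside quotes stays part of the field instead of splitting it. A minimal Python sketch of the equivalent parsing behavior (the inline sample data mirrors the file in the question; it is not Talend code):

```python
import csv
import io

# Sample data matching the structure in the question: pipe-delimited,
# with double quotes enclosing fields that contain the delimiter.
raw = '''Col1|Col2|Col3
"abc|111"|100|"zzz"
"xyz|222"|200|"yyy"
'''

# CSV-style parsing: '|' as the delimiter, '"' as the text enclosure.
reader = csv.reader(io.StringIO(raw), delimiter='|', quotechar='"')

for row in reader:
    print(row)
# The quoted pipe is kept inside the first field:
# ['Col1', 'Col2', 'Col3']
# ['abc|111', '100', 'zzz']
# ['xyz|222', '200', 'yyy']
```

Without the quotechar (i.e. with the CSV option off), the same line would split into four fields, which is the symptom described above.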
Hi,
Thanks for the reply.
I tried that, but there is no difference in the output of the two tLogRow components.
Can you please share the schema you have defined inside the tExtractDynamicFields component?
Otherwise, I cannot find any difference between your mapping and mine.
From your screenshot, I can see that abc|111 is read as a single field, which is the desired result. tExtractDynamicFields is used to extract fields from a Dynamic schema.
Do I need to create a static schema in the tExtractDynamicFields component?
If yes, then the issue is that the schema will be different for each file.
Below is the schema of tExtractDynamicFields.
I just used tExtractDynamicFields to extract the fields and show you that the data is read correctly. Since the schemas differ across your files, you should use the Dynamic schema, and you don't need a tExtractDynamicFields component at all. What is your target app?
Hi,
I need the output to look like the screenshot below.
Is this possible without providing a static schema in tExtractDynamicFields?
Thanks
tLogRow is used to print the data to the console. There is only one column with a Dynamic schema, so the data printed on the console looks like:
If you output the data to a file or database instead, the data will be split into multiple fields:
Col1;Col2;Col3
abc|111;100;zzz
xyz|222;200;yyy
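To illustrate the point about output: once the quoted input is parsed correctly, writing it back out with a different delimiter yields separate fields, as in the sample above. A hedged Python sketch of that round trip (the inline data and the semicolon output delimiter are taken from the thread; this is an illustration, not Talend-generated code):

```python
import csv
import io

# Quoted, pipe-delimited input as described in the thread.
source = '''Col1|Col2|Col3
"abc|111"|100|"zzz"
"xyz|222"|200|"yyy"
'''

out = io.StringIO()
reader = csv.reader(io.StringIO(source), delimiter='|', quotechar='"')
# Write the parsed rows with ';' as the field separator, quoting only
# when a field would otherwise be ambiguous.
writer = csv.writer(out, delimiter=';', quoting=csv.QUOTE_MINIMAL,
                    lineterminator='\n')

for row in reader:
    writer.writerow(row)

print(out.getvalue())
# Col1;Col2;Col3
# abc|111;100;zzz
# xyz|222;200;yyy
```

Note that the embedded pipe no longer needs quoting on output, because it is not the output delimiter.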