Hi All,
Use case:
I have data coming in from a file, for example:
RawDataA, A
RawDataB, B
RawDataC, C
.
.
RawDataZ, Z
Now I want to store "RawDataX" at the location corresponding to its X value:
/X/RawDataX
Note:
I don't want to create 26 tFileOutputDelimited components in the job.
Is there any way I can use a single tFileOutputDelimited for all records?
Heads up:
In DI, we can use tFlowToIterate and a context variable in tFileOutputDelimited to meet this requirement.
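In plain code terms, and only as a rough illustration (the paths, column order, and comma delimiter are assumptions), that DI pattern amounts to iterating over the rows and building the output path from the key column:

import java.io.{File, FileWriter}
import scala.io.Source

object IterateAndWrite {
  def main(args: Array[String]): Unit = {
    // Read the delimited input line by line (assumed location).
    val lines = Source.fromFile("/input/rawdata.csv").getLines()

    // For each record, derive the target directory from the key column
    // (the role the context variable plays in tFileOutputDelimited)
    // and append the record to a file named after the raw data value.
    for (line <- lines if line.trim.nonEmpty) {
      val Array(rawData, key) = line.split(",").map(_.trim)
      val dir = new File(s"/$key")                               // e.g. /A
      dir.mkdirs()
      val writer = new FileWriter(new File(dir, rawData), true)  // e.g. /A/RawDataA
      try writer.write(line + System.lineSeparator())
      finally writer.close()
    }
  }
}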
Can anyone give some ideas on how to implement the same thing in a Spark or MapReduce job?
Hello,
So far, tFlowToIterate is available in Standard ETL jobs only.
Here is a KB article about Spark dynamic context: https://community.talend.com/t5/Architecture-Best-Practices-and/Spark-Dynamic-Context/ta-p/33038
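As one more idea, and only as a sketch with assumed paths and column names: in a Spark batch job a similar per-key layout can often be produced with DataFrameWriter.partitionBy, which writes one sub-directory per distinct value of the partition column:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, trim}

object PartitionedOutput {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("PartitionedOutput")
      .getOrCreate()

    // Read the comma-delimited input; the column names are assumed for illustration.
    val df = spark.read
      .csv("/input/rawdata.csv")
      .toDF("rawData", "key")
      .withColumn("key", trim(col("key")))   // the sample rows have a space after the comma

    // partitionBy writes one sub-directory per distinct key value
    // (named key=<value> by default, e.g. /output/key=A/); the key column
    // itself is dropped from the data files, so each file holds only the raw data.
    df.write
      .partitionBy("key")
      .csv("/output")

    spark.stop()
  }
}

If the exact /X/RawDataX layout (without the key= prefix) is required, a post-processing rename step, or the dynamic-context approach from the KB article above, would still be needed.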
Hope it will help.
Best regards
Sabrina