Hi Talend experts
I am trying to create a job around a table with 20 columns, where one column, deleted_flag, is updated by specific logic that I have built in a separate flow.
Now I need to get those records back into the original table: the remaining columns are sourced from the table and everything is stored in the table together. How should I design this job?
Just to give a little background: MergeintoHistory is the table where all 20 of my columns are stored, and the deleted_flag column is calculated by the separate flow of components below. The end result is that I should be able to source the 19 other columns from MergeintoHistory and the one column (deleted_flag) from the flow below, and get them all together in the same table.
Hopefully this makes sense.
Any help would be really appreciated!
Thanks
Harshal.
@Parikhharshal wrote:
I want to use a table output as an input. How do I do that?
Thanks
Harshal.
Hi Harshal,
You are already using tHashOutput/tHashInput.
That is the answer for small (a very relative term) data sets.
As an alternative, you can use a local (to Talend) CSV file or a database table.
For example: if you are working with only 10,000 rows, the in-memory hash is fine, but if you are comparing 2 million rows you can run into memory issues, especially in a multi-job environment. In that case use a local CSV file instead; it is fast and does not consume memory.
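If it helps to see the idea outside of Talend, here is a small plain-Java sketch of the trade-off (an illustration only, not Talend-generated code; the class name, sample rows and separator are made up for the example). Keeping rows on the heap corresponds to tHashOutput/tHashInput, while spilling them to a local delimited file corresponds to writing with tFileOutputDelimited and reading back with tFileInputDelimited.

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class IntermediateStorageSketch {

    // Small data set: keep the rows on the heap, same idea as tHashOutput/tHashInput.
    static List<String[]> bufferInMemory(List<String[]> rows) {
        return new ArrayList<>(rows); // fast, but heap usage grows with the row count
    }

    // Large data set: spill the rows to a local delimited file,
    // same idea as tFileOutputDelimited/tFileInputDelimited.
    static Path spillToCsv(List<String[]> rows) throws IOException {
        Path tmp = Files.createTempFile("intermediate_", ".csv");
        try (BufferedWriter w = Files.newBufferedWriter(tmp)) {
            for (String[] row : rows) {
                w.write(String.join(";", row));
                w.newLine(); // heap usage stays flat, the disk does the buffering
            }
        }
        return tmp;
    }

    public static void main(String[] args) throws IOException {
        List<String[]> rows = new ArrayList<>();
        rows.add(new String[]{"1", "2017-01-01", "N"});
        rows.add(new String[]{"2", "2017-01-02", "Y"});
        System.out.println("in memory: " + bufferInMemory(rows).size() + " rows");
        System.out.println("temp csv : " + spillToCsv(rows));
    }
}

The same rule of thumb applies in the job: stay with the hash components while everything fits comfortably in memory, and switch the intermediate storage to a local file once the row count makes that doubtful.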
regards, Vlad
Do you have a specific technical issue,
or are you looking for architecture advice?
If it is architecture design:
It is not clear how many rows could come from Redshift and Salesforce.
That affects both speed (execution time) and memory consumption (tHash).