Anonymous
Not applicable

Process based on source count data

Hi,

We are using Talend Data Integration 7.x and I have an issue with one of my requirements.

My pipeline consists of the components tDBInput -> tMap -> tDBOutput. A huge volume of data comes in from the source tDBInput (around 10 million rows), so I have to filter it into ranges, for example rows 1 to 1,000,000, then 1,000,001 to 2,000,000, and so on.
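For illustration, each range filter would be a query along these lines in the tDBInput component (my_table, id, and the two context variables are placeholder names of mine, assuming the table has a sequential numeric key):

    "SELECT * FROM my_table WHERE id BETWEEN " + context.rangeStart + " AND " + context.rangeEnd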

Can I process each filter sequentially so that the job does not fail?

I tried row --> Iterate, but an Iterate link cannot be used from tDBInput to tMap.

Please suggest a better approach. I can't run the ranges in parallel, as the data is huge and it might affect the server capacity.

4 Replies
Anonymous
Not applicable
Author

@xdshi, any suggestions?

TRF
Champion II

You may have tDBInput -> tFileOutputDelimited, with the number of rows for each file defined in the tFileOutputDelimited Advanced settings.
This will create from 1 to n files; you can then iterate over these files and do what you want.
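A rough job layout for this approach (component names are from the standard palette; the row count per file is just an example):

Sub-job 1:
    tDBInput --(main)--> tFileOutputDelimited
    (Advanced settings: "Split output in several files", e.g. 1,000,000 rows per file)

Sub-job 2:
    tFileList --(iterate)--> tFileInputDelimited --(main)--> tMap --(main)--> tDBOutput

Each iteration then only processes one file's worth of rows instead of the full 10 million.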
TRF
Champion II

@srkalakonda, does this help?

If so, thanks for marking your case as solved.

Anonymous
Not applicable
Author

@TRF, we cannot use files as an intermediate area. Can we iterate based on parameter values?
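Something like this is what I have in mind, as a sketch only (my_table, id, and the range size of 1,000,000 are placeholders, and it assumes a sequential numeric key):

tLoop --(iterate)--> tDBInput --(main)--> tMap --(main)--> tDBOutput

tLoop (For type): From = 0, To = 9000000, Step = 1000000
tDBInput query:

    "SELECT * FROM my_table WHERE id > " + ((Integer)globalMap.get("tLoop_1_CURRENT_VALUE"))
    + " AND id <= " + (((Integer)globalMap.get("tLoop_1_CURRENT_VALUE")) + 1000000)

An Iterate link into tDBInput is allowed (unlike the row-to-Iterate link I tried from tDBInput), so each range would run as a separate sequential query.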