There are two major methods:
1. Cluster the input data sets by timestamps or IDs and start multiple parallel jobs, one per closed value range.
2. Do not cluster the data sets and instead use the parallelisation feature of Talend (only available in the enterprise edition).
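To illustrate the first method, here is a minimal sketch of splitting a closed ID range into clusters, one per parallel job. The class and method names (`RangeSplitter`, `splitRange`, `IdRange`) are illustrative, not part of Talend:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split a closed ID range [min, max] into n clusters,
// one per parallel transfer job. Illustrative names, not Talend API.
public class RangeSplitter {
    public record IdRange(long from, long to) {}

    public static List<IdRange> splitRange(long min, long max, int clusters) {
        List<IdRange> ranges = new ArrayList<>();
        long total = max - min + 1;
        long base = total / clusters;       // minimum size of each cluster
        long remainder = total % clusters;  // first `remainder` clusters get one extra
        long start = min;
        for (int i = 0; i < clusters; i++) {
            long size = base + (i < remainder ? 1 : 0);
            if (size == 0) break;           // more clusters requested than values
            long end = start + size - 1;
            ranges.add(new IdRange(start, end));
            start = end + 1;
        }
        return ranges;
    }

    public static void main(String[] args) {
        // Split IDs 1..100 into 4 closed ranges: 1-25, 26-50, 51-75, 76-100
        splitRange(1, 100, 4).forEach(System.out::println);
    }
}
```

The same splitting works for timestamps by treating them as epoch values.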
I prefer the first method because it lets you clearly estimate how long the full transfer will take, and if one data cluster fails, the others are usually unaffected. The failed cluster can simply be reprocessed.
For the first method it is a good design to have a kind of plan table into which you insert one row per input data cluster. The actual transfer jobs read the plan table and each process one of its entries.
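The plan-table pattern above can be sketched as follows. In practice the plan table lives in a database and the transfer jobs are Talend jobs; this in-memory version (all names hypothetical) only illustrates the claim-and-process cycle and the per-cluster status that makes retrying a failed cluster easy:

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of the plan-table pattern: each entry describes one data cluster;
// worker jobs claim a pending entry, process it, and record the outcome.
public class PlanTableDemo {
    public enum Status { PENDING, DONE, FAILED }

    public static class PlanEntry {
        final long fromId, toId;           // the closed value range of this cluster
        volatile Status status = Status.PENDING;
        PlanEntry(long fromId, long toId) { this.fromId = fromId; this.toId = toId; }
    }

    public static void runJobs(List<PlanEntry> plan, int workers) throws InterruptedException {
        BlockingQueue<PlanEntry> queue = new LinkedBlockingQueue<>(plan);
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                PlanEntry e;
                while ((e = queue.poll()) != null) {   // claim the next pending entry
                    try {
                        transfer(e);                   // the actual ETL work for this cluster
                        e.status = Status.DONE;
                    } catch (Exception ex) {
                        e.status = Status.FAILED;      // only this cluster needs a retry
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }

    static void transfer(PlanEntry e) {
        // placeholder for transferring the rows in [fromId, toId]
    }

    public static void main(String[] args) throws InterruptedException {
        List<PlanEntry> plan = List.of(new PlanEntry(1, 1000), new PlanEntry(1001, 2000));
        runJobs(plan, 2);
        plan.forEach(e -> System.out.println(e.fromId + "-" + e.toId + ": " + e.status));
    }
}
```

With a real database as the plan table, a status column serves the same purpose: rerunning the job for all FAILED rows repeats only the affected clusters.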