Hi,

I am using a one-to-one mapping to load data into an MS SQL Server table. The mapping is simple and contains only two components: tFileInputDelimited for reading the file and tJDBCOutput for inserting into the table.

We have built the same mapping as both a Talend standard job and a big data job. The standard job takes about 10 minutes to load 2 million (20 lakh) records, but the big data job takes more than an hour for the same insert.

In the standard job a commit interval is available; we have set it to 20,000, with a batch size of 50,000. In the big data job only the batch size is available; there is no commit interval.

Can you help us improve the performance of the big data job?

PFA a screenshot of the mapping.
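For readers unfamiliar with how the two settings interact: "batch size" controls how many rows go to the server per round trip, while "commit interval" controls how often the transaction is committed. A minimal sketch of that insert loop, using Python's sqlite3 purely as a stand-in for the MS SQL Server target (this is not Talend's generated code, and the small numbers are illustrative only):

```python
import sqlite3

# Illustrative sketch: how "batch size" and "commit interval" interact
# in a JDBC-style insert loop. sqlite3 stands in for SQL Server here.
BATCH_SIZE = 5        # rows sent to the server per executemany() call
COMMIT_INTERVAL = 10  # rows inserted between explicit commits

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER, val TEXT)")

rows = [(i, f"row-{i}") for i in range(23)]

inserted_since_commit = 0
for start in range(0, len(rows), BATCH_SIZE):
    batch = rows[start:start + BATCH_SIZE]
    # One server round trip per batch instead of one per row.
    conn.executemany("INSERT INTO target VALUES (?, ?)", batch)
    inserted_since_commit += len(batch)
    if inserted_since_commit >= COMMIT_INTERVAL:
        conn.commit()              # periodic commit keeps transactions small
        inserted_since_commit = 0

conn.commit()  # flush the final partial interval
count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
print(count)  # 23
```

Without a commit-interval setting, the whole load may run in one large transaction, which is one plausible reason the big data job behaves differently.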
- Do you use the same JDBC driver in both jobs?
- Do you have the same network topology for the standard and big data jobs?

As a workaround, use tRunJob to call a standard job, or use a bulk insert (preferred).
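On the bulk-insert option: if the job connects through Microsoft's mssql-jdbc driver, recent versions of that driver support a `useBulkCopyForBatchInsert` connection property that routes batched prepared-statement inserts through the bulk copy API. A sketch of the connection URL, assuming that driver (server, port, and database names are placeholders; check your driver version supports the property):

```
jdbc:sqlserver://your-server:1433;databaseName=your_db;useBulkCopyForBatchInsert=true
```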
Hi,
- I need to re-check whether I am using the same JDBC driver.
- I don't understand what "network topology" means here. Can you shed some light?
- We did think of that bypass, but our source file is in HDFS/Azure Blob, so we have to use a big data job.
JDBC performance is very sensitive to network latency. If the target server has a different network latency for each job, the time spent on network operations can differ dramatically. For example, you might be testing the classic job on the LAN while the big data job runs over a WAN.
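The scale of that effect can be sketched with rough arithmetic. The round-trip times below are assumptions for illustration, not measurements:

```python
# Illustrative latency arithmetic (assumed round-trip times, not measurements):
# cost of per-row vs batched inserts for the load described above.
ROWS = 2_000_000        # 20 lakh records
BATCH_SIZE = 50_000
WAN_RTT_S = 0.100       # assumed ~100 ms round trip over a WAN
LAN_RTT_S = 0.001       # assumed ~1 ms round trip on a LAN

per_row_wan = ROWS * WAN_RTT_S                   # one round trip per row
batched_wan = (ROWS // BATCH_SIZE) * WAN_RTT_S   # one round trip per batch
batched_lan = (ROWS // BATCH_SIZE) * LAN_RTT_S

print(per_row_wan / 3600)  # ≈ 55.6 hours of latency alone, per-row over WAN
print(batched_wan)         # 4.0 seconds of latency, batched over WAN
print(batched_lan)         # ≈ 0.04 seconds, batched on LAN
```

So the same batched job is cheap on either network, but anything that effectively serializes small round trips over a WAN can explain an hour-scale slowdown.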