While evaluating the Extend Insert option to boost performance, I've tried different settings for Number of rows per insert. The test table holds 250k rows with 7 columns in a MySQL database, which is not much, and I'm fetching it and inserting it into a Redshift DB.
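For context, the way I understand Extend Insert, the component packs many rows into a single INSERT ... VALUES (...), (...) statement, so every statement sent to the backend gets very large. A rough sketch of the idea in plain JDBC (table, columns, connection details and the batch size are placeholders; this is not the code the component actually generates):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ExtendedInsertSketch {
    public static void main(String[] args) throws Exception {
        // "Number of rows per insert" from the component settings (placeholder value)
        int rowsPerInsert = 5000;

        // Placeholder endpoint, table and columns -- not the real job's values
        String url = "jdbc:postgresql://my-cluster.example.redshift.amazonaws.com:5439/dev";

        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            // Build one INSERT carrying rowsPerInsert VALUES tuples,
            // instead of one INSERT per row
            StringBuilder sql = new StringBuilder("INSERT INTO my_table (col1, col2) VALUES ");
            for (int i = 0; i < rowsPerInsert; i++) {
                sql.append(i == 0 ? "(?, ?)" : ", (?, ?)");
            }

            try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
                int p = 1;
                for (int i = 0; i < rowsPerInsert; i++) {
                    ps.setInt(p++, i);          // col1
                    ps.setString(p++, "value"); // col2
                }
                // A single, very large statement goes over the wire to the backend
                ps.executeUpdate();
            }
        }
    }
}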
When I set Extend Insert's Number of rows per insert to 5000, the job breaks as soon as the first 5000 rows have been fetched from the source and gives me:
"Exception in component tRedshiftOutput_1
org.postgresql.util.PSQLException: An I/O error occured while sending to the backend.
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:283)
at..."
I'm of course looking to use a much higher Number of rows per insert to evaluate whether tRedshiftOutput is usable at all.
Any suggestions on what's causing this error?
/Geuder
PS. TOS BD 5.3.1 running on Windows
Hi geuder,
It looks like there is some issue at the DB end when processing that much data at a time; it may be a configuration issue or a network bandwidth problem. Also check whether there is a restriction on data transmission above a certain limit (a firewall issue).
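If the network path is the suspect, one thing that could be tested is opening the connection with TCP keepalive and a socket timeout enabled, so a silent drop turns into an explicit timeout. These are standard PostgreSQL JDBC driver properties; whether they help in your setup is only an assumption, and the endpoint below is a placeholder:

import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectionProbe {
    public static void main(String[] args) throws Exception {
        // tcpKeepAlive / socketTimeout are standard PostgreSQL JDBC driver properties;
        // socketTimeout is in seconds. Endpoint and credentials are placeholders.
        String url = "jdbc:postgresql://my-cluster.example.redshift.amazonaws.com:5439/dev"
                + "?tcpKeepAlive=true&socketTimeout=300";

        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            // Run the same batched insert through a connection like this one; if the
            // problem is a dropped socket, the failure should surface as a read timeout
            // instead of an I/O error while sending to the backend.
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}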
Thank you.
I'm currently running this job locally and on a "vacation-grade" network, so that could be the reason.
I'll run the test again when I'm on a sturdier network.
/Geuder
Hi,
I ran the job on a network with more bandwidth and got the same crash at 5000 rows per insert.
So that leaves, I guess:
- configuration issue
- data transmission restrictions/limits
Any suggestions on where to look or how to test these?
/Geuder
The server configuration parameter statement_timeout (which aborts any statement that takes over the specified number of milliseconds) is set to 0, meaning the limitation is turned off.
So that's not what's producing the:
"Exception in component tRedshiftOutput_1
org.postgresql.util.PSQLException: An I/O error occured while sending to the backend.
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:283)
at..."
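For completeness, the parameter can also be checked from the client side over the same JDBC route. A minimal sketch, assuming SHOW behaves against the cluster the same way it does against plain PostgreSQL, with placeholder connection details:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class StatementTimeoutCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and credentials
        String url = "jdbc:postgresql://my-cluster.example.redshift.amazonaws.com:5439/dev";

        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SHOW statement_timeout")) {
            // 0 means statements are never aborted because of elapsed time
            while (rs.next()) {
                System.out.println("statement_timeout = " + rs.getString(1));
            }
        }
    }
}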