Hi!
While running a long job, I discovered that my MSSQL Server connector was committing automatically, even though "Auto-commit" is unchecked.
"Batch size" is not checked either.
I was intrigued at first that my job ran at 120 lines/s, while I can do an easy 300 to 500k lines per second on the same, not especially powerful, laptop.
The job design is pretty basic:
Prejob => open connection (auto-commit off)
Job => read data from a file or DB and write it to the same connection (update/insert most of the time), then commit at the end
Postjob => close connection
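For reference, here is a minimal plain-JDBC sketch of the transaction pattern this design is meant to produce (the connection string, credentials, and table/column names are hypothetical); the tMSSqlConnection/tMSSqlOutput/tMSSqlCommit components should behave equivalently when "Use an existing connection" is set:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ManualCommitSketch {
    public static void main(String[] args) throws Exception {
        // Prejob: open the connection with auto-commit off, so nothing
        // is visible to other sessions until commit() is called.
        String url = "jdbc:sqlserver://localhost:1433;databaseName=demo"; // hypothetical
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            conn.setAutoCommit(false);

            // Job: all writes go through this single connection.
            String sql = "UPDATE target_table SET val = ? WHERE id = ?"; // hypothetical
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int id = 1; id <= 1000; id++) {
                    ps.setString(1, "value-" + id);
                    ps.setInt(2, id);
                    ps.addBatch();
                }
                ps.executeBatch();
                // End of job: one explicit commit for the whole run.
                conn.commit();
            } catch (Exception e) {
                // Without the commit, closing the connection rolls back.
                conn.rollback();
                throw e;
            }
            // Postjob: connection closed by try-with-resources.
        }
    }
}
```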
If I don't commit, it rolls back my transaction.
But to other applications (like Tableau Software, which I was using), the data is visible during the process. This is really annoying, since it can skew analyses depending on whether a load is in progress...
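One way to check whether the driver is actually holding the transaction open (rather than silently auto-committing each statement) is to ask the server from the same session: with auto-commit genuinely off, SQL Server keeps an implicit transaction open, so @@TRANCOUNT should be at least 1 after a write. If it prints 0, the writes were already committed, which would explain why other tools can see the data. A hedged sketch, reusing the same hypothetical connection details:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TranCountCheck {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://localhost:1433;databaseName=demo"; // hypothetical
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement st = conn.createStatement()) {
            conn.setAutoCommit(false);
            st.executeUpdate("UPDATE target_table SET val = 'x' WHERE id = 1"); // hypothetical

            // With auto-commit really off, this session has an open
            // transaction, so @@TRANCOUNT >= 1. A result of 0 means the
            // statement above was committed immediately.
            try (ResultSet rs = st.executeQuery("SELECT @@TRANCOUNT")) {
                rs.next();
                System.out.println("Open transactions in this session: " + rs.getInt(1));
            }
            conn.rollback(); // leave no trace behind the check
        }
    }
}
```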
Any insights?
Hello,
Make sure that you select "Use an existing connection" in your tMSSQLOutput. Screenshots of your job design would be helpful; please mask your sensitive data.
Best regards
Sabrina
Hi!
Here are the important pieces of my job:
Connection in Prejob: (screenshot)
Properties: (screenshot)
Target: (screenshot)
I'm updating on a nonclustered primary key, in an empty table, on a 24C/48T machine with 192 GB of RAM...
My colleague is telling me that it's due to MSSQL Server (I didn't encounter any problem on very small Postgres/Oracle/MySQL DBMSs, where I could easily reach 30k lines/sec in UPSERT, even with millions of lines).
Postgres has this kind of issue if you try to update a table with a foreign key and no index on the key: it runs at 20 lines/sec without the index and 200k with one.
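For what it's worth, the usual fix on the Postgres side is simply indexing the foreign-key column, since Postgres indexes the referenced primary key automatically but not the referencing column. A minimal sketch, with hypothetical database, table, and column names:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class IndexFkColumn {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/demo"; // hypothetical
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement st = conn.createStatement()) {
            // Without this index, every update/delete touching the
            // referenced key forces a scan of the referencing table,
            // which matches the ~20 lines/sec symptom described above.
            st.executeUpdate(
                "CREATE INDEX IF NOT EXISTS idx_child_parent_id"
                + " ON child_table (parent_id)"); // hypothetical names
        }
    }
}
```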
Hope this helps with your research.
Sincerely,