We are using Attunity Replicate version 4.0.9.100 to copy data from Oracle (11.2.0.4) to MySQL (5.7.17).
Now, we plan to upgrade to Qlik Replicate 2023.11, which supports MySQL 5.7.
In the current version, we use this setting:
Task Settings → Error Handling → Apply Conflicts → Duplicate key when applying INSERT → UPDATE the existing target record
We noticed a difference between the two versions when a duplicate key occurs during an INSERT (using Batch Optimized Apply):
Version 4.0.9: performs an UPDATE
Version 2023.11: performs a DELETE followed by an INSERT
Is there a way to make version 2023.11 do UPDATE like the old version?
This is important because some tables in MySQL have fewer primary key columns than the source (for example, source has 2 PK columns, target has 1).
We want to keep using UPDATE like before.
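To make our setup concrete, here is a made-up example of the key mismatch (table and column names are hypothetical, and this is not the SQL Replicate actually generates):

    -- Source (Oracle): PK is (order_id, line_no), so these are two distinct rows:
    --   (order_id=1, line_no=1, qty=10)
    --   (order_id=1, line_no=2, qty=5)
    -- Target (MySQL): PK is (order_id) only
    CREATE TABLE orders (
        order_id INT PRIMARY KEY,
        qty      INT
    );
    -- Replicating both source rows makes the second INSERT hit a duplicate key:
    INSERT INTO orders (order_id, qty) VALUES (1, 10);
    INSERT INTO orders (order_id, qty) VALUES (1, 5);  -- duplicate key on order_id = 1
    -- With the 4.0.9 setting above, this second INSERT was converted into an UPDATE.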
Thank you!
Hello @hitlbs70 ,
Thank you so much for opening this thread!
I'm really excited to see that Replicate 4.0.9 is still running smoothly! Version 4.0 has proven to be one of the most stable major releases in Replicate's history — happy replicating! 😊
Regarding the behavior change you mentioned: you're absolutely right. To improve performance, the current major version of Replicate no longer checks for constraint violations. Instead, it directly applies a DELETE followed by an INSERT operation.
Please allow me some time to further investigate this behavior in detail. I’ll get back to you with more information shortly.
Thank you again for your continued support — it's always greatly appreciated!
Best Regards,
John.
Hello @hitlbs70 ,
Based on my analysis, in current major versions, when the Change Processing Mode is set to Batch Optimized Apply, Qlik Replicate performs a DELETE followed by an INSERT without checking for primary key violations.
In contrast, when the mode is set to Transactional Apply, Qlik Replicate does check for primary key violations. If a violation is detected, it will instead perform an UPDATE operation.
However, please keep in mind that Transactional Apply generally offers lower performance compared to Batch Optimized Apply.
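In illustrative pseudo-SQL (not the literal statements Replicate generates; the table t and its columns are hypothetical), the two modes handle such a conflicting INSERT roughly as follows:

    -- Batch Optimized Apply (current versions): no conflict check, the change
    -- is applied as a DELETE followed by an INSERT:
    DELETE FROM t WHERE id = 1;
    INSERT INTO t (id, val) VALUES (1, 'new');

    -- Transactional Apply: the duplicate key is detected and, with the Apply
    -- Conflicts setting "UPDATE the existing target record", the INSERT becomes:
    UPDATE t SET val = 'new' WHERE id = 1;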
Feel free to let us know if you need any additional assistance.
John.
Hello @john_wang,
Thank you for your quick and clear explanation.
Our customer is worried about performance issues when using Transactional Apply mode.
Is there any way to make Qlik Replicate do an UPDATE instead of DELETE/INSERT while still using Batch mode?
Thank you.
Hello @hitlbs70 ,
I don't believe so, and it appears there are no parameters available to control this behavior.
Could you please open a support ticket? We'd like to confirm this with our R&D team.
Bottom line: there is currently no way to influence this behavior in the existing versions. Therefore, we recommend submitting a feature request.
Thanks,
John.
Hi @hitlbs70
You can open a feature request directly with our Product Managers here: https://community.qlik.com/t5/Ideas/idb-p/qlik-ideas
Thanks,
Dana
@hitlbs70 For sure, you can submit an improvement request as @Dana_Baldwin suggested.
But it is not going to happen, or at least not in general.
Now, if the target supports a MERGE function, or whatever target-provided method flips an INSERT of an already-existing record into UPDATE semantics, then there is some hope of getting that for a specific target.
Oracle has a MERGE statement to do something like that, but best I can tell the MySQL equivalent is "INSERT ... ON DUPLICATE KEY UPDATE" ( https://www.geeksforgeeks.org/upsert-in-mysql )
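For what it's worth, a minimal sketch of that MySQL upsert (made-up table t, works on 5.7):

    INSERT INTO t (id, val) VALUES (1, 'new')
        ON DUPLICATE KEY UPDATE val = VALUES(val);
    -- VALUES(val) picks up the value from the attempted INSERT; MySQL 8.0.19+
    -- offers a row alias for this instead, but 5.7 still uses VALUES().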
Anyway, if you want updates now, then you have to drop the "Task Settings → Error Handling → Apply Conflicts → Duplicate key when applying INSERT → UPDATE the existing target record" setting.
Replicate has been there, done that, using updates. The original batch apply code actually did the UPDATE. But when Replicate found that a row to update did not exist, it first had to stumble through a whole batch of inserts failing because 1 (or more) failed. It then switched to one-by-one mode to find out which one failed (I once made an engineering request to do a binary search for that ;-).
The result is that you effectively ended up running in transactional mode anyway all too often. Performance for Tx mode will perhaps be acceptable for an OLTP target like Oracle or SQL Server, but it will be abysmal for BI targets like Snowflake.
>> Our customer is worried about performance issues when using Transactional Apply mode.
Rightly so.
Hein.
Hello @john_wang, @Dana_Baldwin, @Heinvandenheuvel,
We will discuss this with the customer and create a separate task in Transactional Apply mode only for the tables whose primary keys differ.
If UPDATE handling for duplicate keys in batch mode is needed in the current version, we will follow your suggestion and submit a feature request separately.
Thank you for your quick response.
Best regards.
Thank you so much for your notable support! @hitlbs70