Hi support,
One of our clients has a massive 24-billion-row table and has configured parallel load correctly. They are seeing 9 billion duplicate records in the target. They're using data segments, and all their data types and settings look fine.
What's odd is that Replicate doesn't show any errors in the logs. Do you know why this could happen or has it occurred before?
Also, they're on a very old version of Replicate (v6.4). Could that be the true cause of this issue?
Thanks,
Mohammed
Hi @MoeE ,
I suggest turning on VERBOSE logging for SOURCE_UNLOAD to check how Replicate sends out the SQL statements. A 24-billion-row table is very large for troubleshooting; I would suggest reproducing the test with a smaller table first.
As for the absence of errors for the duplicate records: Replicate reports errors based on the information returned by the database. If your target table has no primary key or unique index, or your target database (e.g., Snowflake) does not enforce primary keys, duplicate inserts succeed silently and Replicate will not report an error.
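The silent-duplicate behaviour described above is easy to demonstrate outside Replicate. The sketch below (plain Python with SQLite, purely illustrative and not part of Replicate) shows that a table without a primary key accepts duplicate inserts without any error, while an enforced primary key makes the database reject the second insert, which is what gives the replication tool something to report:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Target table WITHOUT a primary key: duplicate rows are accepted silently.
cur.execute("CREATE TABLE target_no_pk (id INTEGER, payload TEXT)")
cur.executemany("INSERT INTO target_no_pk VALUES (?, ?)",
                [(1, "a"), (1, "a")])  # no error raised by the database

dupes = cur.execute(
    "SELECT id, COUNT(*) FROM target_no_pk GROUP BY id HAVING COUNT(*) > 1"
).fetchall()
print(dupes)  # duplicates exist, yet no error was ever returned

# Same insert WITH an enforced primary key: the database rejects the duplicate.
cur.execute("CREATE TABLE target_pk (id INTEGER PRIMARY KEY, payload TEXT)")
cur.execute("INSERT INTO target_pk VALUES (1, 'a')")
err = None
try:
    cur.execute("INSERT INTO target_pk VALUES (1, 'a')")
except sqlite3.IntegrityError as e:
    err = e
print("rejected:", err)
```

The same GROUP BY ... HAVING COUNT(*) > 1 query, adapted to your target's key columns, is also a quick way to confirm and quantify the duplicates on the target side.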
Please be aware that Replicate v6.4 has reached its end of support.
Regards,
Desmond
Tracing would not help much here, since the client is on an End of Life product.
The best course is to plan an upgrade to the current 2022.11 release and see if the issue persists.