We are facing unusual latency with Attunity Replicate tasks after migrating the source DB to an Oracle PDB. This is happening for PDBs in the Azure West US region. Before migrating these source DBs to PDBs, we had an Oracle DB instance on a Windows server in the West US region and it was working fine. Right now our Attunity Replicate servers are in the Azure East US region.
We are not facing a similar issue with PDB servers in the Azure East US region. All of the PDB servers are set to the UTC time zone irrespective of their hosted region.
What could be the reason behind this issue?
How does Attunity calculate the time zone, and what is the role of the region in that?
In your original configuration the database had its own REDO and ARCHIVE logs.
PDBs (Pluggable Databases) share the REDO (and ARCHIVE) logs with all other PDBs in the CDB (multitenant container database) they are hosted in.
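You can see that sharing for yourself; a minimal sketch, assuming you can run queries from CDB$ROOT:

-- List every container whose changes land in the one shared redo stream
SELECT con_id, name, open_mode
FROM   v$containers
ORDER  BY con_id;

-- The redo log groups exist once, at the CDB level, for all PDBs above
SELECT group#, thread#, ROUND(bytes / 1024 / 1024) AS size_mb
FROM   v$log
ORDER  BY group#;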
The typical (default) Replicate change source reader is the "Replicate Log Reader", also known as the B-file reader, possibly with tweaks such as ASM, direct file access, or copying the file to a temp location. In that configuration Replicate has to read through the entire logs looking for object IDs for the tables of interest in the appropriate container. After the migration, the tasks scan through all changes from all PDBs, which could be a similar volume to before, or could be ten times more work.
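One way to put a number on that extra work is to compare the redo volume generated per day before and after the migration; a quick sketch, assuming the archived logs are still cataloged (run from the root):

-- Approximate redo generated per day (MB); the log reader must scan
-- this entire volume even if only a few tables are of interest
SELECT TRUNC(first_time) AS log_day,
       ROUND(SUM(blocks * block_size) / 1024 / 1024) AS redo_mb
FROM   v$archived_log
WHERE  first_time > SYSDATE - 30
GROUP  BY TRUNC(first_time)
ORDER  BY log_day;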
With that it is more important than ever to locate the Replicate server close to the source (geographically) and to 'tune' the log reader settings, perhaps even going to the 'archived logs only' setting to minimize the read-and-retry loop. It seems you are already using Log Stream. Great, as that minimizes the log reading. Maybe you can go even more aggressive on that, having just one Log Stream reader for each PDB - no matter how many tasks/tables that stream has to support.
You could run a test with the LogMiner log reader, but that option is being deprecated. With LogMiner access the log data is 'filtered' by Oracle itself, creating more load on the DB server but potentially reducing network traffic significantly.
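To illustrate why LogMiner shifts the filtering to the DB server, here is roughly what a LogMiner-based reader asks Oracle to do; a hand-run sketch using the standard DBMS_LOGMNR package, not Replicate's actual internals, with placeholder file, owner, and table names:

-- Register one archived log and start a LogMiner session
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/u01/arch/thread_1_seq_1234.arc',  -- placeholder path
    options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Oracle evaluates the WHERE clause server-side, so only changes for the
-- tables (and container) of interest cross the network
SELECT scn, operation, sql_redo
FROM   v$logmnr_contents
WHERE  seg_owner = 'APP_OWNER'
  AND  table_name IN ('ORDERS', 'ORDER_LINES')
  AND  src_con_id = 3;  -- the PDB's container ID

BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/

The B-file reader, by contrast, pulls the raw log blocks across the network and does that filtering on the Replicate server.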
Please consider engaging Qlik Professional Services for further help.
For further help here, consider sharing the source endpoint JSON (credentials redacted) so that folks trying to help know exactly how the endpoint is currently configured.
Hope this helps some,
Hein.
Hello,
We have a couple of articles about performance tuning:
Also, if you want to know when to reach out to Professional Services or Support, please refer to this link:
Hope this helps!
Thanks
Lyka