yogeshramaswamy
Contributor

latency with Attunity tasks

We are seeing unusual latency on Attunity Replicate tasks after migrating the source database to an Oracle PDB. The affected PDBs are hosted in the Azure West US region. Before the migration, the source was an Oracle instance on a Windows server in the same West US region and it worked fine. Our Attunity Replicate servers are in the Azure East US region.
We are not seeing a similar issue with PDBs hosted in the Azure East US region. All of the PDB servers are set to the UTC time zone, irrespective of the region they are hosted in.

What could be the reason behind this issue?
How does Attunity calculate the time zone, and what role does the region play in that?

2 Solutions

Accepted Solutions
Heinvandenheuvel
Specialist III

In your original configuration the database had its own REDO and ARCHIVE logs.

PDBs (Pluggable Databases) share the REDO (and ARCHIVE) logs with all the other PDBs in the CDB (multitenant container database) they are hosted in.

The typical (default) Replicate change source reader uses the "Replicate Log Reader", also known as the B-file reader, possibly with tweaks such as ASM access, direct file access, or copying the log file to a temporary location. In that configuration Replicate has to read through the entire logs looking for the object IDs of the tables of interest in the appropriate container. After the migration the tasks scan through all changes from all PDBs, which could be a similar amount of work as before, or could be ten times more.
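
To make that cost concrete, here is a minimal conceptual sketch in Python. It is purely illustrative and not Replicate's actual reader: it just shows that every record from every PDB in the shared redo stream has to be scanned, even though only a handful of object IDs in one container are kept.

```python
# Conceptual sketch only -- NOT Replicate's implementation. Illustrates why a
# shared CDB redo stream multiplies the log reader's work: the full stream is
# scanned, and most records are discarded.

from dataclasses import dataclass

@dataclass
class RedoRecord:
    con_id: int      # container (PDB) the change belongs to
    object_id: int   # object ID of the changed table
    payload: bytes

def records_of_interest(redo_stream, wanted_con_id, wanted_object_ids):
    """Yield only the records for our PDB and our tables.

    The reader still has to walk *every* record in the shared redo stream, so
    the I/O cost grows with the total change volume of all PDBs in the CDB,
    not just with the volume of the one PDB being replicated.
    """
    for record in redo_stream:                # full scan: all PDBs, all tables
        if record.con_id != wanted_con_id:    # changes from other PDBs are skipped
            continue
        if record.object_id not in wanted_object_ids:
            continue
        yield record
```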

With that it becomes ever more important to locate the Replicate server geographically close to the source and to tune the log reader settings, perhaps even switching to the 'archived logs only' option to minimize the read-and-retry loop. It seems you are already using Log Stream. Great, as that minimizes the log reading. Maybe you can go even more aggressive on that and have just one Log Stream reader per PDB, no matter how many tasks/tables that stream has to support.
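
If you want to quantify the East US to West US distance before moving anything, a quick sketch like the one below measures the TCP connect round trip from the Replicate server to the source listener. The hostname and port are placeholders for your environment, not values from your configuration.

```python
# Rough cross-region latency check from the Replicate server to the Oracle
# listener. HOST and PORT are placeholders -- substitute your West US listener.
# A few tens of milliseconds per round trip adds up when the log reader polls
# and re-reads frequently.

import socket
import time

HOST = "oracle-westus.example.com"   # placeholder: your CDB/PDB listener host
PORT = 1521                          # default Oracle listener port; adjust if needed

def tcp_connect_times(host, port, attempts=5):
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass                                      # connect only, then close
        samples.append((time.perf_counter() - start) * 1000.0)
    return samples

if __name__ == "__main__":
    times = tcp_connect_times(HOST, PORT)
    print("connect times (ms):", [f"{t:.1f}" for t in times])
    print(f"average: {sum(times) / len(times):.1f} ms")
```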

You could run a test with the LogMiner log reader, but that option is being deprecated. With LogMiner access the log data is filtered by Oracle itself, which creates more load on the DB server but can reduce network traffic significantly.

Please consider engaging Qlik Professional Services for further help.

For further help here, consider sharing the source endpoint JSON so that folks trying to help know exactly which settings are currently in use.
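
If you do share it, scrub credentials first. A minimal sketch, assuming an exported file named source_endpoint.json and some common-looking key names (both the file name and the key names are assumptions; check what your own export actually contains):

```python
# Sketch for sanitizing an exported source endpoint definition before posting
# it. The file name and key names below are assumptions, not Replicate's
# documented schema -- the point is simply: share the settings, not the secrets.

import json

SENSITIVE_KEYS = {"password", "username", "server", "secret"}   # assumed names

def redact(obj):
    """Recursively replace values of sensitive-looking keys with a placeholder."""
    if isinstance(obj, dict):
        return {k: ("***REDACTED***" if k.lower() in SENSITIVE_KEYS else redact(v))
                for k, v in obj.items()}
    if isinstance(obj, list):
        return [redact(item) for item in obj]
    return obj

with open("source_endpoint.json") as f:          # assumed export file name
    endpoint = json.load(f)

print(json.dumps(redact(endpoint), indent=2))    # safe to paste in the thread
```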

Hope this helps some, 

Hein.


lyka
Support

Hello,

We have a couple of articles about performance tuning:

https://community.qlik.com/t5/Knowledge/Latency-Performance-Troubleshooting-and-Tuning-for-Replicate...

and

https://community.qlik.com/t5/Knowledge/General-understanding-of-Qlik-Replicate-Change-Processing-Tu...

Also, if you want to know when to reach out to Professional Services or Support, please refer to this link:

https://community.qlik.com/t5/Knowledge/Qlik-Technical-Support-and-Professional-Services-When-to-rea...

Hope this helps!

Thanks
Lyka

