david_lange
Contributor II

How to troubleshoot large read times from archived Redo logs

We have seen a recent spike in Oracle archived Redo log read times.

What is a troubleshooting technique for investigating this?

For example, 16 seconds to read 512 MB:

Completed to read from archived Redo log 512,000,000 bytes at offset 000000001e848200 with rc 1, read time is 16272 ms, thread '2' (oradcdc_redo.c:1043)

Our Oracle DBAs don't see any issues.

3 Replies
Heinvandenheuvel
Specialist III

@david_lange , good job getting the 'PERFORMANCE' line from the log. I assume this is just one example of many similar lines. You want to average them out for a full impact analysis; I have a (Perl) script to help with that if you like.

512 MB in 16 seconds is about 31 MB/sec, or roughly 250 Mbit/sec. Not the max for a gigabit link, but already 'up there'. What is the best (and worst) you have seen?
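
For a quick sanity check, that rate comes straight from the numbers in the quoted line (512,000,000 bytes in 16272 ms):

perl -e 'printf "%.1f MB/sec, %.0f Mbit/sec\n", 512_000_000/16272/1000, 512_000_000*8/16272/1000'

That prints 31.5 MB/sec, 252 Mbit/sec.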

Can you provide more details on the source configuration and network connectivity? Is it better than a 1 Gb link?

What is the exact source platform? (For example, I know of issues with 'demoted' IOs on AIX when reading (active) redo logs.)

What is the Oracle Redo/Archive configuration? ASM? Smaller but high-priority/high-speed Redo storage vs. bigger but slower Archive storage?

The sample line shows reading from Archive logs. Is that by design, or did the task fall behind reading the active redo?

Kindly provide the Source Endpoint JSON definition so we can see whether there is anything 'odd' in there, or indeed performance options missing that could be tried.

Hope this helps,

Hein


david_lange
Contributor II
Author

Thanks Hein. Please provide the Perl script so I can get the full impact.

I suspect it runs against the log files. Is that correct?

I will get the answers to the other questions and the endpoint JSON.


Thanks

-Dave

Heinvandenheuvel
Specialist III

The attached Perl script can be useful for evaluating Oracle Redo/Archive read performance.

Be sure to read the help (-h) carefully, notably the MB/sec vs. XB/sec distinction and the -g option for selecting a time window.

Start with:

perl oracle_read_performance.pl -h

Provide a Replicate task log file with PERFORMANCE data as input.
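
If you want the gist before opening the attachment, here is a minimal sketch of the idea. This is not the attached script, just an illustration; the match pattern is assumed from the single log line quoted earlier in this thread. It scans a task log for the Redo/Archive read-completion lines and reports per-read and average throughput:

#!/usr/bin/perl
use strict;
use warnings;

# Minimal sketch, NOT the attached oracle_read_performance.pl:
# scan a Replicate task log for Redo/Archive read-completion lines
# and report per-read plus average throughput.
my ($total_bytes, $total_ms, $count) = (0, 0, 0);
while (my $line = <>) {
    # e.g. "Completed to read from archived Redo log 512,000,000 bytes
    #       at offset ... with rc 1, read time is 16272 ms, thread '2'"
    next unless $line =~ /Completed to read from .*?([\d,]+) bytes.*?read time is (\d+) ms/;
    (my $bytes = $1) =~ tr/,//d;    # strip thousands separators
    my $ms = $2;
    next unless $ms > 0;
    printf("%8.1f MB/sec  (%s bytes in %s ms)\n", $bytes / $ms / 1000, $1, $ms);
    $total_bytes += $bytes;
    $total_ms    += $ms;
    $count++;
}
die "no matching read-completion lines found\n" unless $count;
printf("average: %.1f MB/sec over %d reads\n", $total_bytes / $total_ms / 1000, $count);

Feed it a task log, for example: perl redo_read_sketch.pl reptask_mytask.log (the script and log file names here are placeholders). For real analysis use the attached script, which also handles the time-window filtering.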

Enjoy!

Hein.