The file system that holds the online REDO logs (usually a Linux file system or an NFS share; the same behavior is seen on Solaris and AIX) caches those logs. Oracle writes these files in “direct” mode and therefore does not refresh the corresponding pages in the system cache.
Qlik Replicate cannot read the REDO logs in “direct” mode because it has no agent running on the database server.
So when Replicate reads them, it can read stale versions of the REDO logs (for example, Replicate might ask for the content of the REDO log with sequence 104 but get the content of REDO log sequence 100, from before the last log switch).
To read the REDO logs reliably in “regular” (buffered) mode, the REDO logs must be evicted from the system cache periodically.
We refer to this internally as the ‘caching problem’. It manifests itself in many different ways.
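For background, “direct” mode on Linux corresponds to the O_DIRECT flag: reads and writes bypass the OS page cache entirely. The following Python sketch shows what a direct (cache-bypassing) read looks like; the path and block size are placeholder assumptions, and O_DIRECT requires block-aligned buffers, which is why an anonymous mmap is used as the read buffer.

```python
import mmap
import os

BLOCK = 4096  # assumed device block size; offset and length must be multiples of it

def read_direct(path: str, offset: int = 0, length: int = BLOCK) -> bytes:
    """Read bytes from `path` while bypassing the OS page cache (Linux O_DIRECT)."""
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    try:
        buf = mmap.mmap(-1, length)        # anonymous mmap: page-aligned buffer
        n = os.preadv(fd, [buf], offset)   # read straight from storage into buf
        return bytes(buf[:n])
    finally:
        os.close(fd)

# Example only: the redo log path below is hypothetical.
print(read_direct("/u01/oradata/ORCL/redo01.log")[:16])
```

Oracle writes the online logs this way; because Replicate has no agent on the database server, it cannot open the files like this and must go through the buffered path, where stale pages may be served.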
Resolution
From the Qlik Replicate side there is unfortunately not much we can do. A lot depends on the file system the customer uses for the redo logs: if it is not mounted for direct I/O (and the data is therefore buffered by the file system cache), this problem can occur.
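As a first diagnostic step, it can help to confirm which file system and mount options actually serve the redo log directory. A minimal Python sketch, assuming a Linux host and a hypothetical redo log path, reads /proc/mounts and reports the mount that contains the logs:

```python
import os

def mount_for(path: str):
    """Return (device, mountpoint, fstype, options) for the mount containing `path`."""
    real = os.path.realpath(path)
    best = None
    with open("/proc/mounts") as f:
        for line in f:
            dev, mnt, fstype, opts, *_ = line.split()
            if real == mnt or real.startswith(mnt.rstrip("/") + "/"):
                # Keep the longest matching mount point (the most specific mount)
                if best is None or len(mnt) > len(best[1]):
                    best = (dev, mnt, fstype, opts)
    return best

# Hypothetical redo log location; an 'nfs' fstype here means reads are
# served through the NFS client cache unless direct I/O is forced.
print(mount_for("/u01/oradata/ORCL"))
```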
Three options can be applied to mitigate this:
The first option requires changes to the file system settings: place the (online) redo log files on an unbuffered (direct I/O) file share. Alternatively, if the business latency requirements allow it, configure Qlik Replicate to use archived redo logs only:
Go to your Source Endpoint Connection
Go to the Advanced tab
Check Use archived redo logs only
The second workaround is to flush the cached redo log data frequently so that Qlik Replicate reads the correct, current blocks; a sketch of such a flush follows below.
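One way to implement this, assuming a Linux host and a hypothetical redo log directory, is to periodically call posix_fadvise with POSIX_FADV_DONTNEED, which asks the kernel to drop the file's cached pages so that the next buffered read fetches current data:

```python
import os
import time

REDO_DIR = "/u01/oradata/ORCL"   # hypothetical online redo log location
INTERVAL_SECONDS = 30            # hypothetical flush interval; tune as needed

def evict_from_page_cache(path: str) -> None:
    """Advise the kernel to drop this file's pages from the page cache."""
    fd = os.open(path, os.O_RDONLY)
    try:
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)  # 0, 0 = whole file
    finally:
        os.close(fd)

while True:
    for name in os.listdir(REDO_DIR):
        if name.endswith(".log"):  # assumed redo log naming convention
            evict_from_page_cache(os.path.join(REDO_DIR, name))
    time.sleep(INTERVAL_SECONDS)
```

Note that this has to run on the machine whose cache serves Replicate's reads (for NFS, that is the host where Replicate runs), and whether the advice is honored depends on the file system, so treat this as an illustration rather than a guaranteed fix.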
The third workaround is to switch the source endpoint to use Oracle LogMiner to access the redo logs; see the illustration below.
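With LogMiner, the redo records are extracted by the Oracle server itself through SQL, so the file system cache on the reader side never comes into play. Replicate drives LogMiner internally once the endpoint is configured for it; purely as an illustration of the mechanism (the connection details and log file path are placeholders), a manual LogMiner session with the python-oracledb driver looks roughly like this:

```python
import oracledb  # python-oracledb driver

# Placeholder connection details, for illustration only
conn = oracledb.connect(user="system", password="secret", dsn="dbhost/ORCLPDB1")
cur = conn.cursor()

# Register a redo log and start LogMiner; the server reads the file itself,
# so no client-side (buffered) file access is involved.
cur.execute("""
    BEGIN
      DBMS_LOGMNR.ADD_LOGFILE(
        LOGFILENAME => :logfile,
        OPTIONS     => DBMS_LOGMNR.NEW);
      DBMS_LOGMNR.START_LOGMNR(
        OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    END;""",
    logfile="/u01/oradata/ORCL/redo01.log")  # example path

# Fetch a few redo records via SQL, served by the database, not the file system
cur.execute("SELECT scn, operation, sql_redo FROM v$logmnr_contents "
            "WHERE ROWNUM <= 5")
for scn, operation, sql_redo in cur:
    print(scn, operation, sql_redo)

cur.execute("BEGIN DBMS_LOGMNR.END_LOGMNR; END;")
conn.close()
```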