SushCNX
Contributor

Holding of Oracle archive logs

Hi Team,

We have an instance where the Oracle DBA complains that the Qlik Replicate user was holding archive logs and preventing them from being deleted from the server; because of the resulting high archive log retention, their production database went down.

Is this the expected behavior of Qlik Replicate when working with an Oracle database?

Please explain.

2 Solutions

Accepted Solutions
Dana_Baldwin
Support

Hi @SushCNX 

Qlik Replicate does not directly prevent archived redo logs from being deleted. High-volume activity that generates large and numerous redo logs (e.g., index rebuilds/maintenance) may cause the task to fall behind and need to read from the archived logs. Only files actively being read might be held open and therefore unable to be deleted.

If an archived log is purged that the task needs to read in order to not miss any changes, it will need to be restored and marked as available in v$archived_log, or the task will need to be reloaded.
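If the log has to come back from backup, the usual route is RMAN. A minimal sketch, assuming a standard RMAN session connected to the target database; the sequence numbers and archive path below are illustrative placeholders, not values from this thread:

```sql
-- RMAN: restore a range of purged archived logs from backup
-- (sequence numbers are placeholder examples)
RESTORE ARCHIVELOG FROM SEQUENCE 1001 UNTIL SEQUENCE 1010;

-- If the files were copied back manually, re-catalog them so they
-- show as available in v$archived_log again
CATALOG START WITH '/u01/app/oracle/arch/';
```

After cataloging, the task should be able to resume reading from the restored position.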

Your DBA may need to extend the retention policy and allow adequate disk space for such times of high volume. As a general rule, we recommend retaining 3 days of archived logs to allow for more recovery options if something happens over a weekend, but your needs may vary.
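On the RMAN side, the deletion behavior can be inspected and adjusted to support such a window. A hedged sketch, to be run from an RMAN session; the policy and 3-day window are examples only, not a prescription for this environment:

```sql
-- Show the current archived log deletion policy
SHOW ARCHIVELOG DELETION POLICY;

-- Example: only allow deletion once the log is backed up once to disk
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;

-- Example cleanup job honoring the 3-day guideline above
DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-3';
```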

You can use these Oracle queries to check database changes to understand how the volume of data impacts the task.

Daily change volume:

select trunc(first_time, 'dd') "DATE",
       count(1) "Number of files",
       sum(blocks * block_size)/1024/1024 "Size in MB"
from v$archived_log
group by trunc(first_time, 'dd')
order by trunc(first_time, 'dd') desc;


Log switch frequency (Oracle suggests tuning logs to switch no more than 2-4 times per hour). Please edit the dates in the WHERE clause before running the query:

SELECT
to_char(first_time,'YYYY-MON-DD') day,
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'9999') "00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'9999') "01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'9999') "02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'9999') "03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'9999') "04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'9999') "05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'9999') "06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'9999') "07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'9999') "08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'9999') "09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'9999') "10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'9999') "11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'9999') "12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'9999') "13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'9999') "14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'9999') "15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'9999') "16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'9999') "17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'9999') "18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'9999') "19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'9999') "20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'9999') "21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'9999') "22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'9999') "23"
from v$log_history
where trunc(first_time) between
      trunc(to_date('30.01.2026', 'dd.mm.yyyy'))
  and trunc(to_date('02.02.2026', 'dd.mm.yyyy'))
group by to_char(first_time,'YYYY-MON-DD');

I hope this helps!

Thanks,

Dana


DesmondWOO
Support

Hi @SushCNX ,

Qlik Replicate processes archived logs sequentially, one by one. This should not prevent the DBA from purging archived redo logs.

The retention period is not controlled by Qlik Replicate; however, customers may choose to extend it because latency can delay the reading of redo logs, or to ensure recovery from an earlier timestamp (for example, if a task is stopped due to an error during a holiday). 

If you suspect that Qlik Replicate has locked the archived log, please follow Dana’s comment and review the task log to determine what occurred when the task was reading the archived log. If you remain unsure after reviewing, please open a support ticket for further assistance.
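To check directly on the server whether any process actually holds an archived log open, the DBA can inspect open file handles, e.g. `lsof /path/to/archived.log` (that path is a made-up example). A self-contained Linux sketch of the same technique using /proc, with a temporary file standing in for the archived log:

```shell
# Simulate a reader holding a file open, then detect the holder via /proc.
# In practice, point lsof (or this /proc check) at the archived log path.
f=$(mktemp)
tail -f "$f" &          # stand-in for a process reading the log
holder=$!
sleep 1
if ls -l /proc/"$holder"/fd 2>/dev/null | grep -q "$f"; then
    echo "file $f is held open by PID $holder"
fi
kill "$holder"
```

If no process on the Replicate or source server holds the file, the block is more likely coming from the database's own deletion policy than from Replicate.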

Regards,
Desmond

Help users find answers! Do not forget to mark a solution that worked for you! If already marked, give it a thumbs up!


4 Replies
SushCNX
Contributor
Author

Hi Dana & Desmond,

Thanks for your response. I have learned that the DBA encountered the error below:

RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process.

And according to the DBA, only Qlik Replicate is reading the logs.

 

Regards,

Sushant

 

Dana_Baldwin
Support

Hi @SushCNX 

Is the database configured for Data Guard?

RMAN-08137

warning: archived log not deleted, needed for standby or upstream capture process

Cause

An archived log that should have been deleted was not as it was required by upstream capture process or Data Guard. The next message identifies the archived log.


Action

This is an informational message. The archived log can be deleted after it is no longer needed. See the documentation for Data Guard to alter the set of active Data Guard destinations. See the documentation for Streams to alter the set of active streams.
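One way for the DBA to check for a standby/Data Guard destination is to query the standard `v$archive_dest_status` view; this sketch involves no Replicate-specific objects:

```sql
-- List non-inactive archive destinations; a VALID standby destination
-- suggests Data Guard, which the deletion policy may be protecting
select dest_name, status, type, destination
from v$archive_dest_status
where status <> 'INACTIVE';
```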

copied from: RMAN-08137 - Database Error Messages

Thanks,

Dana