steveosog
Contributor II

RDS Oracle source CDC to target stalls / fails to commit changes

CDC accumulates changes from RDS Oracle source but does not apply the changes to the target.

The monitoring of incoming changes shows the expected changes with the details "Accumulating (0 transactions) (until source commit)" and "Applying (6808 transactions) (until target commit)", all of which are "On Disk". These figures do not change over time, indicating that the changes cannot be committed to the target. While the CDC task remains active, the target shows significant continuous processing activity. I have tried both Redshift and Snowflake targets, with the same behaviour and result.

What might cause this behaviour? Any suggestions or recommendations?

 

The RDS Oracle source is configured to use LogMiner supplemental logging with default settings (as per the documentation):

rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD');
rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD', p_type => 'ALL');
rdsadmin.rdsadmin_util.set_configuration(name => 'archivelog retention hours', value => '48');
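To confirm the supplemental logging settings actually took effect on the RDS instance, a quick sanity check (assuming a user with access to `V$DATABASE`) is:

```sql
-- Check database-level supplemental logging status on the RDS Oracle source.
-- SUPPLEMENTAL_LOG_DATA_MIN should report YES, and SUPPLEMENTAL_LOG_DATA_ALL
-- should report YES if the 'ALL' option above was applied.
SELECT supplemental_log_data_min,
       supplemental_log_data_all
  FROM v$database;
```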

Most source tables do not have primary keys defined.
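For reference, a data-dictionary query like the following (a sketch; `MY_SCHEMA` is a placeholder for the actual source schema) lists the tables that lack a primary key:

```sql
-- List tables in the replicated schema that have no primary key constraint.
-- Replace MY_SCHEMA with the actual source schema name.
SELECT t.table_name
  FROM all_tables t
 WHERE t.owner = 'MY_SCHEMA'
   AND NOT EXISTS (
         SELECT 1
           FROM all_constraints c
          WHERE c.owner = t.owner
            AND c.table_name = t.table_name
            AND c.constraint_type = 'P'
       );
```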

The CDC task is configured for "Batch optimized apply" with default settings.

The initial load completes successfully without warnings or errors.

4 Replies
MarinaEmbon
Employee

"Batch optimized apply" works only for tables with a PK.

When a table doesn't have a PK, the task falls back to "one-by-one" apply.

Could you please open a case with the logs attached so that we can check what exactly is happening?

steveosog
Contributor II
Author

I am unable to log a support ticket.

Please refer to my other post: "Qlik Support portal registration for AWS Marketplace subscription Replicate license holders".

Barb_Fill21
Support

Hi Steveosog,

Thanks for the information. As you mentioned, it sounds like all the transactions are "stuck" on the outbound side and are being stored to disk. There are several things you can do:

1. Set the PERFORMANCE logger to TRACE and TARGET_APPLY to VERBOSE, let the task run for about 3-5 minutes, then examine the task log for "one-by-one" mode. One-by-one apply means there is an issue with either the data or the layout (metadata) of the target.

2. Check your task settings: go to Error Handling, then the Apply Conflicts handling policy, and see whether exceptions are being logged to the exceptions table. If so, that can cause latency on the outbound side of Replicate. You may want to change those settings to perform an UPDATE when a duplicate row is inserted, and to perform an INSERT when there is no record to UPDATE on the target. This can help speed up processing when transactions are backed up on disk.

3. Check for network issues or disconnects. You might need some internal parameters to allow a longer execution time when writing to the targets.

4. A full load of a problem table (if you can narrow it down to the table level) can be an option if it supports the business needs, or you can full-load the whole task if that is acceptable to the business.

5. If you are using Snowflake, please be on the latest Snowflake driver, as there have been significant performance improvements recently (for example, 2.21.8 or even 2.22.1 is recommended).

6. If you are using an older release of Replicate, I recommend always being on the most current one (Replicate 6.6 SP6 is current, and 7.0 is close to General Availability in the November timeframe).

7. You mentioned that most tables don't have PKs or UIs (unique indexes). Having them is helpful for most targets, including Snowflake. Without a PK or UI, a DELETE or UPDATE on the target takes much longer to perform and can cause your transactions to spill to disk as well (a traffic jam! 🙂).
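For keyless tables, one option (a sketch; the table and column names below are hypothetical) is to add a primary key or a unique index on the source, so that batched UPDATEs and DELETEs can be applied efficiently on the target:

```sql
-- Hypothetical example: add a primary key to a keyless source table.
ALTER TABLE my_schema.orders
  ADD CONSTRAINT orders_pk PRIMARY KEY (order_id);

-- Where a PK is not feasible, a unique index can serve as the key instead:
CREATE UNIQUE INDEX orders_ui ON my_schema.orders (order_id);
```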

Hope this helps! 

Barb

TirumaleshBabu_R
Contributor

Hi Steveosog,

We are also facing the same issue. Could you share how to resolve or avoid it?

Regards,

Tirumalesh