
Qlik Replicate and how to Replicate very large table having greater than 600 million records

shashi_holla
Support

Last Update: Feb 16, 2022 6:27:07 AM
Updated By: Sonja_Bauernfeind
Created date: Feb 9, 2022 5:29:55 PM

This article describes the suggestions used to replicate a very large Oracle table with more than 600 million records. The full load took around 15 hours but completed successfully and did not fail with the original error ORA-01555: snapshot too old: rollback segment number XX with name "XX" too small.

 

Environment

  • Qlik Replicate
  • Oracle (source endpoint)

Resolution

Each business environment is different, and there is no single method that works for all. The following settings worked in this case and should help in similar scenarios. Tune the Full Load settings to the following values:

  • Transaction Consistency Timeout = 1000
  • Commit Rate during Full Load = 50000
  • Enable the internal parameter bulkArraySize on the target endpoint and set it to 5000
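As a back-of-the-envelope check, the figures reported in this article (600 million rows, roughly 15 hours, the commit rate and bulkArraySize above) imply the following throughput and batching. This is only illustrative arithmetic on the article's numbers, not output from Qlik Replicate itself:

```python
# Figures taken from the article (approximate).
total_rows = 600_000_000     # table size reported in the article
load_hours = 15              # observed full-load duration
commit_rate = 50_000         # Commit Rate during Full Load (setting above)
bulk_array_size = 5_000      # internal parameter on the target endpoint

rows_per_second = total_rows / (load_hours * 3600)
commits = total_rows // commit_rate                  # commits issued on the target
arrays_per_commit = commit_rate // bulk_array_size   # bulk arrays filled per commit

print(f"~{rows_per_second:,.0f} rows/s, {commits:,} commits, "
      f"{arrays_per_commit} bulk arrays per commit")
```

So the load sustained roughly 11,000 rows per second, with each commit on the target covering ten bulk arrays.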

Cause 

This was an environment-specific issue, and we had to work with the resources available at the time. Because the table had many varchar columns, internal memory consumption was high. To avoid this, we tuned the Full Load parameters and, after a few iterations, arrived at the right settings.
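To see why bulkArraySize interacts with wide varchar rows: each bulk array buffers that many rows in memory on the target side, so the buffer footprint scales with average row width. A minimal sketch of the estimate, where the 2 KB average row size is an assumed value for illustration (not a figure from this article):

```python
# Hypothetical memory estimate for one bulk array on the target endpoint.
# avg_row_bytes is an assumption for illustration; measure your own table.
avg_row_bytes = 2 * 1024     # assumed average width of a varchar-heavy row
bulk_array_size = 5_000      # the tuned internal parameter from this article

buffer_mb = bulk_array_size * avg_row_bytes / (1024 * 1024)
print(f"~{buffer_mb:.0f} MB buffered per bulk array at {avg_row_bytes} B/row")
```

With wider rows the same bulkArraySize buffers proportionally more memory, which is why varchar-heavy tables may need a smaller value than narrow tables.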

 

Comments
bryce_leinan
Contributor III

@Sonja_Bauernfeind - we saw a 20% performance increase on one of our tasks after making the bulkArraySize change. We also set our commit rate to 250000.
