bindupenmatsa
Partner - Contributor II

Log processing speed of CDC task

The DBA provided 1500 tables for replication, for which we have separate endpoint connections for full load and CDC. The full load is complete, and we started CDC in batch mode 12 days ago. We have CDC logs from 16 June to date, and the task is currently still processing the logs of 18 June. We request your assistance in improving the speed and performance of the CDC task.
Task settings for CDC (consolidated as a sketch after this list):
1) Enabled "Apply changes using SQL MERGE".
2) Transaction offload tuning: offload transactions in progress to disk if:
   - Total transactions memory size exceeds (MB): 5000
   - Transaction duration exceeds (seconds): 60000
3) Apply batched changes in intervals:
   - Longer than (seconds): 59
   - But less than (seconds): 60
   - Force apply a batch when processing memory exceeds (MB): 2048
4) Started the CDC task from the SCN provided by the DBA.
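
For reference, here are the same settings consolidated in one place. This is a sketch only: the key names are made up for readability and are not Qlik Replicate's actual task-export schema.

```python
# Illustrative summary of the CDC task settings above.
# NOTE: key names are hypothetical, not Replicate's export format.
cdc_task_settings = {
    "apply_changes_using_sql_merge": True,  # see replies: unsupported for Oracle sources
    "offload_transactions_to_disk_if": {
        "total_transactions_memory_exceeds_mb": 5000,
        "transaction_duration_exceeds_seconds": 60000,  # ~16.7 hours
    },
    "apply_batched_changes_in_intervals": {
        "longer_than_seconds": 59,
        "but_less_than_seconds": 60,
        "force_apply_when_memory_exceeds_mb": 2048,
    },
    "start_mode": "from SCN provided by the DBA",
}
```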

PFA logs and diagnostic package.

6 Replies
Heinvandenheuvel
Specialist III

>>  we have separate endpoint connections for full load and CDC. The full load is completed and we have started CDC

WHY separate? You'd better have a really good reason, otherwise that's an exceedingly dumb configuration.

Just make it a normal full+cdc task and start a fresh load. Forget about the time wasted loading and catching up so far. Forget about all the lovely planning and start over.

Hein.

 

bindupenmatsa
Partner - Contributor II
Author

We were not given ASM privileges, and the redo logs are stored outside ASM. We were given separate endpoint connections for Full Load and CDC, so we are reading from a backup folder outside ASM, starting from the SCN provided by the customer for CDC.
We are running this CDC task in batch mode with SQL MERGE enabled to resolve a billing spike on BigQuery [target].

Heinvandenheuvel
Specialist III

Trying to circumvent the need for ASM access is not a good reason. Just say NO.

Either your work is important enough and you get the right credentials, or it is not. This is not your battle; let management fight this one out. DBAs are protective, and they should be, but in the end they are servants who need to listen to business requirements.

The DBAs do realize that once they give access to redo logs on alternative storage, the tool is going to see ALL the data come by, from every table. They are not really giving away anything more with SYSASM, except their 'pride'.

The DBAs will have to trust Replicate usage at some point. The best way to get there is for THEM to trace the access and for YOU to share a sample reptask log with a bit of startup, with SOURCE_UNLOAD and SOURCE_CAPTURE set to DEBUG (or VERBOSE for a very short time). A rough way to pull read-rate numbers out of such a log is sketched below.
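
A minimal sketch of that log triage, assuming each reptask log line carries an ISO-style timestamp and a [SOURCE_CAPTURE ] component tag; verify the line format against your own logs, since the regex here is an assumption rather than a documented format:

```python
# Estimate how fast SOURCE_CAPTURE is producing log lines in a reptask log.
# Assumed line shape (check yours): "00012345: 2024-06-18T10:00:00 [SOURCE_CAPTURE ]T: ..."
import re
import sys
from datetime import datetime

TS_RE = re.compile(r"(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})")

first = last = None
count = 0
with open(sys.argv[1], encoding="utf-8", errors="replace") as f:
    for line in f:
        if "[SOURCE_CAPTURE" not in line:
            continue
        m = TS_RE.search(line)
        if not m:
            continue
        last = datetime.strptime(m.group(1), "%Y-%m-%dT%H:%M:%S")
        first = first or last
        count += 1

if first and last and last > first:
    span = (last - first).total_seconds()
    print(f"{count} SOURCE_CAPTURE lines over {span:.0f}s (~{count / span:.1f} lines/s)")
```

Run it as, e.g., python capture_rate.py reptask_mytask.log (file name is an example); a very low rate during steady capture points at the read side rather than the apply side.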

SwathiPulagam
Support

Hi @bindupenmatsa ,

 

For Oracle as a source, SQL MERGE is not supported.
The Apply changes using SQL MERGE and Optimize inserts options are not supported with the following source endpoints:

  • Salesforce
  • Oracle

Below is the user guide link for your reference:

https://help.qlik.com/en-US/replicate/November2023/Content/Global_Common/Content/SharedEMReplicate/C...

 

Thanks,
Swathi

SushilKumar
Support

Hello @bindupenmatsa 

Qlik Replicate is not just about installing the product and starting a task; it is more about proper solution design, where the goal is to replicate data from one DB to another.

During configuration of the involved endpoints, the recommended setup is clearly stated. If the customer opts for alternate settings, then they have to compensate with performance or latency.

Reading from ASM is far faster than reading from a normal file system (an alternate location such as a regular folder or disk).

The parameters mentioned above speed up processing after data capture/scan. Here it seems QR is not able to read or scan as fast as it should, as if the alternate location has a network constraint or the disk has a lower read speed; a quick raw-read check is sketched below.
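
A minimal way to sanity-check the raw read speed of that alternate location, assuming you can point a script at one archived log file (the path below is hypothetical):

```python
# Time a sequential read of one archived redo/backup file from the
# non-ASM location. PATH is a placeholder; substitute a real file.
import time

PATH = "/backup/archivelogs/redo_0001.arc"
CHUNK = 8 * 1024 * 1024  # read in 8 MiB chunks

total = 0
start = time.monotonic()
with open(PATH, "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.monotonic() - start
print(f"{total / 1e6:.0f} MB in {elapsed:.1f}s = {total / 1e6 / elapsed:.1f} MB/s")
```

Note that the OS page cache can inflate the number on a re-read; test with a file the server has not touched recently.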

For such cases, the customer should engage PS (paid services) for a correct solution design and the capacity required to process the intended volume of data.

Regards,

Sushil Kumar 

SachinB
Support

Hello @bindupenmatsa ,

As discussed over the call, we have seen that the read speed from Oracle is very low at your end. The latency build-up depends on your read rate: if reading 50 MB takes more than half a second, that points to either a network or a disk I/O issue.

We consider 100 MB per second a good read speed. However, your task is taking 5 minutes to process just 50 MB of data. Kindly check with your network team.
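
For reference, a back-of-the-envelope check of the numbers above, using the figures from this thread as assumptions:

```python
# Figures taken from this thread; treat as approximate.
observed_mb, observed_s = 50, 5 * 60   # 50 MB processed in 5 minutes
good_mb_per_s = 100                    # rule-of-thumb "good" read speed

rate = observed_mb / observed_s
print(f"observed {rate:.2f} MB/s vs good {good_mb_per_s} MB/s "
      f"(~{good_mb_per_s / rate:.0f}x slower)")

# Lag implication: roughly 2 days of redo (16-18 Jun) processed in 12 days,
# i.e. the task runs at about 1/6 of real time, so latency keeps growing.
days_of_log, days_elapsed = 2, 12
print(f"processing speed is about {days_of_log / days_elapsed:.2f}x real time")
```

At those rates the task can never catch up; the read path has to get faster before any apply-side tuning matters.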

 

The best architecture is to run full load and CDC from the same task.

Regards,

Sachin B