PraveenP
Partner - Contributor

Experiencing frequent timeouts in Replicate task

Hi,
We are facing a timeout issue and frequently get the error below in a Replicate task. We are using SAP HANA as the source and Microsoft Azure SQL Database as the target.

Target: T___12 (Microsoft Azure SQL Database) Error Code: 1022502 Error details: Error cleaning the log Failed to execute 'Cleanup' statement.

RetCode: SQL_ERROR  SqlState: S1000 NativeError: 613 Message: [SAP AG][LIBODBCHDB DLL][HDBODBC] General error;613 execution aborted by timeout Failed (retcode -1) to execute statement: 'DELETE FROM "x"."attrep_cdc_changes" WHERE "EVENT_TIME" < ? AND "INDX" < ?'

Please help with this issue.



4 Replies
Dana_Baldwin
Support

Hi @PraveenP 

The issue appears to be related to target database performance, or to network performance between Replicate and the target. There are timeout values you can adjust in the target endpoint settings, under the Advanced tab, Internal Parameters:

cdcTimeout, default 600 seconds

executeTimeout, default 120 seconds

loadTimeout, default 1200 seconds

Increasing these to 3x or 4x the default may help, but this is a workaround. You may need to increase the values further, or find and fix the root cause of the performance issue to resolve it.
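As a rough illustration of the scaling Dana suggests (the default values below are the ones quoted in this thread; this is purely a convenience sketch, not a Replicate API):

```python
# Replicate target-endpoint internal-parameter defaults quoted in this thread,
# in seconds.
TARGET_TIMEOUT_DEFAULTS = {
    "cdcTimeout": 600,
    "executeTimeout": 120,
    "loadTimeout": 1200,
}

def scaled_timeouts(defaults, factor=3):
    """Return each timeout multiplied by `factor` (e.g. 3x or 4x the default)."""
    return {name: seconds * factor for name, seconds in defaults.items()}

# With factor=3: cdcTimeout -> 1800, executeTimeout -> 360, loadTimeout -> 3600.
print(scaled_timeouts(TARGET_TIMEOUT_DEFAULTS, factor=3))
```

The resulting values would then be entered by hand in the endpoint's Internal Parameters tab as described above.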

You can type the parameter names in directly (they are case sensitive), or enter an exclamation point and scroll through the list to select each one.

After your edits, click OK to close the window, click Save on the endpoint, and then start or resume the tasks that use this endpoint so the changes take effect (endpoint settings are only read when a task starts).

Hope this helps!

Dana

PraveenP
Partner - Contributor
Author

Hi @Dana_Baldwin ,
We are facing the issue on the SAP HANA source side:

RetCode: SQL_ERROR  SqlState: S1000 NativeError: 613 Message: [SAP AG][LIBODBCHDB DLL][HDBODBC] General error;613 execution aborted by timeout Failed (retcode -1) to execute statement: 'DELETE FROM "x"."attrep_cdc_changes" WHERE "EVENT_TIME" < ? AND "INDX" < ?'
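The timeout is hitting the DELETE against Replicate's CDC staging table on HANA, so a useful first step is to have the source DBA check how large the backlog in that table has grown. A minimal sketch, assuming the schema name "x" from the error message and using SAP's official `hdbcli` Python client (the connection details are placeholders):

```python
# Sketch: build a query a source DBA might run against HANA to gauge the
# backlog in Replicate's CDC staging table. The schema "x" comes from the
# error message above; connection details below are hypothetical placeholders.

def backlog_query(schema: str, table: str = "attrep_cdc_changes") -> str:
    """Build a row-count / oldest-event query for the CDC changes table."""
    return (
        f'SELECT COUNT(*) AS ROW_COUNT, MIN("EVENT_TIME") AS OLDEST_EVENT '
        f'FROM "{schema}"."{table}"'
    )

# Usage (requires a live HANA connection; not run here):
# from hdbcli import dbapi
# conn = dbapi.connect(address="hana-host", port=30015, user="...", password="...")
# cur = conn.cursor()
# cur.execute(backlog_query("x"))
# print(cur.fetchone())
```

A very large or very old backlog would point to the cleanup DELETE scanning too many rows within HANA's statement timeout, which is a database-side performance question rather than a Replicate setting.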

Dana_Baldwin
Support

Hi @PraveenP 

I apologize, my mistake.

There are two timeout parameters you can set on the source endpoint in the same manner as described above:

executeTimeout, default 60 seconds

loadTimeout, default 1200 seconds

Please check with the source DBA on performance.

Thanks,

Dana

sureshreddyaddula
Contributor

Hi Praveen,

We are running into a similar error where HANA is the source. Could you please share how it was resolved? We tried adjusting the parameters, but unfortunately that did not resolve the problem.