Pranita123
Partner - Contributor III

Oracle 19c as source and HDFS as target

Hello,

We are encountering the following error while loading data from Oracle as the source to HDFS as the target:

"Handling the end of table 'db.tablename' loading failed by subtask 1, thread 1.

Failed to handle a special table.

Failed to execute the 'insert into select' command. Return Code: SQL_ERROR, SqlState: HYT00, NativeError: 72. Message: [Cloudera][Hardy] (72) Query execution timeout expired. Failed (retcode -1) to execute the statement: INSERT INTO TABLE db.tablename  SELECT field1, field2  FROM db.tablename_att_tmp"

Task details: Full Load + Store Changes (target file format: Parquet).

Could you please provide some insight into the reasons behind this issue?

 

Regards,

Pranita

7 Replies
aarun_arasu
Support

Hello @Pranita123 ,

Thanks for reaching out to the Qlik Community.

It appears that you're encountering a timeout issue while loading data from Oracle to HDFS with the target file format set to Parquet. To address this, please follow these steps:

  1. Stop the task.

  2. Target endpoint settings:

    • Open the target endpoint.
    • Navigate to the "Advanced" tab.
    • Access the "Internal Parameters" section.
    • Add the following parameters:
      • cdcTimeout (default 600): set to 12000 to allow a longer timeout period.
      • executeTimeout (default 60): set to 1200 to extend the timeout duration.
      • loadTimeout (default 1200): set to 24000 to accommodate larger data transfers.
    • Save the endpoint configuration.

  3. Resume the task.

Below is the reference article:
https://community.qlik.com/t5/Official-Support-Articles/Qlik-Replicate-Query-timeout-expired/ta-p/17...

 

Regards

Arun

aarun_arasu
Support

Hello @Pranita123 ,

 

Additionally, I kindly request you to check the resource availability on your target database to ensure it has sufficient resources to execute the query effectively. Furthermore, I suggest trying to manually execute the query on the target database and observing the time it takes to complete. This manual execution can provide insights into any potential performance issues specific to the target environment.
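
For example, you could re-run the same merge statement manually from beeline or the Hive CLI and note the elapsed time the client reports. This is a minimal sketch only; the table and column names are the placeholders from the error message, so replace them with your actual names:

-- Manually re-run the merge step that timed out and note the elapsed time
-- reported by the client (placeholder names taken from the error message).
INSERT INTO TABLE db.tablename
SELECT field1, field2
FROM db.tablename_att_tmp;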

 

Regards

Arun


Pranita123
Partner - Contributor III
Author

Thank you for your prompt response, @aarun_arasu.

Actually, we are unable to locate the cdcTimeout and loadTimeout internal parameters. As mentioned before, we are using HDFS as the target.

Regards,

Pranita

john_wang
Support

Hello @Pranita123 ,

You are right - the cdcTimeout and loadTimeout internal parameters are not available for the Hadoop target endpoint. Only executeTimeout is available, and it should help.

BTW, please also check the resource usage on the Hadoop side. The timeout occurs while the temporary table data is merged into the final target table by executing the 'insert into select' command. This operation runs inside the Hadoop cluster, so the timeout may occur when resources such as CPU and memory are lacking.
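
If it helps, you can also look at the Hive execution plan of the same merge statement to gauge how heavy the job running inside the cluster is. This is only a sketch; the names are the placeholders from the error message:

-- Inspect the Hive execution plan of the merge statement that timed out
-- (placeholder names from the error message).
EXPLAIN
INSERT INTO TABLE db.tablename
SELECT field1, field2
FROM db.tablename_att_tmp;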

Regards,

John.

Help users find answers! Do not forget to mark a solution that worked for you! If already marked, give it a thumbs up!
Pranita123
Partner - Contributor III
Author


@john_wang  Thank you for your response.

We want to clarify whether this temporary table is generated only when we use the Parquet format in HDFS, or if it is created for all file types?

john_wang
Support

Hello @Pranita123 ,

Not for all file types - for example, it's not used for CSV-type tables; it's used in the file format conversion stage. However, I cannot remember clearly whether it's used for Parquet or for compression, etc. If you need, I can confirm for you tomorrow.
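
To illustrate the conversion pattern only (a rough sketch, not Replicate's actual DDL; the names are the placeholders from the error message): data is first landed in a temporary staging table and then inserted into the final table in its target format, which is where the 'insert into select' step comes from.

-- Rough sketch of the conversion pattern, not Replicate's actual DDL
-- (placeholder names from the error message above).
CREATE TABLE db.tablename_att_tmp (field1 STRING, field2 STRING)
STORED AS TEXTFILE;                      -- temporary staging table (assumed format)

CREATE TABLE db.tablename (field1 STRING, field2 STRING)
STORED AS PARQUET;                       -- final target table in Parquet

INSERT INTO TABLE db.tablename           -- the merge step that timed out
SELECT field1, field2
FROM db.tablename_att_tmp;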

Feel free to let me know if you need any additional information.

Regards,

John.

Help users find answers! Do not forget to mark a solution that worked for you! If already marked, give it a thumbs up!