NewbieQlik
Contributor III

Qlik Replicate Failure

I have created a replication task to replicate data from a source Oracle DB to Snowflake as part of my initial load. For some tables, the load failed after loading a portion of the table's data, with the error below.

Handling End of table 'schemaname'.'Tablename' loading failed by subtask 8 thread 1
Failed to put file '/opt/attunity/replicate/data/tasks/orcl_conn/cloud/94/LOAD00000005.csv', size 1559884194
Failed to send file /opt/attunity/replicate/data/tasks/orcl_conn/cloud/94/LOAD00000005.csv to Snowflake stage
RetCode: SQL_ERROR SqlState: HY000 NativeError: 40 Message: [Snowflake][Snowflake] (40)
Error encountered when executing file transfer: Failed to upload file /opt/attunity/replicate/data/tasks/orcl_conn/cloud/94/LOAD00000005.csv.

Failed (retcode -1) to execute statement: 'PUT 'file:///opt/attunity/replicate/data/tasks/orcl_conn/cloud/94/LOAD00000005.csv' @"snowflakedb"."PUBLIC"."ATTREP_IS_ABC_3c11ce5e_b751_4e72_833d_e64dae8e0389"/94/ AUTO_COMPRESS = TRUE SOURCE_COMPRESSION = NONE ;'

Trying to understand why this failed and how to resolve it.

18 Replies
Steve_Nguyen
Support

You need to be on the latest Snowflake ODBC driver, 2.25.xx. If you still have the problem, it's best to open a support ticket to isolate whether it is a Replicate issue or a Snowflake ODBC issue.

Dana_Baldwin
Support

Hi @AutomatedUser 

Are you using the parallel load feature? If so, can you try without it? I found a case where the issue was caused by the table's partition segments.

Thanks,

Dana

AutomatedUser
Contributor II

@Dana_Baldwin Yes, we've experienced this while using parallel load configured to use both partitions and data ranges. Since the table has 1.7B records, I don't think we can do a straight load, as that may take days.

AutomatedUser
Contributor II

@Steve_Nguyen all the documentation says to use 2.24.x.  What was the change in 2.25.x that resolves the issue?

Steve_Nguyen
Support

Correct, the documentation shows 2.24, but we do see issues with 2.24 at times, so it's best to be on 2.25.xx or newer.

AutomatedUser
Contributor II

For anyone who finds this: we were able to resolve the issue by changing a couple of parameters.

  1. When Qlik stages data, it first compresses the file before sending it to Snowflake (or to other cloud storage if you're using an external stage). If you don't explicitly set the location for this, it defaults to the AppData folder of the user the replication runs under. In our environment (Windows), we installed Qlik on the D: drive of our server (with plenty of storage based on the recommended specs); however, our C: drive, where the OS is installed, is much smaller. We saw the C: drive filling up while these compressed files were being created. In fact, it appears to create a temporary copy of the uncompressed file and then build the compressed file in that same temporary area in the AppData folder, so this put extra load on the OS drive and, with larger table loads and/or parallel loading, sometimes filled up the storage. To work around this, we set the "put_tempdir" parameter in our ODBC parameters to point to the D: drive, where we had plenty of space (see the sketch after this list). You can also define this in the registry by adding values to the driver. There is more info in the Snowflake ODBC driver documentation.
  2. After much experimentation, we also identified another configuration change that drastically improved performance and eliminated the PUT error: we reduced the max file size in the Snowflake endpoint settings to 25 MB. We had originally set this value to 1000 MB and tried values from 100 MB (the default) up to the max of 2000 MB (Qlik's recommendation during troubleshooting). None of these resolved the problem; however, lowering the value to 25 MB did the trick. This setting may be specific to our environment and can be affected by many things, but my best guess is that the PUT command was timing out because compressing the large files took too long, as did the actual upload into the stage. Our network traffic/congestion may also have played a role, and reducing the file size helped alleviate the issue.
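For reference, here is roughly what the put_tempdir setting looked like for us. The path and DSN name below are examples rather than our exact values, so adjust them to your environment, and make sure the directory exists and the account running the Replicate service can write to it:

    In the Snowflake endpoint's additional ODBC connection properties:
        put_tempdir=D:\QlikReplicate\snowflake_put_temp

    Or as a string value in the registry, on the DSN or driver key depending on your setup, for example:
        HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\<your Snowflake DSN>
            put_tempdir = D:\QlikReplicate\snowflake_put_temp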

After making these changes, we were able to load 1.7B records (almost 2 TB of data) into Snowflake in 2h30m.

john_wang
Support

Hello @AutomatedUser ,

Thank you so much for sharing your experience! This is very valuable information for all users!

Best Regards,

John.

Bill_Steinagle
Support

@AutomatedUser 

Good day. Can you open a support case so we can have you enable logging on the endpoint and collect the repsrv.log file with server logging enabled, to help determine the issue? Also, setting up the Snowflake target connection in the ODBC Manager and confirming it connects is a good test to verify the ODBC layer is working. Please let us know, per the steps below, and make sure you are using the 64-bit Snowflake ODBC driver.

Note: if the issue persists, please open a new support case.

Run the 64-bit ODBC Manager as Administrator.

Test the connection with your credentials and, if required, make sure you are connected to your VPN.

[Screenshot: Snowflake ODBC DSN configuration and connection test]

The Snowflake Endpoint should match:

[Screenshot: Snowflake endpoint settings matching the ODBC DSN]
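If you prefer to script the check instead of using the ODBC Manager GUI, a minimal sketch along these lines, run from the Replicate server, confirms the same thing. The DSN name, user, and password are placeholders for your own values, and it must be run with a 64-bit Python so it loads the 64-bit driver:

    # Minimal connectivity check through the Snowflake ODBC driver (64-bit).
    # DSN name and credentials are placeholders - replace with your own.
    import pyodbc

    conn = pyodbc.connect("DSN=SnowflakeTarget;UID=REPLICATE_USER;PWD=********",
                          autocommit=True)
    cur = conn.cursor()

    # If this returns a row, the ODBC layer itself can reach Snowflake.
    cur.execute("SELECT CURRENT_VERSION(), CURRENT_WAREHOUSE(), CURRENT_DATABASE()")
    print(cur.fetchone())

    conn.close()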

 

 

Thanks!

Bill

AutomatedUser
Contributor II

@Bill_Steinagle I've already gone through this with Qlik support.  The case number was 00062390.  There are no issues connecting to Snowflake from the Qlik server and we are using the 64-bit ODBC Snowflake driver, v2.24.0.