suraju
Contributor II

Qlik Replicate - Data load to Databricks Delta Target Endpoint getting failed

Hi, I am getting the below error while trying to stream data from an Oracle source endpoint to a Databricks Delta target endpoint.

I am already using the same environment to load data from an Oracle source endpoint to a Databricks (Cloud Storage) target endpoint, after which Qlik Compose loads the data into Databricks Delta tables. This runs fine in both pre-prod and production without any issues.

Since we are not using the Databricks target for history capture but simply as an ODS, we are now planning to implement Qlik Replicate to apply the data directly to the Databricks Delta tables.

The source and target endpoint connections show no errors, and as explained, the same environments (Qlik Replicate, ADLS, Databricks) are already being used to load other data to the target. Could you please clarify whether there could be any other issues? I am unsure whether this is related to firewall settings; could you please assist?

Error Message

Handling End of table 'ccb'.'SP_OP_AREA' loading failed by subtask 1 thread 1
Failed to copy data of file F:\Attunity\Replicate\data\tasks\BGE_CCB_ADB_01\cloud\41\LOAD00000001.csv to database
RetCode: SQL_ERROR SqlState: HY000 NativeError: 35 Message: [Simba][Hardy] (35) Error from server: error code: '0' error message: 'org.apache.hive.service.cli.HiveSQLException: Error running query: org.apache.spark.SparkException: This Azure storage request is not authorized. The storage account's 'Firewalls and virtual networks' settings may be blocking access to storage services. Please verify your Azure storage credentials or firewall exception settings.
at org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$.runningQueryError(HiveThriftServerErrors.scala:48)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$execute$1(SparkExecuteStatementOperation.scala:611)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.unity.EmptyHandle$.runWith(UCSHandle.scala:124)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:501)
at org.apache.spark.sql.hive.
Failed (retcode -1) to execute statement: COPY INTO `ccb`.`sp_op_area` FROM(SELECT _c0 as `SP_ID`, _c1 as `FS_CL_CD`, _c2 as `OP_AREA_CD`, cast(_c3 as INT) as `VERSION`, _c4 as `hdr__oper`, cast(_c5 as TIMESTAMP) as `hdr__ts` from 'abfss://platform@preprod.dfs.core.windows.net//poc/landing/8354de61-aff5/41') FILEFORMAT = CSV FILES = ('LOAD00000001.csv.gz') FORMAT_OPTIONS('nullValue' = 'attrep_null', 'multiLine'='true') COPY_OPTIONS('force' = 'true')
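The failing `COPY INTO` reads from an `abfss://` path, and the "request is not authorized" error points at the firewall settings of the storage account embedded in that URL. As a quick sanity check, a small helper like the sketch below (a hypothetical utility, not part of Qlik Replicate) can pull the container and storage account name out of an `abfss://` URL so you know exactly which account's "Firewalls and virtual networks" blade to inspect:

```python
from urllib.parse import urlparse

def parse_abfss(url: str):
    """Split an abfss:// URL into (container, storage_account, path).

    abfss URLs look like:
      abfss://<container>@<account>.dfs.core.windows.net/<path>
    """
    parsed = urlparse(url)
    container, _, host = parsed.netloc.partition("@")
    account = host.split(".")[0]  # host is '<account>.dfs.core.windows.net'
    return container, account, parsed.path

# The path taken from the error message above:
url = "abfss://platform@preprod.dfs.core.windows.net//poc/landing/8354de61-aff5/41"
print(parse_abfss(url))
# → ('platform', 'preprod', '//poc/landing/8354de61-aff5/41')
```

Here the storage account would be `preprod`, so that account's network rules are the ones that must allow access from the Databricks cluster.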

Thanks & Regards

Suresh Raju

2 Replies
DesmondWOO
Support

Hi @suraju ,

As the error message indicates, please check the storage credentials and the firewall/network settings. I would suggest checking with your Databricks Delta team.
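If the storage account firewall is configured with IP rules rather than virtual-network rules, one quick local check is whether the Databricks cluster's outbound (NAT) IP actually falls inside the allowed ranges. The sketch below is not an official procedure, just an illustration using Python's standard `ipaddress` module; all IPs and CIDR ranges shown are made up:

```python
import ipaddress

def ip_allowed(client_ip: str, allowed_rules: list[str]) -> bool:
    """Return True if client_ip matches any firewall rule (single IP or CIDR)."""
    ip = ipaddress.ip_address(client_ip)
    for rule in allowed_rules:
        # strict=False accepts host addresses like '52.146.79.132' as /32 networks
        if ip in ipaddress.ip_network(rule, strict=False):
            return True
    return False

# Hypothetical values: the storage account's firewall IP rules and the
# Databricks cluster's outbound IP.
rules = ["20.42.0.0/16", "52.146.79.132"]
print(ip_allowed("20.42.129.7", rules))  # → True
print(ip_allowed("10.0.0.5", rules))     # → False
```

If the cluster's IP is not covered, the exception has to be added on the storage account side (or the account has to be opened to the relevant virtual network), which is typically done by the Azure/storage team rather than in Qlik Replicate.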

Regards,
Desmond

 

Help users find answers! Do not forget to mark a solution that worked for you! If already marked, give it a thumbs up!
john_wang
Support

Hello @suraju ,

Besides @DesmondWOO's comment, please have a look at a similar article: This Azure storage request is not authorized.

Hope it helps.

John.
