For the past few weeks, the Compose task has randomly failed with the error below. Hive itself was working fine at the time the error was reported in Qlik. I would like to understand and fix the reason this is occurring. Also, if we re-run the task, it starts processing.
**************
2024-01-15 20:35:46 [Engine ] [ERROR] Project: "CDP_PROD_Compose" , Task: Task ID 1209 Name: "CDP RAW to SAMS_CDC" ETL_TASK_FINISHED_WITH_ERROR, Error: sqlstate '08S01', errorcode '500051', message '[Cloudera][HiveJDBCDriver](500051) ERROR processing query/statement. Error Code: 2, SQL state: Error while compiling statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1703955353335_0003_4458_00, diagnostics=[Vertex vertex_1703955353335_0003_4458_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: _dummy_table initializer failed, vertex=vertex_1703955353335_0003_4458_00 [Map 1], java.io.IOException: java.lang.IllegalArgumentException
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:532)
at org.apache.tez.mapreduce.hadoop.MRInputHelpers.generateOldSplits(MRInputHelpers.java:476)
at org.apache.tez.mapreduce.hadoop.MRInputHelpers.generateInputSplitsToMem(MRInputHelpers.java:325)
at org.apache.tez.mapreduce.common.MRInputAMSplitGenerator.initialize(MRInputAMSplitGenerator.java:121)
at org.apache.tez.dag.app.dag.RootInputInitializerManager.lambda$runInitializer$3(RootInputInitializerManager.java:199)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
at org.apache.tez.dag.app.dag.RootInputInitializerManager.runInitializer(RootInputInitializerManager.java:192)
at org.apache.tez.dag.app.dag.RootInputInitializerManager.runInitializerAndProcessResult(RootInputInitializerManager.java:173)
at org.apache.tez.dag.app.dag.RootInputInitializerManager.lambda$createAndStartInitializing$2(RootInputInitializerManager.java:167)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.IllegalArgumentException
at java.base/java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1293)
at java.base/java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1179)
at java.base/java.util.concurrent.Executors.newFixedThreadPool(Executors.java:92)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getNonCombinablePathIndices(CombineHiveInputFormat.java:481)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:521)
... 17 more
]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1703955353335_0003_4458_01, diagnostics=[Vertex received Kill in INITED state., Vertex vertex_1703955353335_0003_4458_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1, Query: SELECT CONCAT(date_format(MIN(`start_time`), "yyyyMMdd'T'HHmmss"), '_', date_format(MAX(`end_time`), "yyyyMMdd'T'HHmmss"))
FROM `oms_staging`.`attrep_cdc_partitions`
WHERE `partition_name` <= '20240115T180000_20240115T190060'
AND `partition_name` > '20240115T180000_20240115T190060'
***************
Thanks
Wert
Hello @wert1
Kindly try adding the following in the task Settings:
Retry on these SQL state classes: 08S01
Retry also on these error codes: 500051
Run the task and observe the behavior. If you are still facing the issue after adding those to the retry mechanism, kindly open a support ticket.
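For context, the root `IllegalArgumentException` in your trace is thrown by the `ThreadPoolExecutor` constructor, which Hive's `CombineHiveInputFormat.getNonCombinablePathIndices` reaches via `Executors.newFixedThreadPool`. That constructor rejects a non-positive pool size, which can happen transiently if the computed thread count (derived from the number of input paths) comes out as zero; this is just a sketch of the mechanism, not a confirmed diagnosis of your cluster:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizeDemo {
    public static void main(String[] args) {
        // A fixed pool with a positive size is created normally.
        ExecutorService ok = Executors.newFixedThreadPool(4);
        ok.shutdown();

        // But if the computed thread count is 0 (e.g. derived from an
        // empty list of input paths), ThreadPoolExecutor's constructor
        // throws IllegalArgumentException -- the same exception class
        // seen at the bottom of the Tez stack trace.
        try {
            Executors.newFixedThreadPool(0);
            System.out.println("no exception");
        } catch (IllegalArgumentException e) {
            System.out.println("IllegalArgumentException");
        }
    }
}
```

This would also explain why a re-run succeeds: by then the partition data is present, the split calculation produces a positive thread count, and the query compiles.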
Regards,
Suresh