kyb515
Contributor

Big data batch job fails when run from another standard/big data batch job

Hello,

I am getting an error when I try to run a big data batch job 'B' from another big data batch job 'A', or from a standard job, using tRunJob.

However, when I run big data job 'B' on its own, it runs fine.

Below is the code written in job 'B'.

In tJava ->

// Get (or create) a SparkSession with Hive support and dynamic partitioning enabled
SparkSession spark = SparkSession.builder()
        .enableHiveSupport()
        .config("hive.exec.dynamic.partition", "true")
        .config("hive.exec.dynamic.partition.mode", "nonstrict")
        .getOrCreate();

// Count the matching configuration rows and print the result to the console
spark.sql("select cast(count(*) as string) from Database_1.missing_files_config WHERE data_source_name = 'source1' AND source_table_name = 'table1'").show();
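
For illustration, an equivalent way to pull that count back into the Java code, using the Dataset/Row/Encoders imports listed in the Advanced settings below (a sketch only; same table and filter values as above):

// Run the count and read the single bigint result back as a long
Dataset<Row> counted = spark.sql(
        "select count(*) from Database_1.missing_files_config"
        + " WHERE data_source_name = 'source1' AND source_table_name = 'table1'");
long rowCount = counted.as(Encoders.LONG()).first();
System.out.println("Matching rows: " + rowCount);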

 

And in tJava Advanced Settings -> 

import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.Encoders;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.spark.sql.SaveMode;
import java.io.FileNotFoundException;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.fs.FSDataOutputStream;
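
Not all of these imports appear in the snippet above; the Hadoop filesystem ones (FileSystem, RemoteIterator, LocatedFileStatus, FSDataOutputStream) are for HDFS handling in other parts of the job, roughly along these lines (a simplified sketch with a hypothetical path, exception handling omitted):

// Recursively list files under an HDFS directory using the job's Hadoop config
FileSystem fs = FileSystem.get(spark.sparkContext().hadoopConfiguration());
RemoteIterator<LocatedFileStatus> files =
        fs.listFiles(new org.apache.hadoop.fs.Path("/hypothetical/dir"), true);
while (files.hasNext()) {
    System.out.println(files.next().getPath());
}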

 

I am getting the error below ->

java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
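
From what I have read, this ClassCastException usually points at Spark task deserialization happening with a mismatched classloader, which would fit job 'B' behaving differently when it runs inside job 'A''s JVM via tRunJob. Could the nested run be picking up a different SparkContext than the standalone run? A quick check I could add at the top of the tJava to narrow it down (a sketch using only standard Spark API; I have not confirmed this is the cause):

// Check whether a SparkSession is already active before getOrCreate(),
// i.e. whether nested job 'B' is reusing parent job 'A''s context
scala.Option<SparkSession> active = SparkSession.getActiveSession();
System.out.println("Active session before getOrCreate: "
        + (active.isDefined() ? active.get().sparkContext().applicationId() : "none"));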
