Error: Exception in thread "main" java.lang.NoClassDefFoundError: com.mysql.jdbc.Driver (Big Data)

Hello all,

 

I have an error in one of my jobs. I usually schedule my jobs and they run without incident, but when this job finished running, it returned the following error:

 

[FATAL]: bda_prod.bmstgics_main_initial_0_1.BMSTGICS_Main_Initial - tRunJob_2 Child job returns 1. It doesn't terminate normally.
Exception in thread "main" java.lang.NoClassDefFoundError: com.mysql.jdbc.Driver
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at bda_prod.ansokrpf_to_hive_initial_0_1.ANSOKRPF_to_Hive_Initial.tMysqlConnection_1Process(ANSOKRPF_to_Hive_Initial.java:2005)
	at bda_prod.ansokrpf_to_hive_initial_0_1.ANSOKRPF_to_Hive_Initial.tJava_1Process(ANSOKRPF_to_Hive_Initial.java:1842)
	at bda_prod.ansokrpf_to_hive_initial_0_1.ANSOKRPF_to_Hive_Initial.runJobInTOS(ANSOKRPF_to_Hive_Initial.java:8640)
	at bda_prod.ansokrpf_to_hive_initial_0_1.ANSOKRPF_to_Hive_Initial.runJob(ANSOKRPF_to_Hive_Initial.java:7926)
	at bda_prod.ansokrpf_main_initial_0_1.ANSOKRPF_Main_Initial.tRunJob_1Process(ANSOKRPF_Main_Initial.java:2741)
	at bda_prod.ansokrpf_main_initial_0_1.ANSOKRPF_Main_Initial.tWaitForSqlData_1Process(ANSOKRPF_Main_Initial.java:2208)
	at bda_prod.ansokrpf_main_initial_0_1.ANSOKRPF_Main_Initial.tOracleConnection_1Process(ANSOKRPF_Main_Initial.java:1917)
	at bda_prod.ansokrpf_main_initial_0_1.ANSOKRPF_Main_Initial.runJobInTOS(ANSOKRPF_Main_Initial.java:7102)
	at bda_prod.ansokrpf_main_initial_0_1.ANSOKRPF_Main_Initial.main(ANSOKRPF_Main_Initial.java:6385)

[ERROR]: bda_prod.bmstgics_main_initial_0_1.BMSTGICS_Main_Initial - tParallelize_1 - null

The schema has several tables; to be exact, there are 21 tables in this job. Only 20 tables were loaded into Hadoop, and 1 table hit this error.
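For context, the failing step in the generated job boils down to a plain JDBC driver lookup. Below is a minimal sketch of that lookup (not the actual Talend-generated code; the connection URL, user, and password are placeholders). A NoClassDefFoundError on com.mysql.jdbc.Driver often means the MySQL Connector/J JAR is not on the classpath of the exported job:

import java.sql.Connection;
import java.sql.DriverManager;

public class MysqlDriverCheck {
    public static void main(String[] args) throws Exception {
        // This is the call that throws NoClassDefFoundError when the
        // MySQL Connector/J JAR is missing from the job's classpath.
        Class.forName("com.mysql.jdbc.Driver");

        // Placeholder connection details, for illustration only.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/somedb", "someuser", "somepassword")) {
            System.out.println("Driver loaded, connection valid: " + conn.isValid(5));
        }
    }
}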

 

I would kindly appreciate any help with this problem, and please feel free to ask any questions regarding the error above.

 

Thank you.

Regards,

Sulaiman

Hello Sabrina,

 

As it turns out, the job also experienced some OutOfMemoryErrors. This is what caused the scheduled job to fail in the TAC.

 

There are many possibilities that could lead to this error; one of them is that the Java heap is exhausted by running many processes at once. We could add more memory, but that would cost more, or we could remove some of the jobs that run in parallel.
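A quick way to confirm the memory pressure is to log the heap figures from inside the job. The hypothetical tJava snippet below (standard Runtime API only) shows the idea:

// Hypothetical tJava code: log JVM heap usage to see how close the job runs to its limit.
Runtime rt = Runtime.getRuntime();
long mb = 1024L * 1024L;
System.out.println("Max heap   : " + (rt.maxMemory()   / mb) + " MB");
System.out.println("Total heap : " + (rt.totalMemory() / mb) + " MB");
System.out.println("Free heap  : " + (rt.freeMemory()  / mb) + " MB");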

 

The most effective approach for us is to run the jobs in sequence: once one job has finished, the next one starts. This takes longer, depending on the volume of data in each job; a single run over a large data set can take about 5 hours. However, giving each job its own time slot and scheduling them all this way keeps everything stable.
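As a rough illustration of that sequencing idea (the launcher paths below are placeholders, not our actual setup), each exported job is started only after the previous one has exited successfully:

import java.util.Arrays;
import java.util.List;

public class RunJobsInSequence {
    public static void main(String[] args) throws Exception {
        // Placeholder launcher scripts for the exported jobs; the paths are hypothetical.
        List<String> launchers = Arrays.asList(
                "/opt/talend/jobs/ANSOKRPF_Main_Initial_run.sh",
                "/opt/talend/jobs/BMSTGICS_Main_Initial_run.sh");

        for (String launcher : launchers) {
            ProcessBuilder pb = new ProcessBuilder(launcher);
            pb.inheritIO(); // forward the job's console output to this process
            int exitCode = pb.start().waitFor();
            System.out.println(launcher + " finished with exit code " + exitCode);
            if (exitCode != 0) {
                break; // stop the chain as soon as one job fails
            }
        }
    }
}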

 

Thank you


Hello,

Thanks for your feedback and for sharing this scenario with us on the forum.

Best regards

Sabrina