Jaruro2810
Contributor

Spark Batch Job has different logging level in console than in YARN JobHistory

Hello,

I'm using Talend Big Data version 7.3.1 with the 2020-09 patch applied. I'm working over a remote connection to the Talend Administration Center on a Spark Batch job (Spark 2.4.0) in a Cloudera Enterprise 6.3.1 environment.

This Spark Batch job uses the default console log level of WARN, since that setting is unchanged in the job's advanced settings.

(screenshot: Jaruro2810_0-1725383263808.png)

Yet, for some reason, the YARN JobHistory logs show messages down to TRACE level:

[INFO ] 13:08:32 org.apache.spark.deploy.yarn.ApplicationMaster- ApplicationAttemptId: appattempt_1724052715153_5149_000002
[DEBUG] 13:08:32 org.apache.spark.util.ShutdownHookManager- Adding shutdown hook
[TRACE] 13:08:32 org.apache.hadoop.security.SecurityUtil- Name lookup for <<private url>> took 0 ms.
[INFO ] 13:08:32 org.apache.spark.deploy.yarn.ApplicationMaster- Starting the user application in a separate Thread
[INFO ] 13:08:32 org.apache.spark.deploy.yarn.ApplicationMaster- Waiting for spark context initialization...

This is the ONLY job in our project that does this. The other Spark Batch jobs, including newly created ones, correctly log no lower than WARN level in JobHistory.

I'd like to find the reason behind this. Any help is appreciated.
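For context, this is the kind of workaround I'm considering while the root cause is unknown: forcing the root logger back to WARN by shipping a custom log4j.properties to the driver and executors (Spark 2.4 uses log4j 1.x, which honors the `log4j.configuration` system property). This is only a sketch; the file path and jar name below are placeholders, not my actual job.

```shell
# Sketch of a workaround (untested here): override the log4j 1.x config
# shipped with the job so the root logger is capped at WARN.
cat > /tmp/log4j-warn.properties <<'EOF'
log4j.rootLogger=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=[%-5p] %d{HH:mm:ss} %c- %m%n
EOF

# --files distributes the properties file to the YARN containers;
# -Dlog4j.configuration points log4j 1.x at it (relative name, since the
# file lands in each container's working directory).
spark-submit \
  --master yarn \
  --files /tmp/log4j-warn.properties \
  --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=log4j-warn.properties" \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j-warn.properties" \
  my_job.jar
```

That said, I'd rather understand why only this one job behaves differently than paper over it with an override.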

Thanks, Jaruro2810.
