Hello everyone,
I'm using Talend 7 on a Linux server. Whenever I run a Big Data job on EMR 5.8 with Spark 2.2, I get the error below.
Caused by: org.apache.spark.SparkException: Task failed while writing rows
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:191)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:190)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 24D95F4F197EEB81), S3 Extended Request ID: V6x04OyLMPA1qA1bbdPBkTJk15V9UUaSaNdHFCJb0G8daHE1gU1RPc3/ybWL/OxR+DtgfiX01fg=
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1587)
Does anyone know why this is happening?
Note: I'm getting this error intermittently, not on every run.
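For context, an intermittent AccessDenied during Spark writes often points at the IAM role attached to the EMR instance profile missing one of the actions the write path uses (multipart-upload actions in particular are easy to overlook, since they are only exercised for larger output files). The sketch below shows the kind of S3 permissions such a role typically needs; the bucket name is a placeholder, and your actual policy may need to be broader or scoped differently:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-output-bucket"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": "arn:aws:s3:::my-output-bucket/*"
    }
  ]
}
```

If the bucket uses KMS encryption, the role would additionally need the relevant kms:Encrypt/kms:Decrypt/kms:GenerateDataKey permissions on the key, which can also surface as an intermittent 403.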
Hello,
Would you mind posting screenshots of your Big Data job design on the forum? That would help us address your issue.
Best regards,
Sabrina