Hi,
I am trying to read an HDFS file using tSparkLoad and print the output using tSparkLog, but I am getting the following error:
org.apache.spark.scheduler.TaskSchedulerImpl - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
It is a batch processing job, and I do have sufficient memory (512 MB). The file I am trying to read is only 30 MB.
It's a simple job with the structure tSparkConnection ----> tSparkLoad ----> tSparkLog.
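For reference, I believe the plain-Spark equivalent of this flow is roughly the following Scala sketch (the master URL and HDFS path are placeholders, not my real settings):

import org.apache.spark.{SparkConf, SparkContext}

// Connect to the standalone master (placeholder URL).
val conf = new SparkConf()
  .setAppName("ReadHdfsFile")
  .setMaster("spark://<master-host>:7077")
val sc = new SparkContext(conf)

// tSparkLoad reads the file; tSparkLog prints each row.
sc.textFile("hdfs://<namenode>:8020/path/to/file.txt")
  .collect()
  .foreach(println)

sc.stop()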
Which official version did you get this on? Is there any running application listed when you look at the Spark master UI? Could you please show us screenshots of your job design and component settings?
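For what it's worth, that message usually means one of two things: no workers are registered with the master, or the job requests more memory or cores per executor than any single worker offers, so the scheduler can never place a task. A minimal sketch of the standard Spark settings involved, assuming a standalone cluster (the values here are hypothetical; keep them at or below what a worker advertises in the master UI):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("ResourceCheck")
  .setMaster("spark://<master-host>:7077") // placeholder master URL
  // Hypothetical values: these must not exceed what one worker
  // advertises, otherwise the scheduler never places an executor.
  .set("spark.executor.memory", "512m")
  .set("spark.cores.max", "2")
val sc = new SparkContext(conf)

If the worker shown in the master UI has less than 512m free, lowering spark.executor.memory (or giving the worker more memory) is usually the first thing to try.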