I am trying to process a 2 GB flat file using a Spark big data job.
It takes a very long time (3+) just to read the file.
I have also updated the number of executors and the executor memory, but that doesn't help either.
Any suggestions are appreciated.
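In case it helps others reading along, here is a hedged sketch of the kind of `spark-submit` settings that are typically tuned when a file read is slow. The class name, jar name, and all values below are placeholders for illustration, not taken from the actual job:

```shell
# Hypothetical spark-submit invocation; adjust names and values for your EMR cluster.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 10 \
  --executor-cores 4 \
  --executor-memory 8g \
  --conf spark.sql.files.maxPartitionBytes=134217728 \
  --class com.example.MyJob \
  my-job.jar
```

Here `spark.sql.files.maxPartitionBytes` (128 MB above, which is also its default) controls how the input file is split into read tasks. Separately, if the flat file is being read as CSV with `inferSchema=true`, supplying an explicit schema avoids an extra pass over the data, which alone can cut the read time substantially.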
@uganesh, how are you executing your job? Are you running it from Studio?
@manodwhb, thanks for the quick response.
I am running the job on a Remote Engine (submitted from Studio), hosted in my AWS environment and connecting to an EMR cluster.
@uganesh, so you built the job, copied the zip file to the remote engine, and are executing the .sh file?
@uganesh, are all executors being utilized?