I built a job that ingests a list of tables from Oracle to Hadoop, using a standard job. I do the ingestion through tSqoopImport. As I understand it, with Sqoop the job server acts only as the driver; the data is transferred directly from source to target, not through the job server. There are no other components in my job (tMap, tSort, etc.), just tSqoopImport. I tried increasing the memory up to 50 GB, but that didn't solve the problem.
Does anyone have a solution for this case?
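For context, the job is roughly equivalent to running a plain Sqoop import like the sketch below. The JDBC URL, credentials, table, target directory, and mapper count are placeholders for illustration, not my actual values:

    # Hypothetical equivalent of what tSqoopImport runs under the hood.
    # Connection details and table name are placeholders only.
    sqoop import \
      --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
      --username myuser --password-file /user/me/.pw \
      --table MY_SCHEMA.MY_TABLE \
      --target-dir /data/landing/my_table \
      --num-mappers 4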
Hi all, after doing a simple check, running the job and watching the memory usage, the heap usage was still below the allocated memory. The problem turned out to be that the map and reduce memory allocation in YARN was too small for the job. So I manually set the map memory allocation for the job, on the Sqoop component -> Advanced settings, and set the memory for map and reduce to what the job needs.
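For anyone hitting the same issue: these are standard MapReduce/YARN properties, and the values below are illustrative only, not a recommendation; size them to your cluster's container limits. They can be passed as generic -D arguments to Sqoop (or, assuming your tSqoopImport version exposes it, through the component's additional arguments / Hadoop properties in the Advanced settings):

    # Illustrative sizes only; java.opts heap is kept ~80% of the container size.
    -D mapreduce.map.memory.mb=4096
    -D mapreduce.map.java.opts=-Xmx3276m
    -D mapreduce.reduce.memory.mb=4096
    -D mapreduce.reduce.java.opts=-Xmx3276m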
Hello,
What's your source DB? Could you please show us the full error stack trace? Did you allocate more memory for the current active Job or for the whole Studio?
Have you already checked this article about OutOfMemory exceptions: https://community.talend.com/t5/Migration-Configuration-and/OutOfMemory-Exception/ta-p/21669?
Best regards
Sabrina