Hi All,
My requirement is to read data from Hive, perform a lookup against the result of an Oracle stored procedure, and load the output into an Oracle target database. I can do this with a Standard job in Talend Big Data, but I would like to implement it as a Spark Batch job to improve performance. However, not all of the required components are available in the Big Data Batch job. The alternative is to call the Standard job from within the Big Data Batch job, but how would that work with the Spark configuration: would it execute in Spark mode or in a local JVM? If that is not possible, is there another way to implement this kind of scenario?
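To make the scenario concrete, here is a rough sketch of what I am trying to achieve in plain Spark code, outside Talend. All table names, connection details, and the join key are placeholders, and I am assuming the stored procedure result can be staged in an Oracle table that Spark reads over JDBC, since Spark cannot consume a procedure's result set directly.

```scala
import org.apache.spark.sql.SparkSession

object HiveOracleLookup {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("HiveOracleLookup")
      .enableHiveSupport()
      .getOrCreate()

    // Read the source data from Hive (database/table names are placeholders)
    val hiveDf = spark.sql("SELECT * FROM my_db.source_table")

    // Assumption: the Oracle stored procedure writes its result into a
    // staging table, which is then read here over JDBC
    val lookupDf = spark.read
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//oracle-host:1521/ORCL")
      .option("dbtable", "LOOKUP_RESULT_TABLE")
      .option("user", "db_user")
      .option("password", "db_password")
      .option("driver", "oracle.jdbc.OracleDriver")
      .load()

    // The lookup itself: a left join on a shared key column (placeholder name)
    val joined = hiveDf.join(lookupDf, Seq("lookup_key"), "left")

    // Load the joined result into the Oracle target table
    joined.write
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//oracle-host:1521/ORCL")
      .option("dbtable", "TARGET_TABLE")
      .option("user", "db_user")
      .option("password", "db_password")
      .option("driver", "oracle.jdbc.OracleDriver")
      .mode("append")
      .save()

    spark.stop()
  }
}
```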
Any help would be greatly appreciated.
Thanks in advance,
Soujanya
Hello,
Could you please let us know if this documentation helps?
https://help.talend.com/reader/3JJwnKmjiy4~HSJfqeFJfA/VJ5n2R8WmYR2abJfSKsv8A
Best regards
Sabrina