Hi Everyone,
I am converting legacy ETL logic (C++, shell script) into a Talend and big data environment.
I have imported the data from Sybase into Hive tables, and now I want to read that data, perform transformations, and load the results into target Hive tables. The primary challenges are the data volume and the complex transformation logic. Which of the approaches below would give better performance, and why:
1. Creating a standard job using ELT Hive components
2. Creating a Spark Batch job
Or, if there is any other approach, please share it.
Thanks,
Rohini
Hi Rohini,
I believe a Spark Batch job will be better in your case, and you can use the tHiveInput and tHiveOutput components for reading and writing the Hive tables. For all Sybase-related operations you can use a Standard job, so it really comes down to orchestrating your jobs one after another.
Perform your normal tasks with Standard jobs and any big data related activities with Big Data (Spark) jobs.
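For illustration only, here is a rough sketch of the kind of Spark logic a Spark Batch job built from tHiveInput, a transformation step, and tHiveOutput effectively runs: read a Hive source table, transform it in Spark executors, and write the result back to a target Hive table. The database, table, and column names below are made-up placeholders; your actual transformation logic would replace the sample filter and aggregation.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object HiveToHiveBatch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-to-hive-transform")
      .enableHiveSupport()               // needed to read/write Hive tables
      .getOrCreate()

    // Equivalent of tHiveInput: read the staging table loaded from Sybase
    // (hypothetical table name)
    val source = spark.table("staging_db.customer_raw")

    // The complex transformation logic runs in Spark executors in memory,
    // instead of being translated into HiveQL on MapReduce/Tez
    val transformed = source
      .filter(col("status") === "ACTIVE")
      .groupBy("region")
      .agg(
        sum("amount").as("total_amount"),
        countDistinct("customer_id").as("active_customers"))

    // Equivalent of tHiveOutput: write the result to the target Hive table
    transformed.write
      .mode("overwrite")
      .saveAsTable("target_db.customer_summary")

    spark.stop()
  }
}
```

This is why the Spark Batch approach usually scales better for heavy, multi-step transformations: the data stays in memory across steps, whereas the ELT Hive components push each step down as HiveQL, which typically launches separate MapReduce/Tez stages.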
Warm Regards,
Nikhil Thampi
Please appreciate our Talend community members by giving Kudos for sharing their time for your query. If your query is answered, please mark the topic as resolved 🙂