Rohini_B01

Designing Spark batch Job for implementing ETL on Hive tables

Hi Everyone,

 

I am converting legacy ETL logic (C++, shell scripts) to Talend in a big data environment.

I have imported the data from Sybase into Hive tables, and now I want to read the data, perform transformations, and load the results into target Hive tables. The primary challenges are the data volume and the complexity of the transformation logic. Which of the following approaches would give better performance, and why?

 

1. Creating a Standard job using ELT Hive components

2. Creating a Spark Batch job

 

Or, if there is another approach, please share it.

 

Thanks,

Rohini

1 Reply
Anonymous

Hi Rohini,

 

I believe a Spark Batch job will be the better fit in your case, and you can use the tHiveInput and tHiveOutput components. For all Sybase DB-related operations you can use a Standard job. So it is really about sequencing your jobs to run one after another.
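For illustration, here is a minimal sketch (Spark Scala) of the kind of logic such a Spark Batch job runs: read a Hive table (the tHiveInput side), apply the transformations in Spark, and write to a target Hive table (the tHiveOutput side). All database, table, and column names below are hypothetical placeholders, not from your project.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object HiveEtlSketch {
  def main(args: Array[String]): Unit = {
    // Hive support lets Spark read and write metastore-managed tables
    val spark = SparkSession.builder()
      .appName("hive-etl-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // Equivalent of tHiveInput: load the source Hive table as a DataFrame
    val src = spark.table("staging_db.orders")

    // The transformations run distributed on the Spark executors,
    // which is where large volumes and complex logic benefit
    val transformed = src
      .filter(col("status") === "ACTIVE")
      .withColumn("order_year", year(col("order_date")))
      .groupBy(col("customer_id"), col("order_year"))
      .agg(sum(col("amount")).as("total_amount"))

    // Equivalent of tHiveOutput: write the result into the target Hive table
    transformed.write
      .mode("overwrite")
      .saveAsTable("target_db.customer_yearly_totals")

    spark.stop()
  }
}
```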

 

Perform your normal tasks with a Standard job and any big data-related activities with a Big Data job.
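For contrast, here is a hedged sketch of what option 1 (ELT Hive components in a Standard job) roughly boils down to: the transformation is pushed down as a single HiveQL statement over a Hive JDBC connection, so Hive's own engine does the work instead of Spark. The connection details and table names are placeholders.

```scala
import java.sql.DriverManager

object EltHivePushdownSketch {
  def main(args: Array[String]): Unit = {
    // Standard-job style: connect to HiveServer2 over JDBC and push the
    // transformation down as HiveQL (placeholder host, user, and tables)
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn = DriverManager.getConnection(
      "jdbc:hive2://hive-server:10000/default", "etl_user", "")
    try {
      val stmt = conn.createStatement()
      // Transformation and load in one INSERT ... SELECT executed by Hive
      stmt.execute(
        """INSERT OVERWRITE TABLE target_db.customer_yearly_totals
          |SELECT customer_id,
          |       year(order_date) AS order_year,
          |       sum(amount)      AS total_amount
          |FROM   staging_db.orders
          |WHERE  status = 'ACTIVE'
          |GROUP BY customer_id, year(order_date)""".stripMargin)
      stmt.close()
    } finally {
      conn.close()
    }
  }
}
```

With very large volumes and complex multi-step logic, the Spark route usually gives more control over intermediate results and memory, which is why the Spark Batch job is the suggestion above.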

 

Warm Regards,
Nikhil Thampi

Please appreciate our Talend community members by giving Kudos for sharing their time on your query. If your query is answered, please mark the topic as resolved 🙂