Anonymous
Not applicable

Oracle to Hadoop Data Migration - Best/Efficient Way to Do a One-Time and Delta (Ongoing) Load - Please Guide?

How can we build a Data Warehouse from our current operational data systems stored in Oracle, move the data to a Hadoop cluster, and then implement processes that update the Data Warehouse from the source systems daily (or periodically)?

Please guide us on the best/most efficient way to do the following with Talend:

 

1. Initial load into the Data Warehouse (source: Oracle; target: Hadoop cluster / HDFS) - what are the detailed steps?

 

2. Periodic delta loads into the Data Warehouse (source: Oracle; target: Hadoop cluster / HDFS) - what are the detailed steps?

1 Solution

Accepted Solutions
Anonymous
Not applicable
Author

Hi,

 

If you have to extract data from a source table without any joins or sub-queries, you can do it directly in a Big Data Batch job, as shown below.

[Screenshot: example Big Data Batch job]
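In code terms, a Big Data Batch job of this shape essentially runs a Spark pipeline that reads the Oracle table over JDBC and writes it to HDFS. As a rough illustration (not Talend's generated code), here is a minimal PySpark sketch of the initial full load; the connection details, table name, and HDFS path are hypothetical:

```python
from pyspark.sql import SparkSession

# Hypothetical connection details and target path -- adjust to your environment.
ORACLE_URL = "jdbc:oracle:thin:@//dbhost.example.com:1521/ORCLPDB1"
TARGET_DIR = "hdfs://namenode:8020/warehouse/staging/orders"

spark = SparkSession.builder.appName("oracle_to_hdfs_initial_load").getOrCreate()

# Full extract of a single source table over JDBC
# (the Oracle JDBC driver jar must be available to the job).
orders = (spark.read.format("jdbc")
          .option("url", ORACLE_URL)
          .option("dbtable", "SALES.ORDERS")
          .option("user", "etl_user")
          .option("password", "********")
          .option("driver", "oracle.jdbc.OracleDriver")
          .option("fetchsize", "10000")
          .load())

# One-time initial load: overwrite the target directory with the full snapshot.
orders.write.mode("overwrite").parquet(TARGET_DIR)
```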

 

But if you have join conditions or complex queries, I would recommend using a Talend Standard job to extract the data and using HDFS components in the Standard job to load the data to the target Hadoop cluster.
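In the same sketch style, the join / complex-query case corresponds to pushing a full SQL statement down to Oracle instead of naming a single table. The snippet below uses Spark's JDBC `query` option and an invented two-table join purely for illustration; in a Talend Standard job the equivalent SQL would normally go into the Oracle input component's query, with an HDFS output component writing the result:

```python
# Same Spark session and ORACLE_URL as in the previous sketch;
# only the source definition changes.
order_facts = (spark.read.format("jdbc")
               .option("url", ORACLE_URL)
               .option("query", """
                   SELECT o.order_id, o.order_date, c.customer_name, o.amount
                   FROM SALES.ORDERS o
                   JOIN SALES.CUSTOMERS c ON c.customer_id = o.customer_id
               """)
               .option("user", "etl_user")
               .option("password", "********")
               .option("driver", "oracle.jdbc.OracleDriver")
               .load())
```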

 

The difference between the one-off and the daily load will mainly be the data volume and the filter condition (the delta only selects rows created or changed since the last run). You also need to make sure adequate memory is allocated to the job by adjusting its memory parameters.
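To make the "filter condition" concrete: a common pattern is a watermark on a last-modified timestamp column, so each run only pulls rows changed since the previous run. A hedged sketch, reusing spark, ORACLE_URL and TARGET_DIR from the first snippet and assuming a hypothetical LAST_UPDATED column and a persisted watermark value:

```python
# Watermark from the previous successful run; in practice you would persist it
# (e.g. in a control table or a small HDFS file) and read it back at job start.
last_run = "2024-01-01 00:00:00"

delta = (spark.read.format("jdbc")
         .option("url", ORACLE_URL)
         .option("query", f"""
             SELECT *
             FROM SALES.ORDERS
             WHERE LAST_UPDATED > TO_TIMESTAMP('{last_run}', 'YYYY-MM-DD HH24:MI:SS')
         """)
         .option("user", "etl_user")
         .option("password", "********")
         .option("driver", "oracle.jdbc.OracleDriver")
         .load())

# Periodic delta load: append only the new/changed rows to the existing data set.
delta.write.mode("append").parquet(TARGET_DIR)
```

Note that a plain append only works cleanly for insert-mostly sources; if rows are updated in Oracle, you would typically add a later merge/compaction step on the Hadoop side. On the memory side, a Spark-based job is usually tuned through settings such as spark.executor.memory, while a Talend Standard job gets its headroom from the JVM -Xms/-Xmx arguments in the job's advanced run settings.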

 

Warm Regards,
Nikhil Thampi

Please show your appreciation to Talend community members by giving Kudos for sharing their time on your query. If your query is answered, please mark the topic as resolved.

 
