
Oracle to Hadoop Data Migration - Best/Efficient Way to Do a One-Time/Delta (Ongoing) Load - Please Guide?
We need to build a data warehouse from our current operational data systems stored in Oracle, move the data to a Hadoop cluster, and then implement processes that keep the data warehouse updated from the source systems daily (or periodically).
Using Talend, what is the best/most efficient way to do the following?
1. Initial load into the data warehouse (source: Oracle, target: Hadoop cluster - HDFS). What are the detailed steps?
2. Periodic delta loads into the data warehouse (source: Oracle, target: Hadoop cluster - HDFS). What are the detailed steps?
Accepted Solutions

Hi,
If you have to extract data from a source table without any joins or subqueries, you can do it directly in a Big Data Batch job.
But if you have join conditions or complex queries, I would recommend using a Talend Standard job to extract the data and HDFS components within that Standard job to load the data to the target Hadoop cluster.
The difference between the one-off and daily loads will come down to the data volume and the filter condition. You also need to make sure adequate memory is allocated to the job by adjusting its memory parameters.
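To make the one-off vs. delta split concrete, here is a rough PySpark sketch of the same flow outside Talend (a Big Data Batch job generates Spark code along these lines); in the Studio you would model it with an Oracle input component such as tOracleInput and an HDFS output such as tHDFSOutput. All hostnames, credentials, table names, and the LAST_UPDATED watermark column below are hypothetical placeholders.

```python
from pyspark.sql import SparkSession

# Hypothetical connection details - replace with your own environment values.
ORACLE_URL = "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB1"
TABLE = "SALES.ORDERS"
WATERMARK_COLUMN = "LAST_UPDATED"  # assumes the source table has a modification timestamp
HDFS_TARGET = "hdfs://namenode:8020/warehouse/orders"

spark = SparkSession.builder.appName("oracle_to_hdfs").getOrCreate()

jdbc_options = {
    "url": ORACLE_URL,
    "user": "etl_user",            # placeholder credentials
    "password": "etl_password",
    "driver": "oracle.jdbc.OracleDriver",
}

# 1. Initial (one-off) load: pull the whole table and overwrite the target.
full_load = (spark.read.format("jdbc")
             .options(**jdbc_options)
             .option("dbtable", TABLE)
             .load())
full_load.write.mode("overwrite").parquet(HDFS_TARGET)

# 2. Periodic delta load: same read, but push a watermark filter down to Oracle
#    and append only the changed rows. In practice, last_run would come from a
#    control table or a job context variable, not a hard-coded literal.
last_run = "2024-01-01 00:00:00"
delta_query = (f"(SELECT * FROM {TABLE} "
               f"WHERE {WATERMARK_COLUMN} > "
               f"TO_TIMESTAMP('{last_run}', 'YYYY-MM-DD HH24:MI:SS')) t")
delta_load = (spark.read.format("jdbc")
              .options(**jdbc_options)
              .option("dbtable", delta_query)
              .load())
delta_load.write.mode("append").parquet(HDFS_TARGET)
```

The same idea applies inside Talend: the initial-load job reads the full table, while the delta job adds a WHERE clause on a modification-timestamp (or sequence) column and appends to the target, so only the query filter and the write mode differ between the two runs.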
Warm Regards,
Nikhil Thampi
Please appreciate our Talend community members by giving Kudos for sharing their time for your query. If your query is answered, please mark the topic as resolved

