Pranita123
Partner - Contributor III

Issues with Oracle 19c to HDFS Data Transfer: Zero Throughput and Performance Discrepancy

Hello community,

We are using Oracle 19c as the source database and HDFS as the target, storing the data in Parquet format.

For one of the tables, the throughput count showed zero and the load took over 11 hours; initially, the same table loaded in just 1 hour 15 minutes, and we could see the throughput count.
There are also several other tables where we observed a throughput count of 0.
What could be the cause of this strange behaviour?

3 Replies
DesmondWOO
Support

Hi @Pranita123 ,

Is this happening during the Full Load stage or the CDC stage?

I suggest creating a new Full Load task for that table only. If you still observe poor throughput, try using a NULL target endpoint. This will help determine whether the issue lies with the source endpoint or the target endpoint.

If the issue comes from the target, try adjusting the "Maximum file size (KB)" setting and see if that helps.

Regards,
Desmond

 

Help users find answers! Do not forget to mark a solution that worked for you! If already marked, give it a thumbs up!
Pranita123
Partner - Contributor III
Author

Thank you for your quick response, @DesmondWOO.

Is it a full load stage or CDC stage? 

-> It occurs in both the Full Load stage and the Store Changes stage.

 

SushilKumar
Support

Hello @Pranita123 

This is something more related to task design. Check the type of data you have in the tables, look for LOB columns, check how big the tables are, and segregate the tables into separate tasks accordingly.
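The LOB check above can be sketched as a query against Oracle's data dictionary. This is only a sketch: it assumes the python-oracledb driver and SELECT access to the DBA_* views, and the connection details and helper name are placeholders, not anything Replicate provides.

```python
"""Sketch: list LOB columns in a schema with their segment sizes,
largest first, so big-LOB tables can be split into their own task.
Assumes python-oracledb and read access to DBA_LOBS / DBA_SEGMENTS;
user, password, and DSN below are placeholders."""

LOB_SIZE_SQL = """\
SELECT l.table_name,
       l.column_name,
       ROUND(s.bytes / 1024 / 1024) AS lob_mb
FROM   dba_lobs l
JOIN   dba_segments s
       ON  s.owner = l.owner
       AND s.segment_name = l.segment_name
WHERE  l.owner = :owner
ORDER  BY s.bytes DESC"""

def fetch_lob_sizes(owner: str):
    # Import inside the function so the query text above can be
    # inspected or reused even where the driver is not installed.
    import oracledb  # pip install oracledb
    conn = oracledb.connect(user="replicate_user",
                            password="change_me",
                            dsn="oraclehost:1521/ORCLPDB1")
    try:
        with conn.cursor() as cur:
            cur.execute(LOB_SIZE_SQL, owner=owner.upper())
            # Rows of (table_name, column_name, lob_mb)
            return cur.fetchall()
    finally:
        conn.close()
```

Tables that appear near the top of this list are the first candidates to move into a separate task.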

If you have identified a problematic table, put it into a separate task and share the diagnostics package so we can analyze the issue.

Regards,

Sushil Kumar