Hello community,
We are using Oracle 19c as the source database and HDFS as the destination, opting for the Parquet file format for storage in HDFS.
For one of the tables, the throughput count dropped to zero and the load took over 11 hours; previously, the same table loaded in 1 hour 15 minutes and the throughput count was visible.
We have also observed a throughput count of 0 on several other tables.
What could be causing this strange behaviour?
Hi @Pranita123 ,
Is it a full load stage or CDC stage?
I suggest creating a new Full Load task for that table only. If you still observe poor throughput, try using a NULL target endpoint. This will help determine whether the issue lies with the source endpoint or the target endpoint.
If the issue comes from the target, try adjusting the "Maximum file size (KB)" setting to see if it helps.
Regards,
Desmond
Thank you for your quick response, @DesmondWOO.
Is it a full load stage or CDC stage?
-> It occurs in both the Full Load stage and the Store Changes stage.
Hello @Pranita123
This is more likely related to the task design. Check what type of data you have in the tables, look for LOB columns, check how big the tables are, and segregate the tables accordingly.
If you have identified a problematic table, put it into a separate task and share the diagnostics package so the issue can be analyzed.
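As a starting point for the LOB and size check above, queries against the Oracle data dictionary can flag candidate tables. This is a sketch, assuming the replicated schema is `MY_SCHEMA` (a placeholder) and that you have access to the `DBA_LOBS` and `DBA_SEGMENTS` views:

```sql
-- Placeholder schema name; replace MY_SCHEMA with the schema being replicated.

-- Tables in the schema that contain LOB columns
SELECT table_name, column_name, segment_name
FROM   dba_lobs
WHERE  owner = 'MY_SCHEMA';

-- Approximate table and LOB segment sizes, largest first
SELECT segment_name, segment_type,
       ROUND(bytes / 1024 / 1024) AS size_mb
FROM   dba_segments
WHERE  owner = 'MY_SCHEMA'
ORDER  BY bytes DESC;
```

Tables that appear in both results (large segments with LOB columns) are the ones most worth moving into their own task for isolation.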
Regards,
Sushil Kumar