Hi Community,
I have a full load task (task1) with the parallel load setting set to 5 and 40 tables in it. When I run the task, tables initially load 5 at a time as expected, but once a couple of huge tables enter the loading segment, the remaining tables get stuck in the queue.
So even though the parallel load setting for the full load task is 5, for quite a long time I can see only 2 tables loading at once, while 7 tables sit in the queue waiting to start.
I checked the QEM analytics section and found that the machine's CPU was peaking at 80-90%.
Can someone help me identify the cause for this?
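For context, the expected semantics of a parallel-load limit can be sketched as a bounded worker pool: up to N tables load concurrently and the rest wait in a queue until a slot frees up. This is only an illustration of the setting's intent, not Qlik Replicate's actual implementation; the table names and timings are made up.

```python
# Illustrative sketch of a "load N tables in parallel" limit as a
# bounded worker pool. NOT Qlik Replicate's implementation.
import threading
import time
from concurrent.futures import ThreadPoolExecutor

PARALLEL_LOAD = 5                       # analogous to the task's parallel load setting
TABLES = [f"table_{i}" for i in range(8)]  # hypothetical table names

lock = threading.Lock()
running = 0
max_concurrent = 0

def load_table(name):
    """Stand-in for a full load of one table."""
    global running, max_concurrent
    with lock:
        running += 1
        max_concurrent = max(max_concurrent, running)
    time.sleep(0.2)                     # stand-in for the actual data transfer
    with lock:
        running -= 1
    return name

with ThreadPoolExecutor(max_workers=PARALLEL_LOAD) as pool:
    results = list(pool.map(load_table, TABLES))

# With 8 tables and 5 slots, 5 load at once and 3 queue behind them.
print("peak concurrency:", max_concurrent)
```

Under this model, seeing only 2 of 5 slots in use would point to something outside the pool (e.g. resource contention) holding loads back, which is the question here.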
Hello @vinayak_m ,
It sounds like the performance bottleneck is being caused by the combination of large tables and high CPU usage, which is limiting how many tables can load in parallel, even though you've set the parallel load to 5.
Since CPU is peaking at 80-90%, the system might be resource-bound. Consider upgrading the hardware (adding more CPU cores or memory) or distributing the load across multiple machines.
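To confirm whether the Replicate machine really is CPU-bound while the full load runs, you can sample CPU utilization directly on the server. A minimal sketch for Linux, reading the aggregate counters from `/proc/stat` (assumes a Linux host; on Windows you would use Task Manager or Performance Monitor instead):

```python
# Quick CPU-utilization sample on Linux, read from /proc/stat.
# Run this on the Replicate server while the full load is in progress.
import time

def cpu_times():
    """Return (idle_ticks, total_ticks) from the aggregate 'cpu' line."""
    with open("/proc/stat") as f:
        vals = list(map(int, f.readline().split()[1:]))
    idle = vals[3] + vals[4]            # idle + iowait
    return idle, sum(vals)

idle1, total1 = cpu_times()
time.sleep(1)                           # sample over a 1-second window
idle2, total2 = cpu_times()

busy_pct = 100.0 * (1 - (idle2 - idle1) / (total2 - total1))
busy_pct = max(0.0, min(100.0, busy_pct))
print(f"CPU busy: {busy_pct:.1f}%")
```

If this stays at 80-90% for the duration of the load, the machine is saturated and adding parallel slots will not help until the load is spread out or the hardware is scaled up.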
Larger tables normally consume more memory and CPU during a full load.
Please refer to the link below for more detail:
How to replicate a very large table
Regards,
Sachin B
Hello @vinayak_m
Kindly set the SOURCE_UNLOAD logging component to TRACE to check why the tables are taking a long time to complete.
Regards,
Suresh
Hi @sureshkumar ,
Thanks for responding.
Those are big tables, so I'm not concerned about why they take a long time. My question is why my tables stay in the queue when they should be moving to the loading section.
If your response is still the same, I'll proceed with setting the SOURCE_UNLOAD log component to TRACE.
Thanks for your help @SachinB ,
I will try out the configuration changes you shared and see if they help.