Hi,
I am transferring 5 tables, each with 10+ million rows, from source to target, and the load is taking a very long time (5 hours or more). Is there any performance tuning I can do here? Please help with a detailed explanation. I have seen the tuning options in the user guide, but how much should I set each parameter to? Please suggest source and target best practices with respect to performance, ideally without putting much burden on the source. I am asking about full load.
Thanks
For Full Load, if both the source and target endpoints support parallel load, enable parallel load based on a segmentation column. If parallel load is not an option, use the segmentation column to split the task into multiple tasks. For example, if the 10 million records have a column named dept_id with unique values 01, 02, and 03, you can create three tasks, each with a filter on the dept_id column. Make sure you select a column whose values distribute the records evenly: in the example above, if dept_id 01 holds 9 million of the 10 million rows, splitting gains you little. A quick distribution check is sketched below. Finally, explore the Full Load tuning options as well and refer to the user guide for more details.
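To check whether a candidate segmentation column is evenly distributed before you split the task, you can run a simple GROUP BY count against the source. Here is a minimal sketch in Python, assuming a hypothetical source table EMPLOYEES, the candidate column dept_id, and an ODBC DSN named src_db (all placeholders, adjust for your environment):

import pyodbc

# Connect to the source via ODBC; the DSN name "src_db" is a placeholder.
conn = pyodbc.connect("DSN=src_db")
cur = conn.cursor()

# Count rows per candidate segment value: an even spread means each
# per-segment task (or parallel-load segment) moves a similar amount of data.
cur.execute(
    "SELECT dept_id, COUNT(*) AS cnt "
    "FROM EMPLOYEES GROUP BY dept_id ORDER BY cnt DESC"
)
rows = cur.fetchall()
total = sum(r.cnt for r in rows)
for r in rows:
    print(f"dept_id={r.dept_id}: {r.cnt} rows ({100 * r.cnt / total:.1f}%)")

conn.close()

If one value dominates the counts, pick a different column or group the small values together so each task carries a comparable share of the rows.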
Regards
JR
What are source and target endpoints?
Hi,
Thank you for the reply, but in version 6.5.0.756 this option is disabled. The source we are using is SAP Application DB and the target is Azure Synapse.
Hi @suvbin, without the Full Load parallel option you can break the work into multiple tasks as described above. Copying it here again:
If parallel load is not an option, use the segmentation column to split the task into multiple tasks. For example, if the 10 million records have a column named dept_id with unique values 01, 02, and 03, you can create three tasks, each with a filter on the dept_id column. Make sure you select a column whose values distribute the records evenly: in the example above, if dept_id 01 holds 9 million of the 10 million rows, splitting gains you little. A sketch of the per-task filters follows below.
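As a rough illustration of the split, here is a minimal Python sketch that prints one filter per task. The task names are placeholders, and the $-prefixed column reference is an assumption based on Replicate's record-selection expression syntax, so verify it against the user guide for your version:

# One task per dept_id value; each task's record-selection condition
# keeps only its own segment, so the three tasks together cover the
# full table exactly once.
segments = ["01", "02", "03"]
for seg in segments:
    task_name = f"full_load_dept_{seg}"   # placeholder naming scheme
    filter_expr = f"$DEPT_ID = '{seg}'"   # assumed $-prefixed column syntax
    print(f"{task_name}: {filter_expr}")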
Regards
JR