iti-attunity-sup
Partner - Creator II

Parallel load does not work; partitions are loaded sequentially in some cases

I have a question regarding parallel load.  

The customer would like to load the data in parallel because it is a huge table.
I would like to know how to ensure that it is executed in parallel.

[Environment]
Qlik Replicate 6.6
Source : Oracle 10.2.0.5
Target : Oracle 19.3

Target settings :
'Use direct path full load' is FALSE

* Using direct path load in Qlik Replicate 6.6 causes table lock conflicts; to avoid the error, the customer set it to FALSE.

Task settings :
Target Table Preparation
If target table already exists: TRUNCATE before loading

* The customer has already created target tables/indexes.

As far as I have confirmed in-house, some runs are loaded in parallel and others are loaded sequentially.

* OK ==> loaded in parallel

[TASK_MANAGER ]I: Start loading segment #1 of 16 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 1. Start load timestamp 000611794DE35E9B (replicationtask_util.c:765)
[TASK_MANAGER ]I: Start loading segment #2 of 16 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 2. Start load timestamp 000611794DE452BF (replicationtask_util.c:765)
[TASK_MANAGER ]I: Start loading segment #3 of 16 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 3. Start load timestamp 000611794DEE9338 (replicationtask_util.c:765)
[TASK_MANAGER ]I: Start loading segment #4 of 16 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 4. Start load timestamp 000611794DF4C60E (replicationtask_util.c:765)
[TASK_MANAGER ]I: Start loading segment #5 of 16 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 5. Start load timestamp 000611794DFBFBCF (replicationtask_util.c:765)
[SOURCE_UNLOAD ]I: Unload finished for segment #1 of segmented table 'SDH101'.'VFTCREDIT' (Id = 1). 100100 rows sent. (streamcomponent.c:3626)
[TARGET_LOAD ]I: Load finished for segment #1 of segmented table 'SDH101'.'VFTCREDIT' (Id = 1). 100100 rows received. 0 rows skipped. Volume transferred 397980304. (streamcomponent.c:3915)
[TASK_MANAGER ]I: Load finished for segment #1 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 1. 100100 records transferred. (replicationtask.c:2686)
[TASK_MANAGER ]I: Start loading segment #6 of 16 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 1. Start load timestamp 0006117951485662 (replicationtask_util.c:765)
[SOURCE_UNLOAD ]I: Unload finished for segment #3 of segmented table 'SDH101'.'VFTCREDIT' (Id = 1). 100100 rows sent. (streamcomponent.c:3626)
[TARGET_LOAD ]I: Load finished for segment #3 of segmented table 'SDH101'.'VFTCREDIT' (Id = 1). 100100 rows received. 0 rows skipped. Volume transferred 397980304. (streamcomponent.c:3915)
[TASK_MANAGER ]I: Load finished for segment #3 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 3. 100100 records transferred. (replicationtask.c:2686)
[TASK_MANAGER ]I: Start loading segment #7 of 16 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 3. Start load timestamp 0006117951AB1C73 (replicationtask_util.c:765)
[SOURCE_UNLOAD ]I: Unload finished for segment #5 of segmented table 'SDH101'.'VFTCREDIT' (Id = 1). 100100 rows sent. (streamcomponent.c:3626)
[TARGET_LOAD ]I: Load finished for segment #5 of segmented table 'SDH101'.'VFTCREDIT' (Id = 1). 100100 rows received. 0 rows skipped. Volume transferred 397980304. (streamcomponent.c:3915)
[TASK_MANAGER ]I: Load finished for segment #5 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 5. 100100 records transferred. (replicationtask.c:2686)


* NG (not OK) ==> loaded sequentially

[TASK_MANAGER ]I: Start loading segment #1 of 16 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 1. Start load timestamp 000611797A8B1819 (replicationtask_util.c:765)
[SOURCE_UNLOAD ]I: Unload finished for segment #1 of segmented table 'SDH101'.'VFTCREDIT' (Id = 1). 100100 rows sent. (streamcomponent.c:3626)
[TARGET_LOAD ]I: Load finished for segment #1 of segmented table 'SDH101'.'VFTCREDIT' (Id = 1). 100100 rows received. 0 rows skipped. Volume transferred 397980304. (streamcomponent.c:3915)
[TASK_MANAGER ]I: Load finished for segment #1 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 1. 100100 records transferred. (replicationtask.c:2686)

[TASK_MANAGER ]I: Start loading segment #2 of 16 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 1. Start load timestamp 000611797B5F5DE9 (replicationtask_util.c:765)
[SOURCE_UNLOAD ]I: Unload finished for segment #2 of segmented table 'SDH101'.'VFTCREDIT' (Id = 1). 100100 rows sent. (streamcomponent.c:3626)
[TARGET_LOAD ]I: Load finished for segment #2 of segmented table 'SDH101'.'VFTCREDIT' (Id = 1). 100100 rows received. 0 rows skipped. Volume transferred 397980304. (streamcomponent.c:3915)
[TASK_MANAGER ]I: Load finished for segment #2 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 1. 100100 records transferred. (replicationtask.c:2686)

[TASK_MANAGER ]I: Start loading segment #3 of 16 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 1. Start load timestamp 000611797C6AF646 (replicationtask_util.c:765)
[SOURCE_UNLOAD ]I: Unload finished for segment #3 of segmented table 'SDH101'.'VFTCREDIT' (Id = 1). 100100 rows sent. (streamcomponent.c:3626)
[TARGET_LOAD ]I: Load finished for segment #3 of segmented table 'SDH101'.'VFTCREDIT' (Id = 1). 100100 rows received. 0 rows skipped. Volume transferred 397980304. (streamcomponent.c:3915)
[TASK_MANAGER ]I: Load finished for segment #3 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 1. 100100 records transferred. (replicationtask.c:2686)

[TASK_MANAGER ]I: Start loading segment #4 of 16 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 1. Start load timestamp 000611797D5675CA (replicationtask_util.c:765)
[SOURCE_UNLOAD ]I: Unload finished for segment #4 of segmented table 'SDH101'.'VFTCREDIT' (Id = 1). 100100 rows sent. (streamcomponent.c:3626)
[TARGET_LOAD ]I: Load finished for segment #4 of segmented table 'SDH101'.'VFTCREDIT' (Id = 1). 100100 rows received. 0 rows skipped. Volume transferred 397980304. (streamcomponent.c:3915)
[TASK_MANAGER ]I: Load finished for segment #4 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 1. 100100 records transferred. (replicationtask.c:2686)

[TASK_MANAGER ]I: Start loading segment #5 of 16 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 1. Start load timestamp 000611797E47D8A2 (replicationtask_util.c:765)
[SOURCE_UNLOAD ]I: Unload finished for segment #5 of segmented table 'SDH101'.'VFTCREDIT' (Id = 1). 100100 rows sent. (streamcomponent.c:3626)
[TARGET_LOAD ]I: Load finished for segment #5 of segmented table 'SDH101'.'VFTCREDIT' (Id = 1). 100100 rows received. 0 rows skipped. Volume transferred 397980304. (streamcomponent.c:3915)
[TASK_MANAGER ]I: Load finished for segment #5 of table 'SDH101'.'VFTCREDIT' (Id = 1) by subtask 1. 100100 records transferred. (replicationtask.c:2686)


Though I'm not sure exactly what causes the difference, I noticed that

- Just after the target tables/indexes are created, the data is loaded in parallel.
- Once the data has been loaded and the task is reloaded (without recreating the tables/indexes; the task only truncates them), the data is loaded sequentially.

Can we say that re-creating tables and indexes before reloading ensures parallel execution?
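
To tell the two cases apart without reading the log by eye, here is a minimal Python sketch that counts how many segment loads overlap per table. It assumes the TASK_MANAGER lines follow exactly the format in the excerpts above; the script and its names (max_concurrency, START_RE, FINISH_RE) are my own illustration, not part of Replicate.

import re
import sys
from collections import defaultdict

# Match the TASK_MANAGER lines from the excerpts above:
#   "Start loading segment #N of M of table 'SCHEMA'.'TABLE' ... by subtask S"
#   "Load finished for segment #N of table 'SCHEMA'.'TABLE' ... by subtask S"
# (the TARGET_LOAD "segmented table" lines deliberately do not match)
START_RE = re.compile(r"Start loading segment #(\d+) of \d+ of table '([^']+)'\.'([^']+)'")
FINISH_RE = re.compile(r"Load finished for segment #(\d+) of table '([^']+)'\.'([^']+)'")

def max_concurrency(log_path):
    """Return {(schema, table): peak number of segments loading at once}."""
    active = defaultdict(set)   # (schema, table) -> segment numbers in flight
    peak = defaultdict(int)
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = START_RE.search(line)
            if m:
                seg, schema, table = m.groups()
                active[(schema, table)].add(seg)
                peak[(schema, table)] = max(peak[(schema, table)],
                                            len(active[(schema, table)]))
                continue
            m = FINISH_RE.search(line)
            if m:
                seg, schema, table = m.groups()
                active[(schema, table)].discard(seg)
    return peak

if __name__ == "__main__":
    for (schema, table), n in max_concurrency(sys.argv[1]).items():
        verdict = "parallel" if n > 1 else "sequential"
        print(f"{schema}.{table}: peak {n} concurrent segment(s) -> {verdict}")

On the OK log above this reports a peak of 5 concurrent segments for SDH101.VFTCREDIT; on the NG log it reports 1.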

2 Replies
avidary_qlik
Support

Hi @iti-attunity-sup 

The version that you are using has not been supported for some time now.

Please open a support case about the Parallel load question.

As part of the solution, we may ask you to upgrade to the latest version of Replicate.

 

Thank you

Avidar

Dana_Baldwin
Support

Hi @iti-attunity-sup 

For your reference, here are the end support dates for Replicate: Qlik Replicate Product Lifecycle - Qlik Community - 1837201

When you upgrade, please check the release notes for your target version, as more than one upgrade step will be required. Generally, you can only do a direct upgrade from the last two major versions.

Also, please check the user guide on your target version to confirm that your platform, source and target endpoint versions are supported. For example, the 2023.11 version requires Linux Red Hat 8 or compatible (if you use Linux).

Hope this helps!