harsh2
Partner - Contributor III

Qlik Replicate - unable to proceed from main loop execution

Hi Team,

I'm using IBM DB2 as the source and Google BigQuery as the target in my task, but when I run the task it gets stuck in the main loop.

The task has only one table with 52 records.

How do I resolve this problem?

harsh2_0-1700131505466.png

 

1 Solution

Accepted Solutions
john_wang
Support

Hello @harsh2 ,

Thanks for your cooperation. We finally found the reason: the same query took 2 hours to respond even when run in STRSQL. The query is:

SELECT OBJLONGSCHEMA, OBJLONGNAME
FROM TABLE(QSYS2.OBJECT_STATISTICS('*ALL', 'FILE')) J
WHERE JOURNAL_LIBRARY='APACDB' AND JOURNAL_NAME='QSQJRN' AND JOURNALED='YES'

where APACDB and QSQJRN are the journal library name and journal name.

Please enable the "Skip Journal Validation" option in the "IBM DB2 for iSeries" source endpoint to skip the journal validation. A sample:

john_wang_0-1701182959865.png

Regards,

John.

Help users find answers! Do not forget to mark a solution that worked for you! If already marked, give it a thumbs up!

View solution in original post

9 Replies
harsh2
Partner - Contributor III
Author

logs : 

00012292: 2023-11-16T14:45:20 [TASK_MANAGER ]I: All stream components were initialized (replicationtask.c:3697)
00009500: 2023-11-16T14:45:20 [SOURCE_CAPTURE ]I: Last known stream position not found for this task. Starting from current time (db2i_endpoint_capture.c:1292)
00008496: 2023-11-16T14:45:20 [SORTER ]I: Sorter last run state: confirmed_record_id = 0, confirmed_stream_position = '' (sorter_transaction.c:3306)
00009500: 2023-11-16T14:45:20 [SOURCE_CAPTURE ]I: Time gap between Replicate and DB2 server is -4021 seconds and -556948 microseconds (db2i_endpoint_util.c:664)
00009500: 2023-11-16T14:45:20 [SOURCE_CAPTURE ]I: UTC gap = 19800 seconds (local minus UTC) (db2i_endpoint_util.c:602)
00009500: 2023-11-16T14:45:21 [SOURCE_CAPTURE ]I: Initial positioning 'now' by current timestamp '2023-11-16 15:42:22' minus 600 seconds for transaction consistency (db2i_endpoint_capture.c:1407)
00009500: 2023-11-16T14:45:21 [SOURCE_CAPTURE ]I: Initial capture query: SELECT journal_code,journal_entry_type,sequence_number,commit_cycle,entry_timestamp,object,cast(null_value_indicators as VARBINARY(8000)) null_value_indicators,count_or_rrn,receiver_library,receiver_name,"CURRENT_USER",job_name,program_name, minimized_entry_data,cast(entry_data as VARBINARY(32740)) entry_data FROM TABLE(QSYS2.Display_Journal('LACTDTA','PAXUS', OBJECT_OBJTYPE=>'*FILE', STARTING_TIMESTAMP=>'2023-11-16 15:42:22', JOURNAL_CODES=>'CDFRJ', JOURNAL_ENTRY_TYPES=>'PT,PX,UB,UP,DL,DR,BR,UR,CG,DF,CT,SC,CM,CR,RB,FN,PR', STARTING_RECEIVER_LIBRARY=>'*CURLIB', STARTING_RECEIVER_NAME=>'*CURCHAIN')) AS J WHERE (journal_entry_type in('SC','CM','CT','RB','PR') OR (substr(object,1,20) in ('ZFPCPF LACTDTA '))) (db2i_endpoint_capture.c:1363)
00008496: 2023-11-16T14:55:13 [SORTER ]I: Task is running (sorter.c:714)
00008496: 2023-11-16T15:05:13 [SORTER ]I: Task is running (sorter.c:714)
00008496: 2023-11-16T15:15:13 [SORTER ]I: Task is running (sorter.c:714)
00008496: 2023-11-16T15:25:14 [SORTER ]I: Task is running (sorter.c:714)
00008496: 2023-11-16T15:35:14 [SORTER ]I: Task is running (sorter.c:714)
00008496: 2023-11-16T15:45:15 [SORTER ]I: Task is running (sorter.c:714)
00008496: 2023-11-16T15:55:15 [SORTER ]I: Task is running (sorter.c:714)
00008496: 2023-11-16T16:05:16 [SORTER ]I: Task is running (sorter.c:714)
00008496: 2023-11-16T16:15:16 [SORTER ]I: Task is running (sorter.c:714)
00008496: 2023-11-16T16:25:17 [SORTER ]I: Task is running (sorter.c:714)
00008496: 2023-11-16T16:35:18 [SORTER ]I: Task is running (sorter.c:714)
Dana_Baldwin
Support

@harsh2 

Please increase logging for ~5 minutes as follows and see if it provides helpful information:

Performance = trace

Sorter = trace

Source_Capture = verbose

Source_Unload = verbose

Target_Apply = verbose

Target_Load = verbose

Please closely monitor the disk space where the data directory resides during this time, as the logs will be large. That shouldn't be an issue for only 5 minutes, but you can configure when new task logs are triggered in the server settings so the files won't become too large to manage: Setting automatic roll over and cleanup | Qlik Replicate He...

Hope this helps,

Dana

harsh2
Partner - Contributor III
Author

Hi @Dana_Baldwin 

 

I have already set everything to verbose and run the task for 5 minutes.

Please find the task logs attached.

 

Thanks & regards

Harsh Patel

john_wang
Support

Hello @harsh2 ,

It seems the gap is caused by the change-capture SQL. Could you run the query manually (e.g. in STRSQL) to see how long it takes?

A sample SQL (taken from the task log file):

SELECT *
FROM TABLE(QSYS2.Display_Journal('LACTDTA', 'PAXUS',
    OBJECT_OBJTYPE=>'*FILE',
    STARTING_TIMESTAMP=>'2023-11-16 18:48:32',
    JOURNAL_CODES=>'CDFRJ',
    JOURNAL_ENTRY_TYPES=>'PT,PX,UB,UP,DL,DR,BR,UR,CG,DF,CT,SC,CM,CR,RB,FN,PR',
    STARTING_RECEIVER_LIBRARY=>'*CURLIB',
    STARTING_RECEIVER_NAME=>'*CURCHAIN')) AS J
WHERE (journal_entry_type IN ('SC','CM','CT','RB','PR')
   OR (substr(object,1,20) IN ('ZFPCPF LACTDTA ')))

If it takes a long time (e.g. several minutes), please check the DB2 for i side to see why it's slow, or adjust the starting timestamp to fetch fewer rows and see whether the result changes.
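As an aside, the manual timing check suggested above can also be scripted. A minimal sketch in Python; the `pyodbc` connection string and DSN in the commented usage are placeholders, not details from this thread:

```python
import time

def time_query(run_query):
    """Time an arbitrary zero-argument query callable.

    Returns (rows, elapsed_seconds). Against IBM i, run_query would
    typically wrap a pyodbc cursor.execute(...).fetchall() call.
    """
    start = time.perf_counter()
    rows = run_query()
    elapsed = time.perf_counter() - start
    return rows, elapsed

# Hypothetical usage (DSN and SQL are assumptions for illustration):
# import pyodbc
# conn = pyodbc.connect("DSN=MY_IBMI_DSN")
# sql = "SELECT * FROM TABLE(QSYS2.Display_Journal(...)) AS J"
# rows, secs = time_query(lambda: conn.cursor().execute(sql).fetchall())
# print(f"{len(rows)} rows in {secs:.1f}s")
```

If the scripted run is also slow, that points at the DB2 for i side rather than at Replicate itself.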

Hope this helps.

Regards,

John.

harsh2
Partner - Contributor III
Author

Hello @john_wang,

As you suggested, I executed the select query manually; it completed within a few seconds with no significant delay. Is there any optimization or adjustment we can make on the Qlik Replicate side to improve performance?

Thanks & Regards,

Harsh Patel

john_wang
Support

Hello @harsh2 ,

Thanks for your update.

Please set SOURCE_UNLOAD/SOURCE_CAPTURE to Verbose and rerun the task to see if the issue persists. If it does, please open a support ticket and attach the Diag Packages; the support team will help you further.

Regards,

John.

john_wang
Support

Hello @harsh2 ,

Thanks for your cooperation. We finally found the reason: the same query took 2 hours to respond even when run in STRSQL. The query is:

SELECT OBJLONGSCHEMA, OBJLONGNAME
FROM TABLE(QSYS2.OBJECT_STATISTICS('*ALL', 'FILE')) J
WHERE JOURNAL_LIBRARY='APACDB' AND JOURNAL_NAME='QSQJRN' AND JOURNALED='YES'

where APACDB and QSQJRN are the journal library name and journal name.

Please enable the "Skip Journal Validation" option in the "IBM DB2 for iSeries" source endpoint to skip the journal validation. A sample:

john_wang_0-1701182959865.png

Regards,

John.

harsh2
Partner - Contributor III
Author

Hi @john_wang

It worked!
Thank you so much!

Thanks & regards
Harsh Patel

john_wang
Support

Thank you so much for your feedback, @harsh2! It's a pleasure working with you.
