Davew1
Contributor II

tLoop stalls after 30 iterations

I wonder if anyone can help me.

 

I have a REST API which returns 200 rows of data per request. I'm using a 'high water mark', as recommended by the API provider, to identify the starting record id, and a tLoop to call the API repeatedly. Eventually I'll use a while loop to keep retrieving rows until none are returned, but for initial testing I'm using a for loop with 100 iterations. The problem I'm having is that after the 30th iteration the processing just stalls until I kill the job. Can anyone shed any light on why this might be? It can't be related to the source data, because I'm progressing through the data, so the 30th batch is different each time.

 

Here is a picture of the flow: [screenshot of the job flow: 0683p000009M83Z.png]

The pre-subjob logs on to the API and retrieves an API key, which is used in the subsequent calls in my loop. tDbinput_3 queries my high-water-mark table to get the maximum id, which is fed into the 'Get Prospects' call via a variable. Any ideas, or suggestions on how I might debug this?
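For reference, the high-water-mark loop described above can be sketched in plain Java (the language Talend jobs compile to). The `fetchBatch` method below is a hypothetical stand-in for the real REST call, simulated against an in-memory list of ids; the 200-row batch size comes from the API described above:

```java
import java.util.ArrayList;
import java.util.List;

public class HighWaterMarkLoop {
    static final int BATCH_SIZE = 200; // the API returns up to 200 rows per request

    // Hypothetical stand-in for the REST call: returns up to BATCH_SIZE rows
    // whose id is greater than the current high water mark.
    static List<Integer> fetchBatch(List<Integer> source, int highWaterMark) {
        List<Integer> batch = new ArrayList<>();
        for (int id : source) {
            if (id > highWaterMark && batch.size() < BATCH_SIZE) {
                batch.add(id);
            }
        }
        return batch;
    }

    public static void main(String[] args) {
        // Simulated source data: 450 rows with ids 1..450
        List<Integer> source = new ArrayList<>();
        for (int i = 1; i <= 450; i++) source.add(i);

        int highWaterMark = 0; // normally read from the high-water-mark table
        int batches = 0;

        // The eventual "while" form: keep fetching until a request returns no rows
        while (true) {
            List<Integer> batch = fetchBatch(source, highWaterMark);
            if (batch.isEmpty()) break;
            batches++;
            // Advance the mark to the highest id seen, so the next call starts after it
            highWaterMark = batch.get(batch.size() - 1);
        }

        System.out.println(batches + " batches, final mark " + highWaterMark);
        // prints: 3 batches, final mark 450
    }
}
```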

 

Thanks in Advance

 

3 Replies
Anonymous
Not applicable

If the job hangs, it is almost always caused by locks on external resources.

Please check the following:

Are there any database locks? Keep in mind that a table (e.g. an Exasol table) is sometimes completely locked by a transaction and cannot be read or written until the previous transaction has finished.

The other source of locks is the web service. A web service typically does not allow an unlimited number of open HTTP connections.

I would replace the output database and web service components one by one with a dummy component (like tLogRow) and check which component causes the deadlock.
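The point about a limited number of open HTTP connections can be illustrated with a fixed-size pool: if each iteration takes a connection and never releases it (e.g. a response is never closed), the loop blocks on the first request after the pool is exhausted. A minimal sketch, using a `Semaphore` as a stand-in for a connection pool; the pool size of 30 here is an assumption chosen to mirror the 30-iteration stall:

```java
import java.util.concurrent.Semaphore;

public class ConnectionPoolStall {
    public static void main(String[] args) {
        // Stand-in for an HTTP connection pool with 30 slots (assumed size)
        Semaphore pool = new Semaphore(30);

        for (int i = 1; i <= 100; i++) {
            // tryAcquire models requesting a connection without blocking;
            // a real pooled HTTP client would block here instead, i.e. "stall"
            if (!pool.tryAcquire()) {
                System.out.println("Would stall at iteration " + i);
                break;
            }
            // The bug being modeled: the connection is never released.
            // With pool.release() here, all 100 iterations would complete.
        }
        // prints: Would stall at iteration 31
    }
}
```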

Davew1
Contributor II
Author

Thank you @lli for those suggestions. I have created another job calling a different service of the same API, and it does not get stuck at 30. I don't believe the issue is API concurrency: the API supports 5 concurrent connections, and I can see from the far end of the API that I never have more than one. I will investigate locks on the target DB side to see if that is where the problem lies. That makes more sense, as my data has flowed to the end of the subjob by the time it hangs.

Davew1
Contributor II
Author

OK, so it seems that the issue is with the API after all: if I write the data to a flat file rather than a DB, it stops at the same place (and also if I don't write the data anywhere).