I'm not sure Qlik is responsible for this failure. I would rather suspect that Windows as the OS, and/or your network respectively storage system, cannot handle all these threads at the same time, and that at some point you hit the maximum number of (storage) handles that can be held in the queue, and/or that some timeouts occur - maybe there are settings that could be configured.
Otherwise you will need to reduce the maximum number of concurrent tasks and/or add some delaying logic - that is, put the STORE statement inside a loop that checks, maybe via FileTime() or similar, whether the other QVDs have already been written/updated, and delays the store with a SLEEP statement (although I think it's rather hard to get something stable with such an approach).
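As a rough illustration of that delaying idea (the library path, file names and variable name here are made up; SLEEP takes milliseconds, and FileTime() returns an empty value while the file does not exist):

```
// Hypothetical sketch: wait until a sibling QVD exists before storing our own.
LET _otherQvdTime = FileTime('lib://MyLib\Other.qvd');    // empty while the file is absent
DO WHILE Len('$(_otherQvdTime)') = 0
    SLEEP 2000;                                           // pause 2 seconds, then re-check
    LET _otherQvdTime = FileTime('lib://MyLib\Other.qvd');
LOOP
STORE MyTable INTO 'lib://MyLib\MyTable.qvd' (qvd);
```

As said, this only staggers the writes and does not remove the underlying contention, so it remains a workaround rather than a fix.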
I like your idea (issue with # of available handles), but I do not think it's applicable in this case.
These tasks run fine when I set the Max Number of Simultaneous Tasks to 1 or (usually) 2.
Setting this value to 3 creates loading errors most of the time...
I doubt that Qlik Sense cannot handle file access (even considering all the logs and other processes that touch the files) for more than 2 tasks at the same time.
My case looks more like a system resource (memory in particular) limitation... I'm just surprised that the system handles these issues so badly...
Hi Vlad, I will add my findings to your post too.
First, let me explain my theory on what is going on:
The Qlik Sense tasks are not terminating the load script execution properly: the STORE command is still running (writing the QVD) when the task is already marked as finished. In other words, the STORE command does not hold the task open until the write completes, and the still-open file then locks the process for other tasks.
What I did was add this holding time to the load script with the following commands, right after the STORE command:
LET _fwMessage = QvdNoOfRecords('lib://My Library\myqvdfile.qvd');
DO WHILE Len('$(_fwMessage)') = 0
    SLEEP 5000;  // wait 5 seconds before re-checking
    LET _fwMessage = QvdNoOfRecords('lib://My Library\myqvdfile.qvd');
LOOP
QvdNoOfRecords will return NULL as long as the QVD file is still open and being written with the data from your load script. Once the file is ready, the code proceeds as normal and the task terminates.
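Putting the STORE and the wait loop together, the whole pattern could look roughly like this (the table name, library path and trace message are placeholders; SLEEP takes milliseconds):

```
// Store the table, then block until the QVD is fully written and readable again.
STORE MyTable INTO 'lib://My Library\myqvdfile.qvd' (qvd);
LET _fwMessage = QvdNoOfRecords('lib://My Library\myqvdfile.qvd');
DO WHILE Len('$(_fwMessage)') = 0
    TRACE Waiting for QVD to close. QvdNoOfRecords = $(_fwMessage);
    SLEEP 5000;                       // re-check every 5 seconds
    LET _fwMessage = QvdNoOfRecords('lib://My Library\myqvdfile.qvd');
LOOP
// _fwMessage now holds the record count, so the script (and the task) can finish safely.
```

The TRACE inside the loop is optional, but it writes one line per cycle to the reload log, which makes it easy to see how long each QVD took to close.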
So far I have tested this more than 2,000 times, loading more than 2,000 QVD files and about 1 TB of data with 10 tasks running at the same time - without any error.
I hope this helps you. If you test this and still get errors, please let us know.
Thank you for the suggestion.
I've updated my code with the script you suggested, and I was able to execute my whole load process (with 4 simultaneous tasks allowed).
I will continue testing these processes more but it looks like you are correct about this issue.
Just an example (from my log of the task that used to fail before):
It took 10 (!!!) cycles (50 seconds!!!) to close the QVD!
The output QVD contains 21,859,841 records.
"8/4/2016 3:12:12 AM| G_Perimeter STORE process completed. SLEEP loop is starting"
"8/4/2016 3:12:17 AM| G_Perimeter Loop. QVDNoOfRecords = "
"8/4/2016 3:12:22 AM| G_Perimeter Loop. QVDNoOfRecords = "
"8/4/2016 3:12:27 AM| G_Perimeter Loop. QVDNoOfRecords = "
"8/4/2016 3:12:32 AM| G_Perimeter Loop. QVDNoOfRecords = "
"8/4/2016 3:12:37 AM| G_Perimeter Loop. QVDNoOfRecords = "
"8/4/2016 3:12:42 AM| G_Perimeter Loop. QVDNoOfRecords = "
"8/4/2016 3:12:47 AM| G_Perimeter Loop. QVDNoOfRecords = "
"8/4/2016 3:12:52 AM| G_Perimeter Loop. QVDNoOfRecords = "
"8/4/2016 3:12:57 AM| G_Perimeter Loop. QVDNoOfRecords = "
"8/4/2016 3:13:02 AM| G_Perimeter Loop. QVDNoOfRecords = 21859841"
"8/4/2016 3:13:02 AM| G_Perimeter Increment load completed"
"8/4/2016 3:13:02 AM| G_Perimeter_ Table Num of Records = 21,859,841"
Another big QVD (45,710,996 records) was closed after just 4 cycles, but by the time that load finished, all the other QVD generators had already completed.
I will definitely escalate this case to support and I hope they will fix it ASAP.
If it is in fact the case that tasks are marked as terminated while the STORE command is still writing the QVD, that qualifies as a MAJOR bug in my book :-).