Miguel, pasting a mockup of my script to explain a bit more.
for vWeek = 1 to 52
    for vTrans = 1 to 500
        LOAD colT1, colT2
        FROM [http://yourdomain.com/history/weeks.asp?qsWeek=$(vWeek)&trans=$(vTrans)]
        (html, codepage is 1252, embedded labels, table is @2);
    next
next
I don't know how many trans there are. It could be 2 or 234 or 478. Let's say it is 234: when the for loop reaches 235 and tries to load that, an error occurs because no such URL exists. When that happens I would like to skip the rest of the inner loop up to 500 (without errors and without loading anything), so that vWeek increments and we start all over again.
It gets even more complicated because they use leading zeros, so trans #1 is really 001 and so on. I sorted that out with if..elseif..else, but it's a lot of nesting.
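As a side note, the nested if..elseif..else for the leading zeros can usually be replaced with a format pattern in Num(). A sketch of the inner loop, reusing the mockup URL from above:

```qlikview
for vTrans = 1 to 500
    // Zero-pad to three digits: 1 -> 001, 23 -> 023, 234 -> 234
    LET vTransPadded = Num(vTrans, '000');
    LOAD colT1, colT2
    FROM [http://yourdomain.com/history/weeks.asp?qsWeek=$(vWeek)&trans=$(vTransPadded)]
    (html, codepage is 1252, embedded labels, table is @2);
next
```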
Maybe you could use ScriptError and ErrorMode together with the proposed Exit For?
SET ErrorMode = 0; // prevents the script from stopping in case of an error
for vWeek = 1 to 52
    for vTrans = 1 to 500
        LOAD colT1, colT2
        FROM [http://yourdomain.com/history/weeks.asp?qsWeek=$(vWeek)&trans=$(vTrans)]
        (html, codepage is 1252, embedded labels, table is @2);
        // ScriptError holds the error code of the last statement;
        // leave the inner loop as soon as a URL is missing
        exit for when ScriptError > 0;
    next
next
I have a similar issue.
I'm reading a table where I can only pull 50k records at a time due to the limitations of an ODBC driver for a cloud-based application. Right now I am counting the number of records first (SELECT COUNT(*) FROM TABLE) so I know how many times to perform the loop, but that simple count is taking a very long time. I want to eliminate that step and just run the loop and have it stop when all the records have been read.
How would I do that? It seems so simple but I can't find the solution.
You could consider whether incremental loads are possible - see Incremental Load. If not, you could look at a record ID from this table, maybe through metadata from the database's system tables. Splitting the task into two tasks might also help your load times: a first step that generates only the record count could then run before, and in parallel with, other tasks.
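If the driver supports SQL paging, another option is to skip the COUNT entirely and just page through the table until a chunk comes back short. A sketch, assuming a LIMIT/OFFSET syntax and a table name MyTable (both hypothetical - adjust to whatever paging your driver actually accepts):

```qlikview
SET vChunkSize = 50000;
LET vOffset = 0;
LET vRowsBefore = 0;

Do
    // Chunks auto-concatenate into Data because the field list is identical
    Data:
    SQL SELECT *
    FROM MyTable
    LIMIT $(vChunkSize) OFFSET $(vOffset);

    // How many rows did this chunk actually add?
    LET vRowsLoaded = NoOfRows('Data') - $(vRowsBefore);
    LET vRowsBefore = NoOfRows('Data');
    LET vOffset = $(vOffset) + $(vChunkSize);
Loop While $(vRowsLoaded) = $(vChunkSize)
```

A full chunk means there may be more rows, so the loop continues; the first short (or empty) chunk ends it, with no up-front count needed.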