Hello all,
I have an issue; maybe you can help me.
I have a job that loads data from two delimited files, performs a lookup, and writes to a positional (fixed-width) file. The name of the output file depends on one field: for example, with Row=AH and Row=BH I will end up with two files, each containing all the information for that row type (and likewise for the other row types). Right now I have something like the image in the annex.
My problem is the time it takes to process all the data. I will have 8M lines, so the job takes some hours to run. Can anyone help me optimize it?
Thanks,
Elisabete
I'm not sure I fully understand, but I *think* I may have an idea of how you can fix this. If you have 8M rows of data, how many dynamic files are you likely to have? You seem to be iterating over the data on each row. What I *think* you need to do is group all of your data by the file name it is going to, and send all of the data to the respective file in one go. This will remove the latency of opening each file on every row. Is that what you are trying to do?
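To illustrate the idea outside of any ETL tool, here is a minimal Python sketch (hypothetical names; it assumes the routing field is the first column of a semicolon-delimited file, and invents an `output_<key>.txt` naming scheme). Each output file is opened exactly once and kept open while rows stream through, instead of being re-opened on every row:

```python
import csv
import os

def split_by_field(input_path, out_dir='.', key_index=0, delimiter=';'):
    """Route each row to output_<key>.txt, opening each file only once."""
    handles = {}  # one open (file, writer) pair per distinct key value
    try:
        with open(input_path, newline='') as f:
            for row in csv.reader(f, delimiter=delimiter):
                key = row[key_index]
                if key not in handles:
                    out = open(os.path.join(out_dir, f'output_{key}.txt'),
                               'w', newline='')
                    handles[key] = (out, csv.writer(out, delimiter=delimiter))
                handles[key][1].writerow(row)
    finally:
        for out, _ in handles.values():
            out.close()
```

This streams the 8M rows in a single pass, and the number of open handles stays equal to the number of distinct values of the routing field.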
The number of dynamic files depends on the field. If the field has 4 distinct values, I will have 4 files.
I've done it: I iterate first over this field (with duplicates removed), then look up the rest of the information for each value. Since I'm now iterating over only a few records (max 20), the job runs much faster.
Thanks.
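For reference, the approach described above can be sketched in plain Python like this (hypothetical names; it assumes the routing field is the first column of a semicolon-delimited file): first collect the distinct field values, then make one filtered pass per value.

```python
import csv
import os

def split_by_distinct(input_path, out_dir='.', key_index=0, delimiter=';'):
    # Pass 1: distinct values of the routing field (duplicates removed).
    with open(input_path, newline='') as f:
        keys = {row[key_index] for row in csv.reader(f, delimiter=delimiter)}
    # Then one filtered pass per distinct value (few values, e.g. max 20).
    for key in keys:
        out_path = os.path.join(out_dir, f'out_{key}.txt')
        with open(input_path, newline='') as f, \
             open(out_path, 'w', newline='') as o:
            writer = csv.writer(o, delimiter=delimiter)
            for row in csv.reader(f, delimiter=delimiter):
                if row[key_index] == key:
                    writer.writerow(row)
```

Note that this re-reads the input once per distinct value, which is acceptable only because there are very few values; with many values, the single-pass grouping approach would scale better.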