Hello,
I have an issue with tSalesforceOutput in Talend 6.4.1.
While using tSalesforceOutput with a commit level set above 1, the first batch (successfully updated in the Salesforce org) isn't logged to the output files.
As you can see, 199 records (the first ones to be updated) are missing from the execution output, but a quick check in Salesforce shows that those records were successfully updated.
Could you point me in the right direction to resolve this issue, or at least tell me which version of Talend is free of this error?
Thanks.
Hi,
Have you tried clearing the "Extend Insert" check box to see if rejected rows can be logged into your Excel file? Did you get NULL values?
Best regards
Sabrina
Hi Sabrina,
I've tried it; all the records are successfully updated and logged in the Excel file (no null values), but the run now takes nearly 9 minutes, which is far too long.
Best regards,
Nicolas
Hi,
I've been trying to resolve this problem for 15 days, but I am still stuck at the same point. Have you found anything?
Thanks,
Nicolas
Hello,
Try adding a tLogCatcher component to retrieve errors.
You can write the information to a file or send it by email.
I think that it will help you.
You can also add a tLog component between tSalesforceOutput and the error file.
Best regards,
Marine
Hi,
Please make sure that you have selected the 'Cease on Error' check box in the advanced settings of tSalesforceOutput to stop the execution of the Job when an error occurs.
Best regards
Sabrina
Hi,
@MarineTiphon: I tried your method; it successfully logged all the records into the Excel file (which is a good thing), but it didn't separate the bad ones from the good ones. It makes me wonder whether the error affects only the Reject row?
@xdshi: I selected the 'Cease on error' check box; it stops the Job at the first bad record, but it doesn't tell me exactly at which row the error occurs, although it does give me the error's details.
This gets closer to what I expect this component to do, but I would like (if possible) the import Job
to go through all of my records and then separate all the successes/errors into two distinct files.
Best regards
Nicolas
Hi,
@Moe, the following will not solve your case but may help.
It sounds like a bug, or at least strange behaviour, in the tSalesforceOutput component or the Salesforce API.
I've run the following tests with 253 records (plus 1 header line) from an input file, with 1 expected rejected record (due to a blank name).
Here are the results:
My conclusions:
When the "Extend Insert" option is ticked, records are sent to the API in batches of n records, where n depends on the value of the "Commit Level" parameter (refer to the Salesforce documentation).
With this option, as soon as an error occurs, the result in the success and reject flows is wrong (except when "Commit Level" = 1), but the operation on the Salesforce side is right.
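The batching described above can be sketched as follows. This is a minimal illustration of the assumed behaviour, not Talend's actual implementation; the function name and structure are mine.

```python
# Hypothetical sketch of "Extend Insert" batching: rows are buffered and
# flushed to the Salesforce API once the buffer reaches the Commit Level.
# Names and structure are illustrative, not taken from Talend's code.

def batches(rows, commit_level):
    """Yield successive batches of at most `commit_level` rows."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == commit_level:
            yield batch
            batch = []
    if batch:  # final, possibly incomplete batch
        yield batch

rows = list(range(253))                  # 253 data records, as in the test above
sizes = [len(b) for b in batches(rows, 200)]
print(sizes)                             # [200, 53]
```

With a Commit Level of 200 and 253 data records, two API calls would be expected: one full batch of 200 and one partial batch of 53.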
My question:
Where is this f**k**g bug?
My advices:
If you care about error messages, avoid options 1 and 2.
If you care about response time (with low data volume), choose option 1.
If you care about error messages and response time, consider Bulk option (should be reserved for high data volume, but sometimes...).
Feel free to complete or correct if necessary.
@Moe, does this help or not?
Please let us know and if it does, don't forget to give Kudo and accept the solution if you have solved your case thanks to the proposed solution.
Hello,
I also faced this problem. What I noticed is that the header of the source file is not removed before the content is split into batches, but after. Consequently, the first batch of data sent to Salesforce contains (commit level - 1) data records. Probably an event is not triggered in this case, because neither the main nor the reject flow continues. This is more obvious if you execute the Job with a file that has fewer records than the batch size: you will see the status "Starting" on both flows.
By analyzing your 1st example:
Commit Level: 200
Input file total number of records: 254 = 253 data records + 1 header record;
Batch 1: 200 records - 1 header record = 199 data records (below the commit level, so the records are not transferred to the success/reject files);
Batch 2: 54 remaining data records. There is probably special behaviour implemented for the last batch, since the total number of records will not always be a multiple of the batch size. However, this behaviour does not apply if the file contains ONLY one incomplete batch.
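The off-by-one above can be reproduced in a few lines. This is only an illustration of the hypothesis (header split into batches along with the data, then stripped afterwards); the variable names are mine.

```python
# Illustrative reproduction of the off-by-one hypothesis: if the header
# line is split into batches together with the data and only removed
# afterwards, the first batch carries (commit_level - 1) data records.

commit_level = 200
lines = ["header"] + [f"rec{i}" for i in range(253)]   # 254 lines total

batches = [lines[i:i + commit_level]
           for i in range(0, len(lines), commit_level)]
batches[0] = batches[0][1:]   # header stripped only AFTER batching

print([len(b) for b in batches])   # [199, 54] -> first batch falls short
```

The first batch ends up with 199 records instead of 200, matching the 199 "lost" records reported at the top of the thread.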
Please try to execute the job with a file that does not have a header and let me know if it works. For me it did.
Have fun,
Florentina