Hi folks,
Really grateful for any help on this one, please.
There is an excellent Knowledge Base item here from @KellyHobson explaining how to cater for Carriage Return and Line Feed in the field contents. I have come across the same issue but can't seem to resolve it in the same way.
The contents of my source CSV file are shown in a screen grab here:
And the configuration of my CSV Source Endpoint is here:
I am pretty sure I have missed something obvious; I have been trying various combinations of the Source Endpoint configuration but can't find a solution.
All / any help would be fantastic.
Many thanks, jonna
ps - the CSV is attached in txt format to permit upload
pps - diagnostic pack attached too
Hmm, not immediately clear to me.
There is more info in the log: Record number: 17, Offset: 127
If one starts counting at 0, then that's the first line with quotes, with a column delimiter and a linefeed inside the quotes. Still, the 127 does match. I suspect it matches a Replicate internal maximum delimiter size of 128 or something like that.
For your test I would start simple, with well-spaced test records, to avoid having to think about whether counting starts at 0 or 1.
Now, looking at the task JSON in the diagnostics, I do NOT see the db_settings for the source that the pictures suggest. Are you running the right task? I think it should look something like:
"db_settings": {
"$type": "FileSettings",
"syntax": "FileSyntax",
"csvDelimiter": ",",
"csvRowDelimiter": "\\r\\n",
"csvStringQuote": "\"",
I do see you are trying to replace CR and LF - maybe try without that action first?
"replacements": [{
"src_codepoint": 10,
"trg_codepoint": 90
}, {
"src_codepoint": 13,
"trg_codepoint": 90
}]
Hein.
sorry, omitted this earlier:
I don't see attachments.
Was the CSV line with the issue indicated? Are you sure it is part of the screenshot - which Notepad++ line(s)?
Best guess is a data line with a quote character (double quote) followed by a field delimiter within that final text column. Such a combination looks like the end of the quoted text, so that quote character in the data would need to be escaped.
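To illustrate the escaping point with the stdlib csv module (a generic CSV parser, not Replicate itself): a quoted field can safely contain delimiters and newlines, but a literal quote inside the field must be doubled, otherwise quote-plus-delimiter reads as end-of-field.

```python
import csv
import io

# A quoted field containing a comma and a bare newline parses as ONE field:
good = 'id,comment\r\n1,"hello, world\nsecond line"\r\n'
rows = list(csv.reader(io.StringIO(good)))
print(rows[1])  # -> ['1', 'hello, world\nsecond line']

# A literal quote inside the field must be escaped by doubling it ("" -> "):
escaped = 'id,comment\r\n1,"she said ""yes"", twice"\r\n'
rows = list(csv.reader(io.StringIO(escaped)))
print(rows[1])  # -> ['1', 'she said "yes", twice']
```

If the source data has an unescaped quote followed by a comma inside a text column, the parser ends the field there and the rest of the record shifts, which matches the kind of error being reported.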
Hein.
Thanks very much for the prompt reply; txt file and diagnostics attached.
I will take a look at your suggestions and see what changes I can apply - will keep you updated on here.
Thanks, Jonna
Hey @jonna
I saved the attached txt as a .csv, set the record delimiter to \r\n and was able to get the data to replicate to a file target on a full load.
Looking at your diagnostic package I see the record delimiter is set to \n. Can you try again with \r\n?
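To illustrate why the record delimiter matters here (a naive split for demonstration, not Replicate's actual parser): with \n as the record delimiter, a bare LF embedded inside a quoted field is treated as a record boundary, whereas \r\n leaves it intact, assuming the file uses Windows-style CRLF line endings.

```python
# A CSV with CRLF record endings and a bare LF inside a quoted field.
data = 'id,text\r\n1,"multi\nline"\r\n'

# Splitting records on \n alone cuts the quoted field in half:
print(data.split("\n"))
# -> ['id,text\r', '1,"multi', 'line"\r', '']

# Splitting on \r\n keeps the quoted field's bare \n inside one record:
print(data.split("\r\n"))
# -> ['id,text', '1,"multi\nline"', '']
```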
I've attached my test task json to compare if you would like to import to review.
Let me know if you are still having trouble with running the task.
Best,
Kelly
thanks @Heinvandenheuvel and @KellyHobson - I will make a start on your ideas / suggestions / proposals first thing back at my desk tomorrow.
many thanks, jonna
Hi Jonna,
did any of the solutions work for you?
Thanks
Lyka
hi folks
thanks @lyka and @Heinvandenheuvel and @KellyHobson
I have done a test file and it worked a treat. I haven't, however, tested the final solution just yet - the CSV has a couple of hundred columns and I need to squeeze some time in to complete it. For now I will gratefully give the answers the thumbs up and set the query to ACCEPTED.
Hopefully I won't need any more help, but I'll refer back here if that is the case.
Cheers, Jonna