Hello,
I have created a job to load data from an on-premises SQL Server to Azure SQL. The source table has 25 million records. My job flow is tMSSQLInput-->tMap-->tFileOutputDelimited-->tAzureStoragePut-->tDBRow. Inside tDBRow I run a bulk load script that fetches the file from Blob storage and inserts it into the database. The source and destination tables have the same schema. When the job reaches tDBRow, I get a data truncation error. When I checked the file written by tFileOutputDelimited, the length is the same: both the source and destination columns are NVARCHAR(120), and the value written to the file is a string of length 60. How can I resolve this?
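For reference, the bulk load script in tDBRow presumably looks something like the sketch below. The external data source, file path, and table name are placeholders rather than values from the actual job, and an external data source (TYPE = BLOB_STORAGE with a SAS credential) is assumed to exist already:

    -- Sketch of a bulk load from Azure Blob Storage into Azure SQL.
    -- All object names below are placeholders.
    BULK INSERT dbo.TargetTable
    FROM 'exports/source_table.csv'             -- file uploaded by tAzureStoragePut
    WITH (
        DATA_SOURCE     = 'MyAzureBlobStorage', -- external data source, TYPE = BLOB_STORAGE
        FIELDTERMINATOR = ';',                  -- must match the tFileOutputDelimited settings
        ROWTERMINATOR   = '\n',
        FIRSTROW        = 1
    );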
Hi
Maybe the error occurs on another column, not the NVARCHAR(120) column you mentioned. I would suggest doing more debugging to find out which column/records throw the error.
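For example, a query like the following can show which source rows would overflow an NVARCHAR(120) target column (the table and column names here are placeholders):

    -- List the longest values in a suspect column. LEN counts characters;
    -- DATALENGTH would return bytes (2 per character for NVARCHAR).
    SELECT TOP (20) KeyColumn, SomeColumn, LEN(SomeColumn) AS char_length
    FROM dbo.SourceTable
    WHERE LEN(SomeColumn) > 120
    ORDER BY LEN(SomeColumn) DESC;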
Regards
Shong
Thanks for your response. The issue was with the encoding; when I changed it to UTF-8, it worked. Now I am facing a com.microsoft.sqlserver.jdbc.SQLServerException: Read timed out error. When any one of the jobs fails because of this, all the other jobs running in parallel fail as well.
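For anyone who hits the same truncation error: one way to keep the flat file and the bulk load consistent is to write the file as UTF-8 in tFileOutputDelimited and declare the matching code page in the bulk load script. A minimal sketch, assuming Azure SQL Database or SQL Server 2016+ (which accept code page 65001) and placeholder object names:

    BULK INSERT dbo.TargetTable
    FROM 'exports/source_table.csv'
    WITH (
        DATA_SOURCE     = 'MyAzureBlobStorage',
        CODEPAGE        = '65001',              -- UTF-8, matching the file encoding
        FIELDTERMINATOR = ';',
        ROWTERMINATOR   = '\n'
    );

As for the Read timed out error, that message generally means the driver gave up waiting on the socket or the connection was dropped. The Microsoft JDBC driver's socketTimeout and queryTimeout connection properties control how long it waits, so checking those on the tDBRow connection is a reasonable first step. And if the parallel jobs share a single database connection (for example, through a shared tDBConnection), one failure can take them all down, so giving each job its own connection is worth trying.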