[resolved] Talend: "The code of method is exceeding the 65535 bytes limit"...
I am loading data from Salesforce to SQL. I've done it for a few objects successfully, but I am running into a problem while loading data from one object. The source object has over 500 columns. I am just dumping the data from source to target after some data type conversions. Initially, I auto-mapped all the columns and it failed. I then tried mapping only a few columns, but it still gives me the same error. However, I have not modified the schema to include just the 10 columns I was using.
I am getting a compilation error: "The code of method is exceeding the 65535 bytes limit."
I've checked other similar posts, but I don't know what exactly I should do. It is a Java limitation, but how can I work around it in Talend? Please let me know what I should do with this Talend job.
Thanks,
Jayesh.
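For context on the error itself: the JVM limits every compiled method to 65535 bytes of bytecode, and Talend generates roughly one large method per component, with one block of assignment/conversion code per mapped column. Below is a minimal, purely hypothetical sketch of the shape of that generated code (class, field, and method names are invented, not Talend's actual output):

```java
// Hypothetical simplification of the code a Talend subjob generates.
public class GeneratedSubjob {
    static class Row {
        String col1;
        String col2;
        // ... ~500 more fields in the real job ...
    }

    void tMap_1_process(Row in, Row out) {
        // Every mapped column adds bytecode to this single method.
        out.col1 = in.col1 == null ? null : in.col1.trim();
        out.col2 = in.col2 == null ? null : in.col2.trim();
        // ... repeated for 500+ columns, each with its conversion logic,
        // until the method body exceeds the JVM's 65535-byte cap and the
        // job fails to compile before it can even run.
    }
}
```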
I guess you only need the tMap because of different data types or slightly different column names.
I am not sure, but could you please check whether shortening the column names solves this issue?
I strongly suggest doing this in the exported XML format of the schema and not in the schema editor itself - that would take ages!
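If you try the XML route, here is a small sketch of how the bulk rename could be scripted. It assumes the exported schema keeps each column in a column element with the name in a "label" attribute - the element and attribute names are assumptions, so check an actual export from your Talend version before running anything like this:

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ShortenSchemaNames {
    public static void main(String[] args) throws Exception {
        // Load the schema XML exported from the Talend schema editor.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File("exported_schema.xml"));

        // ASSUMPTION: one <column> element per column, name in "label".
        // Verify against a real export; the structure may differ by version.
        NodeList columns = doc.getElementsByTagName("column");
        for (int i = 0; i < columns.getLength(); i++) {
            Element col = (Element) columns.item(i);
            col.setAttribute("label", "c" + i);  // c0, c1, c2, ...
        }

        // Write the shortened schema back out for re-import.
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.transform(new DOMSource(doc), new StreamResult(new File("shortened_schema.xml")));
    }
}
```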
Ah, I didn't even realize that I don't need a tMap... Thanks much for pointing this out. I've taken it out, but the error is still occurring. Sorry, I am not clear on this: Is your suggestion to shorten the column names in the source table or the target table?
The only way to shorten the names is if you have a tMap and shorten the names in the source schema. Keep in mind that reading from a database does not depend on the correct schema names; only the position and the type are important.
But I guess that if you cannot solve the situation by removing the tMap, my suggestion will probably not solve the problem either.
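To illustrate why the names don't matter on the reading side, here is a plain-JDBC sketch that fetches columns by position and type alone; the connection string, credentials, table name, and column types are all placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PositionalRead {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; substitute your actual server.
        try (Connection con = DriverManager.getConnection(
                     "jdbc:sqlserver://host;databaseName=db", "user", "pass");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM target_table")) {
            while (rs.next()) {
                // Columns are read by position and type, never by name:
                // renaming a column does not change this code path at all.
                String first = rs.getString(1);  // assumes column 1 is text
                int second = rs.getInt(2);       // assumes column 2 is numeric
                System.out.println(first + " / " + second);
            }
        }
    }
}
```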
What about reading the columns in bunches? From a business-model perspective, it is hardly ever the case that one object genuinely has that huge an amount of attributes. I am pretty sure you can model it as more than one business object.
You would need to read the dataset multiple times, because you will not get everything at once, and you should think about more than one output table, because of the mentioned split of the object into multiple objects.
Both, but the number of columns has much more influence on the code size than the identifier names do.
I am pretty sure that if you use 2 x 250 columns in 2 different subjobs, it will work.
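As a sketch of that 2 x 250 split: the snippet below divides a column list in half and builds one SOQL query per subjob, each also selecting Id so the two halves can be matched up again on the key in the target tables. The object and column names are hypothetical stand-ins:

```java
import java.util.Arrays;
import java.util.List;

public class SplitColumns {
    public static void main(String[] args) {
        // Hypothetical stand-in for the 500+ Salesforce fields.
        List<String> allColumns = Arrays.asList("Name", "Phone", "BillingCity" /* , ... */);

        // Split the list in half, one half per subjob.
        int half = allColumns.size() / 2;
        List<String> firstHalf = allColumns.subList(0, half);
        List<String> secondHalf = allColumns.subList(half, allColumns.size());

        // Each query carries Id so the rows can be joined back together,
        // e.g. via a key column shared by the two output tables.
        String query1 = "SELECT Id, " + String.join(", ", firstHalf) + " FROM Account";
        String query2 = "SELECT Id, " + String.join(", ", secondHalf) + " FROM Account";

        System.out.println(query1);
        System.out.println(query2);
    }
}
```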