Hello,
I am quite new to Talend. I am trying to load a large volume of data from a Snowflake DB into Talend and convert it to JSON in a specified format using Talend Data Mapper, to be loaded into downstream systems. The catch is that each output file must not exceed 15 MB, so I need to split the output based on size. Could you please guide me on how this could be implemented?
Thanks
Hello,
Could you tell us how large this file is likely to be?
Is it possible to use tFileInputRaw to load it entirely into memory? The workflow would be: tFileInputRaw --> tExtractJSONFields.
Alternatively, you could try a Route using cFile and a streaming Splitter (the Apache Camel approach).
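For reference, the Route idea above might look roughly like the following in plain Camel Java DSL. This is only a sketch: the paths and endpoint options are assumptions, and in Talend Studio you would configure the equivalent cFile and cSplitter components instead of writing the DSL by hand. Note that this variant emits one record per output file; enforcing a 15 MB cap would still require an aggregation step after the split.

```java
import org.apache.camel.builder.RouteBuilder;

// Hypothetical route: stream a large file, split it per line without
// loading the whole file into memory, write each piece to its own file.
public class SplitRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("file:/data/in?noop=true")                // cFile equivalent (path assumed)
            .split(body().tokenize("\n")).streaming()  // Splitter in streaming mode
            .to("file:/data/out?fileName=part-${exchangeProperty.CamelSplitIndex}.json");
    }
}
```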
Best regards
Sabrina
Hi Sabrina,
The overall file size is in the 10 to 15 GB range, but the maximum the downstream system accepts is 15 MB. Could you please help with an example or some screenshots of the components?
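To make the size-based splitting concrete, here is a minimal standalone Java sketch (outside Talend; the method name, the newline-delimited record format, and the byte budget are my assumptions). It groups records into parts that each stay under a byte limit, starting a new part whenever the next record would push the current one over. In a Talend job, similar logic could sit in a tJavaFlex, flushing each part to its own output file.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class SizeSplitter {
    // Group records into parts whose total UTF-8 byte size stays under maxBytes.
    // A single record larger than maxBytes still gets its own part.
    public static List<List<String>> splitBySize(List<String> records, long maxBytes) {
        List<List<String>> parts = new ArrayList<>();
        List<String> current = new ArrayList<>();
        long currentSize = 0;
        for (String rec : records) {
            // +1 accounts for the newline that separates records in the file
            long size = rec.getBytes(StandardCharsets.UTF_8).length + 1;
            if (!current.isEmpty() && currentSize + size > maxBytes) {
                parts.add(current);          // flush the full part
                current = new ArrayList<>();
                currentSize = 0;
            }
            current.add(rec);
            currentSize += size;
        }
        if (!current.isEmpty()) parts.add(current);
        return parts;
    }

    public static void main(String[] args) {
        List<String> recs = new ArrayList<>();
        for (int i = 0; i < 10; i++) recs.add("{\"id\":" + i + "}");
        // Tiny 20-byte budget just to demonstrate the splitting; in the real
        // job this would be 15 * 1024 * 1024.
        List<List<String>> parts = splitBySize(recs, 20);
        System.out.println(parts.size());
    }
}
```

Each sample record is 9 bytes including its newline, so with a 20-byte budget the records pair up into parts of two.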
Hello Sriram,
Did you find a solution? I am currently facing the same problem. If there is a solution, please let me know.
Thanks in advance
regards
Fazil M