Hi All,
I'm pulling one SQL table from a SQL Server database. These are the steps that need to be followed.
Steps taken:
I'm not able to write all the Excel sheets.
The complexity issues are:
Could someone please suggest a better approach to overcome this issue? I'm eagerly looking forward to any helpful response.
@manigandan J, can you share a screenshot of your job? I want to see how you write each part of the data into a different sheet.
Regards
Shong
If you process the data iteratively, check the 'Enable parallel execution' box; this option will improve performance for large volumes of data.
I simply read the same table from the database and write it into one Excel file that contains several sheets.
One way I have tried:
Another way: I dropped all the data into tBufferOutput and used tBufferInput to read the data again and again.
Note: I'm writing to different sheets (i.e. 30 sheets) of the same Excel file, as per the business requirements.
How do I control memory utilization? The data increases each quarter and needs to be processed periodically to provide reports to the business and to validate (i.e. cross-check) the data.
Could you please suggest some ideas? It would be a great help for me.
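As a side note on the memory question: since a Talend job's generated code is Java, here is a minimal, hypothetical sketch (not Talend API; all names are illustrative) of the difference between the two approaches. Buffering every row, as tBufferOutput does, keeps the whole table in the heap and grows with each quarter's data, while streaming rows in fixed-size batches bounds peak memory by the batch size regardless of table size:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BatchedWriter {
    // Process rows in fixed-size batches instead of buffering them all:
    // at most `batchSize` rows are held in memory at any time.
    static int process(Iterable<String> rows, int batchSize,
                       Consumer<List<String>> writeBatch) {
        List<String> batch = new ArrayList<>(batchSize);
        int batches = 0;
        for (String row : rows) {
            batch.add(row);
            if (batch.size() == batchSize) {
                writeBatch.accept(batch);          // e.g. append rows to a sheet
                batch = new ArrayList<>(batchSize); // drop the old batch for GC
                batches++;
            }
        }
        if (!batch.isEmpty()) {                     // flush the final partial batch
            writeBatch.accept(batch);
            batches++;
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < 2500; i++) rows.add("row" + i);
        // 2500 rows in batches of 1000 -> 3 batches (1000, 1000, 500)
        System.out.println(process(rows, 1000, b -> {}));
    }
}
```

In Talend terms, this corresponds to reading the table once per sheet (for example with a filtered query per sheet) rather than holding every row in tBufferOutput and re-reading it 30 times.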
Yes, I agree, but each sheet has different data; I don't think we can use tFlowToIterate here,
because each sheet has different columns, and some metadata is also added in tMap using expressions to populate it.
I need a better way to handle 30 sheets in one Excel file, especially since the data is increasing each quarter.
For this quarter I have to compare 1 million records, and the table has 67 columns (i.e. the data is huge);
it may increase further next quarter.
I always get the error below:
INFO: Extending cell ranges for conditional formats. Use formats from row: 2
INFO: Extending cell range for tables...
INFO: Create new extended DataValidations (last written row: 3943), number of validations: 0
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1048592 bytes for Chunk::new
# An error report file with more information is saved as:
# C:\Users\Manigandan.J\workspace\.repositories\1866045447\master\D365FINOPS\poms\jobs\process\Reconciliation\Product_Reconjobs\match_products_recon_optimization_0.1\src\main\ext-resources\hs_err_pid36152.log
INFO: Extending cell ranges for conditional formats. Use formats from row: 2
INFO: Extending cell range for tables...
INFO: Create new extended DataValidations (last written row: 9694), number of validations: 0
#
# Compiler replay data is saved as:
# C:\Users\Manigandan.J\workspace\.repositories\1866045447\master\D365FINOPS\poms\jobs\process\Reconciliation\Product_Reconjobs\match_products_recon_optimization_0.1\src\main\ext-resources\replay_pid36152.log
@Maxime GRAZZINI, sorry for the delay in replying; I was on vacation. The error shows there is not enough memory available for the job to continue execution. Try the following and see if the same error still occurs.
1 Make sure you have enough memory available for processing the large volume of data.
2 Split the job into several child jobs, each writing some of the sheets; use tRunJob to call the child jobs and check the 'Use an independent process to run subjob' box.
3 Use specific JVM arguments in each child job, as you did before.
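As an illustration of point 3, heap-size JVM arguments can be set per child job (Run tab > Advanced settings). The values below are only an example and should be sized to the machine's available RAM:

```text
-Xms1024m
-Xmx4096m
```

With 'Use an independent process to run subjob' checked, each child job gets its own JVM heap, so the memory for writing one group of sheets is released when that child job finishes.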
Regards
Shong