Anonymous
Not applicable

tMysqlBulkExec

Hi,
I have these huge positional files (21.8 GB) that I need to load into a MySQL database. I'm using TOSDI for this, and I found out about tMysqlBulkExec thanks to advice from this forum. I was able to load a small test file into my table, but only the first column was loaded; the rest was 0's and nulls. I discovered that the positional file I created in the repository was not being used. Instead, the properties in the Advanced tab were used, so the job was separating the fields by ";". Just to complete the test, I converted the positional file into a delimited file, executed tMysqlBulkExec again, and it worked like a charm.
I already checked the documentation, but I cannot find where it says that only delimited files can be used with tMysqlBulkExec. My question is: do I have to add a step to my job that converts the positional file into a delimited one in order to use the bulk load?
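To illustrate what I mean (made-up sample data), a positional record has fixed column widths and no separator character, so splitting it on ";" leaves the whole line in the first field:

    positional:  0001JOHN      SMITH     19750312
    delimited:   0001;JOHN;SMITH;19750312

Only the second form loads correctly with the current Advanced tab settings.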
Thanks,
jjsai
7 Replies
Anonymous
Not applicable
Author

Hi
You'd better use tMysqlOutputBulkExec instead of tMysqlBulkExec.
Regards,
Pedro
Anonymous
Not applicable
Author

Thank you Pedro.
I did as you suggested and it seems to be working. What I did was create a job with two components: tFileInputPositional and tMysqlOutputBulkExec. Is this what you meant, or is there another way to accomplish this?
How can I change the commit for this job? I mean, how can I set it to commit every 1 million rows?
It just loaded the first file, inserting 9,811,613 rows in 1 hr 22 min on a MacBook Pro i7 with 4 GB of RAM.
Thanks,
jjsai
Anonymous
Not applicable
Author

I forgot to mention in my previous post that I checked the documentation, in particular TOS Components 3.x, because it was the only version I was able to find. In that documentation there is an explanation for a field named "Commit every" in the Basic settings:
Commit every
Number of rows to be completed before committing batches of rows together into the DB. This option ensures transaction quality (but not rollback) and above all better performance on executions.
Obviously this document is outdated but there was a field for the commit...
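Just so it's clear what that option used to mean: with a regular row-by-row output, "commit every N" boils down to batching inserts and committing once per N rows. A rough JDBC sketch of the idea (table, columns, connection details and the file-reading helper are made up for illustration):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class CommitEverySketch {
    private static final int COMMIT_EVERY = 1_000_000; // commit once per million rows

    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/testdb", "user", "pass")) {
            con.setAutoCommit(false); // we decide when to commit
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO customers (id, first_name, last_name) VALUES (?, ?, ?)")) {
                long count = 0;
                for (String[] row : readRowsSomehow()) { // stand-in for the file reader
                    ps.setString(1, row[0]);
                    ps.setString(2, row[1]);
                    ps.setString(3, row[2]);
                    ps.addBatch();
                    if (++count % COMMIT_EVERY == 0) {
                        ps.executeBatch();
                        con.commit(); // this is the "commit every" behaviour
                    }
                }
                ps.executeBatch();
                con.commit(); // flush the remainder
            }
        }
    }

    // Hypothetical placeholder for whatever actually reads the positional file.
    private static Iterable<String[]> readRowsSomehow() {
        return java.util.List.of();
    }
}

A LOAD DATA based bulk component doesn't insert row by row like this, which is presumably why the option disappeared.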
Thanks,
Anonymous
Not applicable
Author

Hi
Yes. The job should be like this.
tFileInputPositional --main-->tMysqlOutputBulkExec
The 'commit' option isn't available any more, but you might try 'Custom the flush buffer size' to reduce memory usage.
Regards,
Pedro
Anonymous
Not applicable
Author

Thank you so much for the reply! Do you mind elaborating on 'Custom the flush buffer size'? I mean just one example, to get an idea of how that would "replace" the commit feature.
Thanks,
jjsai
Anonymous
Not applicable
Author

Hi
The component tMysqlOutputBulkExec executes the command 'LOAD DATA LOCAL INFILE' for bulk loading.
There is currently no option on this component that can 'replace' the commit feature.
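If it helps to picture it: the whole load is a single statement, so there is no point at which the component could commit every N rows. A rough JDBC sketch of what the component sends (file path, table name and connection details are made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BulkLoadSketch {
    public static void main(String[] args) throws Exception {
        // allowLoadLocalInfile must be enabled on the client side for LOCAL loads
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/testdb?allowLoadLocalInfile=true", "user", "pass");
             Statement st = con.createStatement()) {
            // One statement loads the whole file; MySQL treats it as a single bulk operation.
            st.execute(
                "LOAD DATA LOCAL INFILE '/tmp/out/customers.csv' " +
                "INTO TABLE customers " +
                "FIELDS TERMINATED BY ';' " +
                "LINES TERMINATED BY '\\n'");
        }
    }
}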
You could try increasing the JVM memory arguments for better performance.
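For example, under the Run view's Advanced settings you can enable specific JVM arguments and raise the heap (the values below are only an example):

    -Xms256M
    -Xmx3072M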
Regards,
Pedro
Anonymous
Not applicable
Author

Hi,
I'm not that advanced with these tools yet, but I will research how to change the JVM arguments.
Thanks,
jjsai