Hi!
For a customer demo I have to copy a huge table from a MySQL RDS instance in AWS into Timescale Cloud (using a PostgreSQL target endpoint). To do this I have created a Qlik Replicate task which also performs some data transformations and some data filtering.
I have copied the first 1 million entries successfully in about 17 seconds.
The source table contains about 8,400 million rows, which currently take about 1800 GB. Thanks to the more efficient target storage and to the filtering, I get a pretty high compression rate, so I should have enough space in the target, but the whole pipeline might take the whole weekend to run.
Any hints on which settings in Qlik Replicate I should change in order to run the pipeline smoothly?
Thanks,
Bernardo Di Chiara
You can try enabling parallel load by creating segmentation. Check this documentation page for details. I have successfully improved the load from an Oracle endpoint by 5x by enabling parallel load.
However, your source, MySQL on RDS, is not explicitly mentioned among the supported sources.
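To make the suggestion above concrete: parallel load with range segmentation needs explicit segment boundaries, which you enter in the task's Parallel Load settings. A minimal sketch of how you might compute even boundaries over a numeric primary key, assuming a contiguous id column (the function name and the 8.4 billion example range are illustrative, not from Qlik's tooling):

```python
# Hypothetical helper: split a numeric primary-key range into N
# contiguous segments, to be entered as data ranges in Qlik
# Replicate's Parallel Load tab for the table.
def segment_boundaries(min_id: int, max_id: int, segments: int) -> list:
    """Return inclusive (low, high) bounds covering [min_id, max_id]."""
    total = max_id - min_id + 1
    size = total // segments
    bounds = []
    low = min_id
    for i in range(segments):
        # Last segment absorbs any remainder so the full range is covered.
        high = max_id if i == segments - 1 else low + size - 1
        bounds.append((low, high))
        low = high + 1
    return bounds

# Example: an 8.4-billion-row id space split into 8 parallel segments.
for low, high in segment_boundaries(1, 8_400_000_000, 8):
    print(low, high)
```

In practice the segments load fastest when the boundary column is the primary key or an indexed column, so each segment's range scan stays cheap on the MySQL source.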
Thanks Prabodh!