HeleneExner
Contributor III

How to estimate the size of transaction files

Dear community,

A major migration is planned in the source database. When performed on a test database, this migration produced 2.5 TB of redo logs.
Can the transaction file size for Qlik Replicate be calculated from the size of the redo logs? Does the size of the redo logs correspond to the potential size of the transaction files?

Many thanks in advance,

Helene

Labels (2)

13 Replies
Steve_Nguyen
Support

@HeleneExner, what are your source and target? This will help.

Also, for a full load you can break up the batch size, so I'm not sure what the concern is. Replicate is not going to take the whole 2.5 TB and move it in one go; it moves the data one chunk at a time.
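For illustration only (this is not Replicate's actual code, and the size limit is an assumed parameter): moving a large volume in bounded chunks keeps memory use flat no matter how big the total is, which is the idea behind "a chunk at a time".

```python
from typing import Iterable, Iterator, List

def in_chunks(rows: Iterable[bytes], max_chunk_bytes: int) -> Iterator[List[bytes]]:
    """Yield lists of rows whose combined size stays at or under max_chunk_bytes,
    so a 2.5 TB load is streamed as many small batches, never held all at once."""
    chunk: List[bytes] = []
    size = 0
    for row in rows:
        # Start a new chunk before this row would push us over the limit.
        if chunk and size + len(row) > max_chunk_bytes:
            yield chunk
            chunk, size = [], 0
        chunk.append(row)
        size += len(row)
    if chunk:
        yield chunk
```

With 10-byte rows and a 25-byte limit, each yielded chunk holds two rows, regardless of how many rows the source produces overall.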

Help users find answers! Don't forget to mark a solution that worked for you! If already marked, give it a thumbs up!
HeleneExner
Contributor III
Author

Hi Steven,

many thanks for your reply!

The source and target are Oracle 19c. 

I'm asking because this is what happened in the test environment:
Before the migration, all tasks were stopped. After the migration, when the tasks were resumed, they filled up the disk space on the Qlik server, making the Qlik console unreachable. The same scenario will soon take place in the production environment. I now have the opportunity to increase disk space; I just need to know by how much.

Best regards,

Helene

Steve_Nguyen
Support

@HeleneExner 

You wrote "before the migration all tasks were stopped". What migration is this?

 

A full disk sounds like Replicate read a huge batch and tried to transfer it to the target, but the target was slow, so Replicate wrote to the sorter files, causing the disk to fill up.

 

It is best to work with Support or the Professional Services team on the specifics of the migration you are doing.

 

 

Heinvandenheuvel
Specialist III

>> it filled up disk space on the Qlik Server, making the Qlik console unreachable. 

As @Steve_Nguyen indicates, Replicate does not typically 'suck up' the whole redo log; it reads until it sees a committed transaction on the source and then forwards that 'chunk'. If it can read much faster than it can store on the target, Replicate will accumulate more and more, possibly too much after some time.

Where did the files go, and how much and how quickly? Gigabytes or terabytes? Hours or minutes? On Windows you may want to use a tool like 'TreeSizeFree' to get a quick answer.
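If no TreeSize-style tool is at hand (for example on the Linux Qlik server mentioned later in this thread), a short script can answer "which directory is growing" the same way. This is a generic sketch; the directory names in the docstring are examples, not guaranteed Replicate paths.

```python
import os
from typing import List, Tuple

def dir_size_bytes(root: str) -> int:
    """Recursively sum file sizes under root (what TreeSize-style tools report)."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # file vanished mid-scan; sorter files can be short-lived
    return total

def largest_subdirs(root: str, top: int = 5) -> List[Tuple[int, str]]:
    """Rank immediate subdirectories by size, largest first, to spot the one
    that is filling the disk (e.g. a per-task sorter or log directory)."""
    sizes = []
    for entry in os.scandir(root):
        if entry.is_dir(follow_symlinks=False):
            sizes.append((dir_size_bytes(entry.path), entry.name))
    return sorted(sizes, reverse=True)[:top]
```

Pointing `largest_subdirs` at the Replicate data directory quickly distinguishes "log files from a high LOGLEVEL" from "sorter files from stalled targets".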

Were they perhaps Replicate log files in the log directory, suggesting many errors and a high LOGLEVEL, which may be fine for DEV but is not expected in PROD? Were they in the per-task sorter area? You mention 'tasks', plural. Are they all reading the same source? Have you considered a LOGSTREAM task so they share the source redo reading? Can you resume one task at a time?

- Is the migration done in a single, or just a few, very large transactions? Replicate tasks would have to stage those in the sorter directory until a commit is seen; there is no choice. You may need to redesign the upgrade to commit in smaller increments, or run fewer tasks so that they do not all stage at the same time.

- Is your Replicate DATA directory on the 'C:' drive in PROD (or DEV)? It shouldn't be! Move it.

- Did you implement the "High Disk Space Utilization Threshold" in PROD? You should! Server --> Resource Control --> Disk Space.

- With terabytes of changes to be read and processed, could it be quicker and easier to just reload the targets?
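The warn/stop behaviour of that resource control can also be mirrored for ad-hoc monitoring from outside Replicate. A minimal sketch, assuming the same warn-at-70% / stop-at-80% percentages discussed later in this thread (the function names are mine, not a Replicate API):

```python
import shutil

def disk_utilization_pct(path: str) -> float:
    """Percent of the filesystem containing `path` that is in use."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

def threshold_state(path: str, warn_pct: float = 70.0, stop_pct: float = 80.0) -> str:
    """Classify disk usage the way Replicate's warn/stop thresholds do:
    'stop' at or above stop_pct, 'warn' at or above warn_pct, else 'ok'."""
    pct = disk_utilization_pct(path)
    if pct >= stop_pct:
        return "stop"
    if pct >= warn_pct:
        return "warn"
    return "ok"
```

Running this in a cron job against the Replicate data directory gives an early warning even when the console itself is unreachable.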

Good luck,

Hein

 

 

HeleneExner
Contributor III
Author

Hi Hein,

many thanks for your reply!

The architecture looks like this:

- The source is an Oracle database on a Linux server.
- The Qlik server is on Linux (no access for me), with 500MB of disk space.
- The target is a Kafka streaming platform and partially an Oracle DB.

By migration I mean the following process on the source DB: an application gets a new module, and a lot of existing data is updated so that the new data can be integrated into the application. This creates many updates on the source database. According to the redo logs, that's 2.5 TB.
The data is loaded from the source using several tasks. All tasks read from the same source but write to two different targets: an Oracle DB and the Kafka streaming platform.
Yes, I can resume the tasks one by one, but that would increase the processing time significantly, and I have very limited time for this process.
As for the migration: I have absolutely no control over this process and can't control the transactions. It is a major release and is done by the application owner.
About resource control for Qlik: yes, a warning at 70% and stop the tasks at 80%.
Reloading the targets is easy and safe, but that would take days, and unfortunately I don't have that much time.
For this reason, I wanted to use the size of the redo logs to calculate the disk space needed and possibly increase it.
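The thread never gives an exact redo-to-transaction-file formula, but a pessimistic sizing sketch can be written down. Every parameter here is an assumption to be checked, not Replicate documentation: the fraction of redo that belongs to replicated tables, and a framing-overhead allowance.

```python
def worst_case_staging_gb(redo_gb: float,
                          replicated_fraction: float = 1.0,
                          overhead_factor: float = 1.3) -> float:
    """
    Pessimistic upper bound on staged transaction-file volume: if the targets
    stall completely, every captured change may sit on disk until its
    transaction commits.
    replicated_fraction: share of redo belonging to tables in the tasks (assumed).
    overhead_factor: allowance for Replicate's own record framing (assumed).
    """
    return redo_gb * replicated_fraction * overhead_factor
```

For the 2.5 TB of redo discussed here, `worst_case_staging_gb(2500)` gives 3,250 GB; if several tasks each stage their own copy of the same changes (as with multiple tasks reading one source without LOGSTREAM), the result would need to be multiplied by the task count.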

Many thanks and best regards,

Helene

Steve_Nguyen
Support

@HeleneExner 

Regarding "resource control for Qlik: yes, warning at 70% and stop the tasks at 80%": does the repsrv log show a resource error, or does the task fail on another issue?

This may need further troubleshooting.

 

Is the redo volume really 2.5 TB? Check with your source DBA team: what is the average redo log size, and what is the largest? You can also enable performance and source_capture tracing and see how fast we read.

HeleneExner
Contributor III
Author

Hi Steve_Nguyen,

many thanks for your reply!

Yes, the redo logs really are 2.5 TB. This is a very special and unique situation on the source database. I just want to know whether it is possible to calculate the size of the Qlik transaction files from the size of the redo logs.

Many thanks and best regards,

Helene

 

Steve_Nguyen
Support

From all the information provided, it is best that you work with our Professional Services team to see whether this is the correct migration path.

HeleneExner
Contributor III
Author

Hi Steven,

thank you very much 🙂 I will do this.

Best regards,

Helene