I have two questions.
1. What's the maximum rollover size for the log stream? Ours seems to be 1 GB (1000000000). We use more than that in a day.
2. If I have multiple tasks using the same logstream source, will one task remove files before another task is finished?
This is related to the Log retention period (min) setting:
We have also seen this in similar cases: when multiple tasks use the same CDC changes table, one task may delete rows that have not yet been processed by the second task.
This will also occur if one task is stopped while the other is still running; it has no relation to stopping the DB or SAP components.
If the two tasks do not share the same table set, it is recommended to use a different source endpoint that uses different CDC changes tables.
Hello @fj40wdh ,
Thank you for reaching out to the Qlik community!
I need to bring to your attention two critical conditions regarding log file rollover. Please note that meeting either one of these conditions will trigger the rollover process:
The log retention policy behaves the same way. Meeting either one of these conditions will trigger the retention process:
Delete oldest files when the total size of all staging files exceeds (MB): The maximum size that you want to allocate for the staging folder. If the specified size is reached, Replicate will start deleting files from the oldest to the newest until the total size falls below the upper limit.
The default is 100,000 MB. The minimum size should be at least twice the defined rollover size. The maximum permitted size is 1000 TB.
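To make the deletion behavior concrete, here is a minimal sketch of an oldest-first cleanup loop like the one described above. This is purely illustrative: Replicate performs this internally, and the function and parameter names here are hypothetical, not part of any Replicate API.

```python
import os

def enforce_staging_limit(staging_dir, max_total_bytes):
    """Delete oldest staging files until the total size falls below the limit.

    Illustrative sketch only; mirrors the documented "delete oldest files
    when the total size of all staging files exceeds" behavior.
    """
    files = [os.path.join(staging_dir, name) for name in os.listdir(staging_dir)]
    files = [path for path in files if os.path.isfile(path)]
    # Sort oldest first, by modification time
    files.sort(key=os.path.getmtime)
    total = sum(os.path.getsize(path) for path in files)
    for path in files:
        if total <= max_total_bytes:
            break  # back under the upper limit; stop deleting
        total -= os.path.getsize(path)
        os.remove(path)
```

Note that files are always removed from the oldest forward, so a lagging child task that still needs an old file is exactly the risk described in the latency note below.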
Note :
I'd like to draw your attention to a crucial scenario regarding reading from the source, especially when dealing with child tasks and their respective latencies.
If all child tasks are operating without latency, they are likely reading the latest logs available in the source. However, if even one child task experiences significant latency, particularly close to the retention-policy window, it may run into problems reading from the source because the files it needs may already have been deleted.
It's essential to monitor and address any latency issues promptly to prevent potential disruptions in reading from the source.
Hope the above information is helpful.
Regards,
Sachin B
Hello team,
If our response has been helpful, please consider clicking "Accept as Solution". This will assist other users in easily finding the answer.
Regards,
Sushil Kumar
Thank you for the reply, I had not thought of latency being a problem.
I still cannot find a way to set the retention size above 1 GB; as you can see, the maximum accepted is nine zeros.
We can use more than that in less than an hour (busy times). This means I'm writing over old files every hour.
The documentation states it should be in the terabytes.
Oh... wait, I see it now. The field is in MB, so the maximum is actually 1 000 000 000 000 000 bytes.
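The unit conversion above can be checked with a couple of lines, assuming decimal units (1 MB = 10^6 bytes), which is how the documented "1000 TB" maximum lines up with the nine-zero field value:

```python
# The field accepts up to nine zeros, but its unit is MB, not bytes.
max_mb = 1_000_000_000                      # 10^9 MB, the UI maximum
bytes_total = max_mb * 1_000_000            # 1 MB = 10^6 bytes (decimal units assumed)
tb_total = bytes_total // 1_000_000_000_000 # 1 TB = 10^12 bytes

print(bytes_total)  # 1000000000000000
print(tb_total)     # 1000, matching the documented 1000 TB maximum
```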
Thanks for your help.
Dear All,
I have modified the retention property to delete files after 720 hours (30 days) in a production task. Does this require a task stop and resume for the modified value to take effect?