fj40wdh
Contributor III

Logstream retention

I have two questions.

1. What is the maximum rollover size for the Log Stream? Ours seems to be capped at 1 GB (1,000,000,000), and we use more than that in a day.

2. If I have multiple tasks using the same logstream source, will one task remove files before another task is finished?

 

This is related to the Log retention period (min) setting.

We have also seen in similar cases that when two tasks use the same CDC changes table, one task may delete rows that the second task has not yet processed.

This will also occur if one task is stopped while the other is still running; it has no relation to stopping the DB or SAP components.

If the two tasks do not share the same table set, it is recommended to use a different source endpoint with different CDC changes tables.

2 Solutions

Accepted Solutions
SachinB
Support

Hello @fj40wdh ,

Thank you for reaching out to the Qlik community!

There are two conditions that govern log file rollover; meeting either one will trigger the rollover process:

  • Rollover: Specify when to start writing to a new staging file:
    • Roll over file after (minutes): Number of minutes after which a new staging file should be started. Default is 120 minutes. Maximum permitted time is 10,080 minutes (one week).
    • Roll over files larger than (MB): Size of the file after which a new staging file should be started. Default is 500 MB. Maximum permitted size is 100,000 MB.

The log retention policy behaves the same way; either one of these conditions being met will trigger the retention process:

  • Retention: Specify when the staging file should be deleted (note that active files will not be deleted):
    • Delete staging files after (hours): Select the check box and specify the maximum time before a file is deleted. Default is 48 hours. Maximum permitted time is 10,000 hours.
    • Delete oldest files when the total size of all staging files exceeds (MB): The maximum size that you want to allocate for the staging folder. If the specified size is reached, Replicate will start deleting files from the oldest to the newest until the total size falls below the upper limit.

      Default is 100,000 MB. The minimum size should be at least twice the defined rollover size. Maximum permitted size is 1000 TB.
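To make the interplay of these limits concrete, here is a minimal sketch that validates a proposed staging configuration against the documented constraints above. The function and parameter names are illustrative only, not part of any Replicate API:

```python
# Sanity-check a Log Stream staging configuration against the documented limits.
# Names here are illustrative; Replicate exposes these settings in the endpoint UI.

MAX_ROLLOVER_MINUTES = 10_080      # one week
MAX_ROLLOVER_MB = 100_000
MAX_RETENTION_HOURS = 10_000
MAX_RETENTION_MB = 1_000_000_000   # 1000 TB, because the field is denominated in MB

def validate_staging(rollover_minutes, rollover_mb, retention_hours, retention_mb):
    errors = []
    if not 1 <= rollover_minutes <= MAX_ROLLOVER_MINUTES:
        errors.append("rollover minutes out of range")
    if not 1 <= rollover_mb <= MAX_ROLLOVER_MB:
        errors.append("rollover size out of range")
    if not 1 <= retention_hours <= MAX_RETENTION_HOURS:
        errors.append("retention hours out of range")
    if retention_mb > MAX_RETENTION_MB:
        errors.append("retention size exceeds 1000 TB")
    # Retention must hold at least two rollover-sized files, so an active file
    # and a freshly rolled-over file can coexist without triggering deletion.
    if retention_mb < 2 * rollover_mb:
        errors.append("retention size must be at least twice the rollover size")
    return errors

print(validate_staging(120, 500, 48, 100_000))  # defaults -> []
```

The defaults (120 minutes, 500 MB, 48 hours, 100,000 MB) pass cleanly; shrinking the retention size below twice the rollover size is flagged.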

Note :

I'd like to draw your attention to a crucial scenario when reading from the source, especially when dealing with child tasks and their respective latencies.

If all child tasks are operating without latency, they are reading the latest logs available in the source. However, if even one child task falls significantly behind, particularly as its lag approaches the retention window, it may run into reading problems from the source.

It is essential to monitor and address any latency issues promptly to prevent disruptions in reading from the source.
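One way to act on that advice is a simple threshold check. The sketch below uses hypothetical latency readings and a made-up safety margin; in practice the figures would come from Replicate's monitoring views:

```python
# Flag child tasks whose latency eats too far into the retention window.
# Latency values are hypothetical; Replicate's monitoring would supply real ones.

RETENTION_HOURS = 48
SAFETY_MARGIN = 0.5            # alert once latency passes half the retention window

child_task_latency_hours = {   # hypothetical readings, in hours
    "task_sales": 0.2,
    "task_finance": 30.0,      # badly lagging task
    "task_hr": 1.5,
}

threshold = RETENTION_HOURS * SAFETY_MARGIN
at_risk = [name for name, lag in child_task_latency_hours.items() if lag >= threshold]
print(at_risk)  # -> ['task_finance']
```

A task whose lag exceeds the full retention window would be reading files that are already eligible for deletion, so alerting well before that point is the safer choice.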

Hope the above information is helpful.

Regards,

Sachin B


SushilKumar
Support
Support

Hello team,

 

If our response has been helpful, please consider clicking "Accept as Solution". This will assist other users in easily finding the answer.

 

Regards,

Sushil Kumar


4 Replies


fj40wdh
Contributor III
Author

Thank you for the reply, I had not thought of latency being a problem.

I still cannot find a way to set the retention size above 1 GB; as you can see, the field's maximum is nine zeros.

We can use more than that in less than an hour at busy times, which means I'm overwriting old files every hour.

The documentation states it should be in the terabytes.

(0 - 1,000,000,000)
fj40wdh
Contributor III
Author

Oh... wait, I see it now. The field is in MB, so the maximum is 1,000,000,000 MB, i.e. 1 000 000 000 000 000 bytes.
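The arithmetic behind that realization, for anyone else who trips over the units (plain Python, nothing Replicate-specific):

```python
# The retention-size field is denominated in MB, not bytes.
max_field_mb = 1_000_000_000          # the nine-zero maximum shown in the UI
bytes_total = max_field_mb * 10**6    # convert MB to bytes
print(bytes_total)                    # 1000000000000000 (10^15 bytes)
print(bytes_total / 10**12)           # 1000.0 TB, matching the documented maximum
```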

Thanks for your help.