Qlik Replicate: Outgoing stream is full. Forwarding events to target is postponed

OritA
Support

Last Update: Mar 20, 2023 3:24:58 AM
Updated By: Sonja_Bauernfeind
Created date: Jun 18, 2021 8:04:35 AM

By default, Qlik Replicate keeps the data for replication in memory. However, when the replication data exceeds the available memory, it is kept in swap files on disk. This happens in the following scenarios:

  1. The target endpoint is temporarily unavailable.
  2. A transaction is too large to be kept in memory.
  3. A transaction does not commit for a long time.

In scenarios 2 and 3, Qlik Replicate offloads the transaction changes and stores them on disk, saving them in files with the suffix .tswp under the task's sorter directory. Every transaction is saved in a separate file.

This way, Qlik Replicate can continue reading CDC changes from the source without the need to stop due to lack of memory.
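To check how much transaction data a task is currently swapping, you can look for .tswp files in the task's sorter directory. Below is a minimal monitoring sketch; the installation path and task name (MyTask) are assumptions, so adjust both to your environment:

from pathlib import Path

# Assumed sorter location: <Replicate data folder>/tasks/<task name>/sorter.
# Both the data folder and the task name below are examples -- adjust them.
SORTER_DIR = Path("/opt/attunity/replicate/data/tasks/MyTask/sorter")

# Each offloaded transaction is kept in its own .tswp swap file.
total_mb = 0.0
for swp in sorted(SORTER_DIR.glob("*.tswp")):
    size_mb = swp.stat().st_size / (1024 * 1024)
    total_mb += size_mb
    print(f"{swp.name}: {size_mb:.1f} MB")

print(f"Total swapped transaction data: {total_mb:.1f} MB")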

Once the transaction is committed, Qlik Replicate sends the transaction changes to the target endpoint via the outgoing stream queue. This is done by the Qlik Replicate sorter; once the target endpoint acknowledges that it has received the transaction and committed it successfully to the target database, the sorter removes the transaction from memory or from disk (if it was saved on disk).

The following error may be seen in the Replicate log: 

"Outgoing stream is full. Forwarding events to target is postponed. "

This problem can occur when scenario 2 or 3 above takes place, meaning that the changes made on the source and processed by Qlik Replicate are too rapid for the target. As a result, the task may also show target latency, in which case further investigation and troubleshooting are required to find out where the bottleneck is.

To increase the outgoing stream buffer size (if the problem is caused by scenario 2 above), perform the following steps:

  1. Stop the task.
  2. Export the task and edit it.
  3. In the task JSON, locate the section common_settings.

     Under this section, add the following (a scripted alternative is sketched after these steps):
     "stream_buffers_number" : xx,
     "stream_buffer_size" : yy,

     For example:
     "common_settings": {
         "stream_buffers_number" : 10,
         "stream_buffer_size" : 40,
         "change_table_settings": {

     (Note: in the example above, 10 buffers with a buffer size of 40 MB each will be allocated.)

     The default values are:

     "stream_buffers_number" : 3
     "stream_buffer_size" : 8 (MB)
  4. Save the JSON file and import the task back to your Replicate server.
  5. Resume the task.
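For convenience, step 3 can also be scripted. The following is a minimal sketch that patches the exported task JSON; the file name MyTask.json and the nesting under "cmd.replication_definition" -> "tasks" are assumptions about the export format, so verify them against your own exported file:

import json

TASK_FILE = "MyTask.json"  # hypothetical name of the file exported in step 2

with open(TASK_FILE, encoding="utf-8") as f:
    task = json.load(f)

# Locate common_settings; adjust this path if your export nests it differently.
settings = task["cmd.replication_definition"]["tasks"][0]["common_settings"]
settings["stream_buffers_number"] = 10  # number of outgoing stream buffers
settings["stream_buffer_size"] = 40     # size of each buffer, in MB

with open(TASK_FILE, "w", encoding="utf-8") as f:
    json.dump(task, f, indent=4)

After importing the modified file back (step 4) and resuming the task (step 5), watch whether the "Outgoing stream is full" messages subside.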

NOTES:

1. These stream buffer settings apply to all endpoints.

2. The default is 3 buffers of 4 MB each, which should be sufficient for most situations. Very large LOBs in a busy system are about the only case in which you would want to change these values. The maximum recommended settings are 5 buffers of 1200 MB each.
**Remember that an "outgoing stream is full" condition is usually not indicative of an issue with the buffers themselves, but of the speed at which the target applies changes.**

3. When the outgoing stream is full, this points more to target latency; investigate further before adjusting the stream buffers.

 

Comments
kutay_cilingiroglu
Partner - Contributor II

Hello,

Thanks for the article.

Our target apply performance has improved, but we are still getting the "Outgoing stream is full. Forwarding events to target is postponed" message, and we are facing scenario 2 (24M+ update statements in one transaction).

Can you give us some tips on tuning these two parameters?

Many Thanks.

Dana_Baldwin
Support

Hi @kutay_cilingiroglu 

Please check the health of the target and of the network between Replicate and the target. If your handling latency is not sufficiently reduced, you may need to further increase stream_buffer_size to 100 or 200.

FYI, the memory consumed by these parameters can be determined as follows:

example:
"common_settings": {
"stream_buffers_number": 10,
"stream_buffer_size": 50,

10 x 50 = 500 MB
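In other words, the total dedicated RAM is simply the product of the two parameters, e.g.:

# Total outgoing stream memory = number of buffers x buffer size (MB)
stream_buffers_number = 10
stream_buffer_size = 50  # MB
print(f"{stream_buffers_number * stream_buffer_size} MB")  # prints "500 MB"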

Thanks,

Dana

Barb_Fill21
Support

Just remember that memory usage is continuous. It is dedicated RAM usage and does NOT get freed.

So you may want to use this sparingly, or temporarily on certain tasks (not all tasks), until your issue is resolved.

 

Sincerely,

Barb

 

dhina
Contributor III

Hello Team,

I have some questions on the above problem scenario.

1. You have mentioned that if a transaction does not commit for a long time, it will cause an issue. Which commit are you referring to? Should the commit happen on the source, or is the target unable to commit?

2. How does Qlik Replicate take the changes from the source? Does it take and process only committed changes, or does the tool collect the changes as they are made on the source, save them in the sorter, and move them to the target once a commit happens for that particular data on the source?

3. If the problem is with the target commit, what could the possible reasons be, and how should it be resolved (in general)?

4. If the buffer memory is not added to the task when this issue occurs, would the task stop collecting the incoming changes from the source, since the sorter is running out of memory?

Thanks in advance!

Dhina

Sonja_Bauernfeind
Digital Support

Hello @dhina 

On your first question: The target.

On your second question, this is correct: the tool will collect the changes and save them in the sorter, and once a commit happens for that particular data on the source, it will move them to the target.

Your third question: Sometimes, it is hard to tell why the target is not committing for a long time. Maybe the target is highly loaded with work related to other tasks or even work not related to Qlik Replicate at all (such as someone reading large amounts of data from that database target). Each occurrence would need to be investigated accordingly.

Your fourth question: No, it will not stop collecting. It will keep collecting and, if needed, will start writing those changes to disk.

If you have any more detailed questions regarding this, I recommend posting them directly into our Qlik Replicate forum where our active support agents and userbase can assist you more readily.

All the best,
Sonja 

 

dhina
Contributor III

Hi @Sonja_Bauernfeind ,

Thanks for your response.

A quick follow-up to the second question.

What if a commit does not happen on the source for a long time, but the changes keep happening? The tool will collect and store them in the sorter and wait for the commit, and that will eventually breach the memory, right? What will the tool do at that point, and will the source latency increase?

Thanks,

Dhina

OritA
Support

Hi Dhina,

If a commit does not happen on the source and the transaction becomes big, it will be saved in a temporary swap file (.tswp) in the sorter directory. As indicated above, it will be sent to the target ONLY after the transaction is committed.

Regards,

Orit

 

Datateam
Contributor

Hello Team,

Could increasing just the number of stream buffers be a solution for the latency problem, or do both have to be increased, size and number?
Is this also valid for Log Stream tasks?

Thanks.-

Hugo

Dana_Baldwin
Support

Hi @Datateam 

These settings can help with latency, but mainly when LOB columns are involved in the task. Finding the correct number and size is a trial-and-error process; you can try increasing the size first and test.

Yes, these settings are relevant for Log Stream tasks - at least for the child/replication task that writes to the ultimate target. I don't think they are relevant for the parent/staging task.

Hope this helps.

Dana

Datateam
Contributor
Contributor

It has been very enlightening.

Thank you so much.

Hugo
