Hello everyone,
I set up a replication pipeline with PostgreSQL CDC, using a test table and a 6-hour schedule. Everything worked fine for the first three runs, during which data changes were detected and QVDs were historized. After that, the PostgreSQL log reported the following error:
Could not send data to client: Connection reset by peer
There are no complex firewalls between Qlik and PostgreSQL.
See the attached file for a Qlik log reporting the error: "Termination signal intercepted"
Has anyone else had the same issue?
Thanks
Marco
Hello @marcocim ,
It appears that a scheduler or custom logic was configured to stop the task, as indicated in line #22:
2025-08-04T04:00:35 [TASK_MANAGER ]I: Task will be stopped on Mon, 04 Aug 2025 04:02:32 GMT (commit time) (replicationtask.c:1954)
Later, in lines #101 and #102, we can see that the task was stopped as scheduled:
2025-08-04T04:02:32 [TASK_MANAGER ]I: Stop task at commit time was requested. The CDC source will be stopped. (replicationtask.c:3447)
2025-08-04T04:02:32 [SOURCE_CAPTURE ]I: Termination signal intercepted (postgres_endpoint_wal_engine.c:639)
The message “Termination signal intercepted” is informational and expected in this context—it confirms the task stopped as planned.
However, the log does not contain the error "Could not send data to client: Connection reset by peer". You may want to verify the PostgreSQL server's configuration, particularly the wal_sender_timeout parameter, which can cause disconnections under certain conditions.
Additionally, consider increasing the logging level in Qlik Replicate (QCDI) to capture more granular details that might help in pinpointing the root cause.
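For reference, the timeout can be inspected and changed from psql roughly like this (a sketch only; the exact value and whether you use ALTER SYSTEM or edit postgresql.conf depend on your environment and permissions):

```sql
-- Check the current value (default is 60s; 0 disables the timeout)
SHOW wal_sender_timeout;

-- Raise it, e.g. to 10 minutes, then reload the configuration
-- (requires superuser privileges)
ALTER SYSTEM SET wal_sender_timeout = '10min';
SELECT pg_reload_conf();
```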
Hope this helps.
John.
Hi John,
I'll forward your suggestions to the PostgreSQL DBA.
Sorry, but I can't find the option to increase the logging level in QCDI.
Thanks for your support.
Marco
Hello @marcocim ,
Thanks for the update.
You can follow the steps below:
1. Locate the Task Name
Go to the Tasks tab.
Click on the ... (ellipsis) button next to the relevant task.
Select View task logs.
2. Reproduce the issue and download the task log files.
Feel free to let us know if you need any additional assistance.
Good luck,
John.
Hi John,
this morning we performed the following steps:
1) We changed the wal_sender_timeout parameter from 60 seconds (the default) to 10 minutes
2) We recreated the replication project, setting the schedule to 1 hour
3) After running prepare on the transfer task, the PostgreSQL DBA verified that the replication slot had been recreated
4) We manually ran the Qlik pipeline, and the QVD was updated with the initial content
5) We modified the contents of the PostgreSQL table and manually re-ran the Qlik pipeline; the QVD was updated without errors. Furthermore, the PostgreSQL DBA verified that the WALs are being processed correctly and that a new one is created with each processing run (according to the DBA, this is the desired behavior on the PostgreSQL side).
6) After 1 hour, the first automatic schedule ran without errors (there was no data change).
7) We are now monitoring the next schedules with data changes and the new logging level (as you indicated)
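For anyone following along, the replication-slot check in step 3 can be done from psql with something like the following (column names per recent PostgreSQL versions; the slot name assigned by Qlik will vary):

```sql
-- List replication slots and confirm the Replicate slot exists and is active
SELECT slot_name, plugin, active, restart_lsn, confirmed_flush_lsn
FROM pg_replication_slots;
```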
I'll keep you updated
Thanks
Marco
Hi @john_wang ,
now we have this situation:
About 20 WALs had accumulated since 2:30 PM; the 3:00 PM and 4:00 PM schedules processed only 3 of them.
I ran two manual runs at 4:45 PM, which processed 1 WAL and the remaining 16, respectively, updating the QVD with the data changes.
Is the number of WALs processed per run configurable, on the Qlik or PostgreSQL side, or is it managed internally?
Thanks
Marco
Hello @marcocim ,
Thanks for the update.
Qlik Replicate uses a logical replication slot to capture changes from PostgreSQL Write-Ahead Logs (WAL). Replicate does not manage the WAL files directly; instead, PostgreSQL handles WALs internally based on the replication slot’s state.
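As an illustration of the slot's role (a sketch, assuming PostgreSQL 10 or later), you can measure how far the slot's confirmed position lags behind the server's current WAL write position; PostgreSQL retains WAL segments until the slot has consumed them:

```sql
-- Bytes of WAL the slot has not yet confirmed; a growing value means
-- the consumer (Replicate) is falling behind or not advancing the slot
SELECT slot_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS lag_bytes
FROM pg_replication_slots;
```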
Hope this helps.
John.
Hi @john_wang ,
unfortunately, at 00:00 UTC the PostgreSQL log reported, for the first time since yesterday at 14:00, the error: "could not send data to client: Connection reset by peer"
On the Qlik side I see no errors; all schedules completed correctly.
The PostgreSQL DBA tells me that since August 6 at 9 PM, Qlik Replicate has been requesting the same WAL (see the attached file).
At 10:47 UTC we made some updates to the data; the last two runs did not detect any data changes (I attached the last two Qlik Replicate log files), and on the PostgreSQL side we do not see the error "could not send data to client: Connection reset by peer".
The final saved task state does not seem to change:
00000488: 2025-08-07T10:27:50 [SORTER ]I: Final saved task state. Stream position 00000835/B5E7D4A8.1512278.00000835/EA1A4470, Source id 8847507, next Target id 73, confirmed Target id 69, last source timestamp 1754498380019748 (sorter.c:772)
00000489: 2025-08-07T09:36:30 [SORTER ]I: Final saved task state. Stream position 00000835/B5E7D4A8.1512278.00000835/EA1A4470, Source id 8847507, next Target id 73, confirmed Target id 69, last source timestamp 1754498380019748 (sorter.c:772)
00000521: 2025-08-07T09:02:08 [SORTER ]I: Final saved task state. Stream position 00000835/B5E7D4A8.1512278.00000835/EA1A4470, Source id 8847507, next Target id 73, confirmed Target id 69, last source timestamp 1754498380019748 (sorter.c:772)
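One way to confirm on the PostgreSQL side that the slot is stuck (a hedged sketch, assuming PostgreSQL 10+) is to check which WAL segment file the slot still requires; if this file name never changes between runs, the slot is not advancing:

```sql
-- The oldest WAL segment the slot still pins; a constant value across
-- runs means Replicate keeps re-reading from the same position
SELECT slot_name, restart_lsn, pg_walfile_name(restart_lsn) AS oldest_required_wal
FROM pg_replication_slots;
```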
I'd appreciate any ideas you might have on how to investigate this.
Thanks
Marco
Hello @marcocim ,
I recommend opening a support ticket and attaching the necessary information there. Our support team will be happy to assist you further.
Please note, do not attach task log files here, as they may contain sensitive information.
Regards,
John.
Hi @john_wang ,
I opened a support ticket ten days ago. We are investigating, but we can't understand why, after a few hours of operation, Qlik Replicate stops processing new WALs and loops on old ones.
Nothing relevant appears in the Qlik logs, even with the logging level set to Trace.
Thank you for your support
Marco