Qlik Replicate LOGSTREAM Timeout while waiting to get data from audit file

Pedro_Lopez
Support


In some cases, when Replicate cannot read from the Staging Folder of a LogStream task for whatever reason (a corrupted folder, lack of disk space, etc.), it can be difficult to resume the LogStream task even after the initial issue is solved.

You might see errors like the following when trying to resume from a timestamp:

[UTILITIES ]E: Failed to write to audit file  <audit_folder directory>
[UTILITIES ]E: Timeout while waiting to get data from audit file [1002521] (at_audit_file.c:637)

[UTILITIES ]E: Error reading audit batch [1002509] (at_audit_file.c:679)

 

Environment

  • Qlik Replicate 2021.11 (other versions may also be affected)
  • Linux (RHEL) / Windows

 

Resolution

 

  1. Stop all LogStream and replication tasks.
  2. Kill the Replicate sessions and processes, since the audit file is locked by the Replicate process.
  3. Rename the audit folder containing the problematic audit file (the folder configured as the "Staging Folder" under the endpoint settings in the Replicate UI); a scripted sketch follows this list.
  4. Resume the LogStream task from a timestamp a few hours before the initial error, then resume the replication tasks from the same timestamp.
  5. If this does not resolve the issue, a reload of the task will be needed.
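
The rename in step 3 can be scripted. The lines below are a minimal sketch in Python, assuming an example staging folder path; substitute the "Staging Folder" value from your own Log Stream endpoint settings, and run it only after all tasks are stopped and the Replicate processes are killed:

# Minimal sketch: rename the LogStream staging (audit) folder before resuming.
# The path below is an example only; use the "Staging Folder" configured in
# your Log Stream endpoint settings.
import os
import time

staging_folder = r"C:\Replicate\data\logstream\MY_LOGSTREAM"   # example path
backup_folder = staging_folder + "_beforeCorruption_" + time.strftime("%Y%m%d%H%M%S")

# Rename only after all LogStream and replication tasks are stopped and no
# Replicate process still holds a handle on the audit files.
os.rename(staging_folder, backup_folder)
print("Renamed", staging_folder, "->", backup_folder)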

 

Cause 

Usually, the Replicate process (repctl) is holding a lock on the audit file that was being written or read when the issue occurred.
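
Before killing processes (step 2 above), it can help to confirm which process still holds an audit file open. The following is a minimal sketch using the third-party psutil package and an example path fragment (neither is part of Replicate):

# Minimal sketch: list processes that still have an audit file open.
# Requires the third-party psutil package; the path fragment is an example.
import psutil

audit_path_fragment = "audit_service"   # example: part of the staging folder path

for proc in psutil.process_iter(["pid", "name"]):
    try:
        for handle in proc.open_files():
            if audit_path_fragment in handle.path:
                print(proc.info["pid"], proc.info["name"], handle.path)
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        # Some processes cannot be inspected without elevated privileges.
        continue

On Windows, the same information can be obtained with Sysinternals Process Explorer or the handle utility; on Linux, lsof can be run against the staging folder.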

 

Comments
joseph_jbh
Contributor III

Hey @Pedro_Lopez, thanks for the info!
When I tried to follow the steps, the task was smart enough to look into the renamed folder:

00021812: 2022-09-20T14:13:58 [UTILITIES ]T: open audit file K:\Replicate\logstream\DCS_OUTBOUND\lspDCS_OUTBOUND\LOG_STREAM\audit_service\20210205123530927654_beforeCorruption\7012 for write (at_audit_writer.c:506)
00021812: 2022-09-20T14:13:58 [UTILITIES ]T: Reading audit file 'K:\Replicate\logstream\DCS_OUTBOUND\lspDCS_OUTBOUND\LOG_STREAM\audit_service\20210205123530927654_beforeCorruption\7012' with header version '1' (at_audit_file.c:399)

That's after a resume-by-timestamp. Any thoughts?

Sonja_Bauernfeind
Digital Support

Hello @joseph_jbh 

Have you attempted the reload of the task (the last step, if the resume does not work), rather than only resuming by timestamp?

All the best,
Sonja 

joseph_jbh
Contributor III

Hi Sonja - thanks for replying. I'm sure a reload would work, even if I have to clean out the log_stream folder... but I'm following Pedro's tip as a way to avoid that. Some of our log stream parents supply nearly 75 child tasks, which would need to be reloaded too.

Perchance, have you guys seen this symptom on non-HA deployments of Replicate? Ours is an HA deployment using a Windows failover cluster and shared storage. I'm curious if this is contributing to the problem.

Sonja_Bauernfeind
Digital Support

Hello Joseph,

At this point, I would recommend sending that query over to our Qlik Replicate forum directly, as it would require additional investigation.

All the best,
Sonja 

joseph_jbh
Contributor III

Understood, thanks Sonja.

Version history
Last update:
2022-02-01 07:38 AM
Updated by: