DileepK
Contributor

Attunity production CDC tasks failed

All CDC tasks failed on the Attunity production server dca-app-1660, which is part of a cluster. We manually failed over to dca-app-1659, and after the failover the tasks resumed. We need help identifying what went wrong on dca-app-1660. One of the errors is shown below for reference; we will upload the log file with more information. Please check and update.

Cannot write into filestream file E:\Attunity\Data\tasks\SAP_CRM_To_Hadoop\data_files\2000061\CDC-0-20221121-202434634044_0.csv [720170]  (filestream.c:487)
00018908: 2022-11-23T05:47:57 [TARGET_APPLY    ]E:  Failed to flush file 'E:\Attunity\Data\tasks\SAP_CRM_To_Hadoop\data_files\2000061\CDC-0-20221121-202434634044_0.csv'. [720170]  (hadoop_utils.c:3779) 

3 Replies
Dana_Baldwin
Support

Hi @DileepK 

I see you have also opened support case# 61252, please follow up with us there.

Thanks,

Dana

Heinvandenheuvel
Specialist III

Please follow up through your support case, as @Dana_Baldwin suggests.

In the meantime though, for the benefit of others running into this topic, can you explain how drive E: is configured? Is it accessible from all cluster members? Is it managed as a cluster resource that was available to dca-app-1660 at first and, after the failover, became available to dca-app-1659?

Did it ever work, or is this a first test (in production)?

Can you 'see' (in Explorer) the directory E:\Attunity\Data\tasks\SAP_CRM_To_Hadoop\data_files from dca-app-1659?

It seems most likely that the Replicate data drive E: is improperly defined.
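That check can be scripted so it is easy to repeat on each cluster node. The sketch below is a hypothetical helper (not part of Replicate): point it at the task's data_files directory on whichever node is active. A False result there would match the "Cannot write into filestream file" [720170] errors above.

```python
import os
import tempfile

def data_dir_writable(path):
    """Return True if `path` exists and a file can actually be created in it.

    An actual write probe is used because permission problems or a stale
    cluster-disk handle can make a directory that "exists" still unwritable.
    """
    if not os.path.isdir(path):
        return False
    try:
        fd, probe = tempfile.mkstemp(dir=path)  # create a throwaway file
        os.close(fd)
        os.remove(probe)                        # clean up the probe file
        return True
    except OSError:
        return False
```

Run it on both dca-app-1660 and dca-app-1659 with the same path; if only the node currently owning the cluster disk returns True, the drive is a failover resource rather than shared storage, and tasks can only write on the owning node.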

Good luck, 

Hein.

SwathiPulagam
Support

Hi @DileepK ,

 

Next time, for any planned failover, try to stop the tasks in advance. If tasks are interrupted without a graceful stop, these error messages are expected.
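Stopping every task by hand before a failover is easy to forget, so it can help to script it against the Enterprise Manager REST API. The sketch below only builds the request URL; the /api/v1/servers/{server}/tasks/{task}?action=stop path is an assumption based on the Qlik Enterprise Manager REST API, and the server and task names are taken from this thread, so verify both against the documentation for your installed version before relying on it.

```python
import urllib.parse

def stop_task_url(base_url, server, task):
    """Build an Enterprise Manager-style URL to stop one task.

    NOTE: the endpoint path is an assumption; check your QEM version's
    REST API documentation. Names are percent-encoded so tasks with
    spaces or special characters form a valid URL.
    """
    return "{}/api/v1/servers/{}/tasks/{}?action=stop".format(
        base_url.rstrip("/"),
        urllib.parse.quote(server, safe=""),
        urllib.parse.quote(task, safe=""),
    )
```

Looping over the task list with this helper (plus your API session header) before initiating the cluster failover would give each task a clean stop instead of an interrupted one.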

Thanks,

Swathi