marksouzacosta
Partner - Creator II

Failed to open file in write mode for file - QS Server 2.0.6.0

I'm getting an intermittent problem during an incremental load.

Basically, I load the new and updated records from a database and concatenate them with the historical records stored in a QVD file.

The problem happens when I store the updated set of records into the previously loaded historical QVD - this is pretty much the standard Incremental Load process.
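For context, the load pattern looks roughly like this (the table, field, and path names below are illustrative placeholders, not my actual script):

```
// Sketch of the standard incremental load pattern described above.
// Table, field, and library paths are hypothetical.

// 1. Load new and updated records from the database.
Transactions:
SQL SELECT * FROM Transactions
WHERE ModifiedDate >= '$(vLastReloadTime)';

// 2. Concatenate the historical records already stored in the QVD,
//    skipping keys that were just reloaded from the database.
Concatenate (Transactions)
LOAD * FROM [lib://Data/Transactions.qvd] (qvd)
WHERE NOT Exists(TransactionID);

// 3. Store the combined set back into the same QVD.
//    This STORE is where the "write mode" error occurs.
STORE Transactions INTO [lib://Data/Transactions.qvd] (qvd);
```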

So I get this error:

"Failed to open file in write mode for file"

Again, this is an intermittent problem and I'm not able to reproduce it.

1 - Is there anyone else having this issue?

2 - Is there a way to avoid this problem?

I'm thinking this may be a Qlik Sense bug.

Thank you in advance,

Mark Costa

Read more at Data Voyagers - datavoyagers.net
33 Replies
vlad_komarov
Partner - Specialist III

Mark,

I've posted a reply in my post about a similar problem.

Looks like you are correct. I will keep testing this case and will escalate these results to Qlik support as well.

Thank you for your suggestion!

Regards,

Vlad

Greg_Williams
Employee

A few things to check:

  1. You've verified no conflicting tasks are hitting the QVD at the same time.
  2. The QVD file has closed completely.
  3. Introduced the Exit Script; statement at the end of the script.
  4. Any write interference from another OS process?
  5. Is the file locked by another service?
  6. Do you have appropriate security permissions to the file?
  7. How many records are you updating?
  8. Are you able to load data from the QVD in another app?
  9. Has anyone other than you accessed the Qlik app creating the QVD(s)?
  10. Is the software on the most recent release (is this possible)?

-gw

marksouzacosta
Partner - Creator II
Author

Hi Greg, following my answers:

  1. You've verified no conflicting Tasks hitting the QVD at same time.
    MC: We have an isolated QS server just for the ETL process, and we have one task per QVD. There is no other process loading QVDs or hitting the same QVD at the same time. We also verified whether any tools external to QS were touching the QVD files, such as antivirus, backup, or any kind of mirroring process. Nothing like that was found.
  2. QVD file has closed completely.
    MC: This problem is intermittent, and that is important to keep in mind. Sometimes the QVD files just get locked by the QS Engine Service, so we cannot do any IO operation on the QVD file (move, delete, overwrite, rename, or even load via a load script). I'm not sure about the state of the QVD file at that point, but the only way we found to release the files was to restart the QS Engine Service.
  3. Introduced the Exit Script; statement at end of script.
    MC: I haven't tried adding Exit Script at the end of the script, but I think it is worth trying. In any case, this should not be necessary, as far as I know.
  4. Any write interference from another OS process?
    MC: No. We have one task per QVD file.
  5. Is file locked by another service?
    MC: We haven't found anything doing that, but it is still a possibility.
  6. Do you have appropriate security permissions to file?
    MC: Yes. We run the tasks under admin permissions with full access to the QVD files and folders.
  7. How many records are you updating?
    MC: We have a wide variety of QVD files, from 3 records to 14 billion records. I don't recall this problem happening with small QVD files, but the problems start with QVD files larger than 300 MB (not sure how many records).
  8. Are you able to load data from QVD in another app?
    MC: Yes. When the QVD file is not locked by this problem, we can normally load the QVDs.
  9. Has anyone accessed the qlik app creating the qvd(s) other than you?
    MC: No.
  10. Version of software is using the most recent software release (is this possible)?
    MC: I have tested the load process on 2.0.6.0 and now on 2.0.9.0. We should be upgrading the server to the current release soon.

I submitted a ticket to Qlik months ago regarding this problem, but Qlik was not able to reproduce it. Because of the random nature of the problem, I was almost unable to demonstrate it to Qlik during a support call. In the end, we were not able to identify its source.

Thank you,

Mark Costa

vlad_komarov
Partner - Specialist III

Greg,

Just to add to Mark's comments above:

I am experiencing the same issue on the 2.2.4 release (Re: QVD building scripts are failing in random order). I have not upgraded to 3.0.1 yet, but based on Qlik support's response, this (or a similar) issue will be resolved no earlier than the 3.1.1 release...

Regards,

Vlad

simsondevadoss
Partner - Creator III

Hi All,

We are facing the same issue, "Failed to open file in write mode", while reloading the app on our QlikView server/desktop.

We are using QlikView 12 SR4. Is there any workaround for this error? Kindly help.

Regards,

Simson

vlad_komarov
Partner - Specialist III

Simson,

This is most likely a result of the fact that QV 12+ and QS 2.2+ share the same engine...

Looks like they share its bugs too... 🙂

Based on my research, there are actually two issues causing this problem:

1. The QVD file is not closed after the STORE INTO statement completes in the script.

2. The engine does not handle simultaneous STORE INTO processes well.


For issue N1: mark.costa suggested a solution (see his note from Aug 2, 2016 5:43 PM above).

I've created a subroutine that runs after EVERY QVD STORE call and prevents the script from continuing until the file is actually closed. I've noticed that, a few times in my case, the file was closed 10 (!!) cycles (50 seconds) after the Qlik script had completed the STORE command...
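Roughly, such a subroutine can be sketched like this (the subroutine name, parameters, and paths here are illustrative placeholders, not my exact code):

```
// Illustrative sketch only: wraps STORE and polls until the QVD is readable.
SUB StoreAndWait (pTable, pPath)
  STORE [$(pTable)] INTO [$(pPath)] (qvd);
  Do
    Sleep 5000;                               // wait 5 seconds between checks
    Let _check = QvdNoOfRecords('$(pPath)');  // returns nothing while the file is still locked
  Loop While Len('$(_check)') = 0
END SUB

// Usage:
// CALL StoreAndWait ('Transactions', 'lib://Data/Transactions.qvd');
```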

For issue N2: you have to rearrange your tasks (until the bug is resolved, at least). I was running 10 simultaneous QVD-generating tasks, and 3-4 of them were failing 100% of the time (until I added the fix for N1 above). Now 1-2 tasks fail 30% of the time if I run all 10 tasks simultaneously, and reducing the number of concurrent tasks to 5 brought the failure rate down to almost 0%.


Hope it helps...

Regards,

Vlad

marksouzacosta
Partner - Creator II
Author

Hi Simson,

Try adding the code that I posted earlier in this topic.
Please let us know if that helped you too.

"

....

What I did was to add this holding time in the load script by the following command, right after the STORE command:

DO
SLEEP 5000;

LET _fwMessage = QvdNoOfRecords ('lib://My Library\myqvdfile.qvd');

TRACE $(_fwMessage);

LOOP WHILE (LEN('$(_fwMessage)') = 0)

"

You may have to replace 'lib://My Library\myqvdfile.qvd' with your QVD's full path.

Regards,

Mark Costa

vlad_komarov
Partner - Specialist III

Mark,

I am still seeing some issues even after applying your solution. I've posted the reply with some details ~30 mins ago, but it's "being moderated" right now... 

Your code improved my situation significantly, but it looks like other issues still exist.

Regards,

VK

marksouzacosta
Partner - Creator II
Author

Great, thank you for letting me know. I will uncheck my answer as correct.

Mine is still working 100% of the time, with about 400 tasks running daily.

I will take a look at your answer ASAP.

Regards,

Mark Costa

vlad_komarov
Partner - Specialist III

Thanks, the earlier reply just appeared in the posting above...

My problem #2 is (probably) a result of multiple issues... My QVD generators store files ranging from 400,000 to 30,000,000 records. I've noticed that running all 10 of them together (starting at the same time) frequently causes failures. I was blaming a lack of system resources (since CPU load and memory usage get close to 100% during these loads), but I've noticed that if I start the 5 biggest loads separately (when CPU and memory still reach 100%), the failures are very rare compared with starting all 10 together...

So, I am still researching the problem....

Will keep you updated.

VK