Qlik Community


Qlik Sense App Development

Discussion board where members can learn more about Qlik Sense App Development and Usage.

Partner

Failed to open file in write mode for file - QS Server 2.0.6.0

I'm getting an intermittent problem during an incremental load.

Basically, I load my new and updated records from the database and concatenate them with the historical records stored in a QVD file.

The problem happens when I store the updated set of records back into the previously loaded historical QVD - this is pretty much the standard incremental load process.
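For anyone unfamiliar with the pattern, here is a minimal sketch of the standard incremental load I'm describing. The table, field, and path names (MyTable, ID, ModifiedDate, LastExecTime, lib://DataFiles/History.qvd) are illustrative placeholders, not my actual script:

```qlik
// 1. Load new and updated records from the database.
//    LastExecTime is assumed to hold the timestamp of the previous run.
Incremental:
LOAD ID, Value, ModifiedDate;
SQL SELECT ID, Value, ModifiedDate
FROM MyTable
WHERE ModifiedDate >= '$(LastExecTime)';

// 2. Concatenate the historical records that are not in the new set.
Concatenate (Incremental)
LOAD ID, Value, ModifiedDate
FROM [lib://DataFiles/History.qvd] (qvd)
WHERE NOT Exists(ID);

// 3. Store the combined table back over the historical QVD.
//    This STORE is where the "write mode" error occurs for me.
STORE Incremental INTO [lib://DataFiles/History.qvd] (qvd);
DROP TABLE Incremental;
```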

So I get this error:

"Failed to open file in write mode for file"

Again, this is an intermittent problem and I'm not able to reproduce it.

1 - Is there anyone else having this issue?

2 - Is there a way to avoid this problem?

I'm thinking this may be a Qlik Sense bug.

Thank you in advance,

Mark Costa

33 Replies
Partner

Is another resource hitting the QVDs you are trying to save? It seems like you are trying to store the QVD at the same moment someone is loading it into an App.

You may want to enable audit logging and verify the activity on the server during the issue times.

Contributor III

Hi Mark,

I'm having the same problem. I'm trying to reload an app that saves a table to a QVD, but it fails with the error message: Failed to open file in write mode for file. I tried to delete the QVD file in Windows Explorer, but it said: The action can't be completed because the file is open in Qlik Sense Engine Service.

There is one app that loads from this QVD, but I'm the only person with access to it, and it was not reloading when the error occurred.

The error isn't intermittent, so it's a major obstacle.

Can anyone help please?

Contributor III

Hi Mark,

I encounter this intermittent error too.

I load data and store it as a QVD in step 1, before dropping the table.

Then I load the QVD into the app in step 2.

It seems like Qlik is not closing the QVDs after storing them in time for the load step.

Can someone advise whether my hypothesis is correct?

Would it help if I introduced a time buffer, or can we set a condition to load the QVDs only after verifying that they are closed?

Thanks!

Creator

I am having the same issue in QV 12. I have a staged load, so no other QVW is using the QVD at this time. I am doing a simple reload/rewrite of the QVD that will be read in the next stage.

When I try to delete the QVD, I get a message that I "require permission from the computer's administrator to make changes to this file." I am an admin on the server, though.

Partner

Got the same issue in QS 2.2.4.

In my case it looks like a system resources/threads problem.

I am running several processes at the same time (saving different QVDs for different clients), and they work well about 70% of the time.

But this problem started appearing after the number of parallel processes reached 5.

4 tasks completed, 1 failed. According to the logs, the STORE commands occurred within 1 second of each other...

I'm trying to find out whether this is a system limitation or whether it can be configured.

Running the tasks separately works 100% of the time...

VK

Partner

FYI:

I've got a reply from support that it's a "known" problem. The QLIK-58841 bug is scheduled to be fixed in the 3.1 release.

Regards,

Vlad

Partner

Thank you Vlad.

Do you have the full description of the QLIK-58841 bug?
Is there any official workaround?

Regards,

Mark Costa

Partner

Mark,

No, unfortunately I do not.

Here is the note from my support case:

___________________________________________________________________

 

Keith Harris (QlikTech)

 

I misposted. It is identified as fixed in 3.1.1, which typically ships in the fall. There should be a release or two in between addressing issues found in 3.0.

 

Tuesday, July 12, 2016 2:03 PM

 

Keith Harris (QlikTech)

 

When I reload an app with the following STORE INTO script (using a QVD file):

"""

T1:

Load

Date(RecNo()+39000) as Date,

RecNo() as Value1,

RecNo() + 1 as Value2,

RecNo() + 2 as Value3

AutoGenerate 1000;

store T1 into lib://MyAppConn/resultQVD.qvd (QVD);

drop table T1;

"""

I get the error message back:

"Script Error. Failed to open file in write mode for file qlibitem://appcontent/2E27B053-6F5C-4B0D-A481-CD890A55A65E/resultQVD.qvd"

The same script works for QVX files. See attached test (07_StoreIntoConnection) and script log.

___________________________________________________________________

They did not provide a full scenario, but it looks very similar to my case.

The workaround for me was to reduce the number of simultaneously executed tasks (down to 2).

Regards,

Vlad

Partner

Thank you Vlad.

I have been searching for this bug and contacting Qlik, but I have not been able to find anything related to it so far.

Anyway, I have good news for us. I think I found a workaround.

First, let me explain my theory on what is going on:

The Qlik Sense task is terminated before the STORE command finishes writing the QVD - the STORE keeps running after the task has already ended, which leaves the file locked. In other words, the STORE command does not hold the task open until the write completes.

What I did was add this holding time in the load script with the following loop, right after the STORE command:

DO
    SLEEP 5000;
    LET _fwMessage = QvdNoOfRecords('lib://My Library\myqvdfile.qvd');
    TRACE $(_fwMessage);
LOOP WHILE (LEN('$(_fwMessage)') = 0)

QvdNoOfRecords returns NULL while the QVD file is still open and being written with the data from your load script. Once the file is ready, the code proceeds as normal and the task terminates.

So far I have tested this more than 2,000 times, loading more than 2,000 QVD files and about 1 TB of data, with 10 tasks running at the same time, without any error.

I hope this helps. If you test this and still get errors, please let us know.

Regards,

Mark Costa