chefreporter
Contributor II

Script log cannot be written

Good day everybody,

I have an error message in the Management Console that neither I nor our partner can make sense of.

I have around 200 jobs running every night. 2 of these jobs do not generate a script log.

All QVWs are set to write a log with a timestamp.
The task log then says:

(09/01/2020 07:09:31 AM) Information: Writing documentLog to D:\Qlikview\Publisher\1\Log\20200901\032233 - Create2_Test_PA\DocumentLog.txt

(09/01/2020 7:09:31 AM) Warning: Copy Document Log failed. Exception: Access to the path 'D:\Qlikview\Source\50_Create\Create2_Test_PA.qvw.2020_09_01_05_09_31.log' is denied.

(09/01/2020 07:09:31) Information: Reload finished successfully

 

Another job from the same directory delivers:

(09/01/2020 02:57:52) Information: Writing documentLog to D:\Qlikview\Publisher\1\Log\20200901\023330 - Create2_BEW_FAL_PAT\DocumentLog.txt

(09/01/2020 02:57:52) Information: Reload finished successfully

In the management console under Task / Status I see a yellow warning triangle in front of the job.
The job itself provides correct data.
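
For reference, a minimal check of whether the executing account can open that copy target for writing might look like this (hypothetical Python sketch; only the path is taken from the warning above):

# Hypothetical write-access probe for the copy target named in the warning.
# Note: append mode creates the file if it does not exist yet.
from pathlib import Path

target = Path(r"D:\Qlikview\Source\50_Create\Create2_Test_PA.qvw.2020_09_01_05_09_31.log")

try:
    with target.open("a", encoding="utf-8"):
        pass                      # only the open (ACL/sharing) check matters
    print(f"write access OK: {target}")
except PermissionError as exc:
    print(f"access denied, matches the QDS warning: {exc}")
except OSError as exc:
    print(f"other I/O problem (lock, missing folder, ...): {exc}")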

Where is the problem? Does anyone have a good idea?

Best regards

Claus Gittner

 

7 Replies
marcus_sommer

If it hasn't been done yet, I suggest restarting the services and also the machine, just to rule out any odd issues that aren't a permanent problem.

It may also be worth looking at any recent updates to your security tools (many years ago I had a case in which an antivirus update led to wrong error messages in the QMC even though the tasks themselves ran successfully).

Regarding the copy error, there might be some other accidental access to the file or folder by the OS or another tool that blocks the required access rights. Finding such things in the various log files can be quite time-consuming, so it may be easier to change the time frame in which the task runs (at least as a test).
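
As a rough illustration of finding such an access, a small helper (hypothetical sketch, assuming the psutil package is installed and the script runs with administrator rights) could list which processes currently hold a handle on the log file:

# Hypothetical helper: list processes that currently hold an open handle on a
# given file, e.g. the log path from the warning in the first post.
import psutil

TARGET = r"D:\Qlikview\Source\50_Create\Create2_Test_PA.qvw.2020_09_01_05_09_31.log".lower()

for proc in psutil.process_iter(["pid", "name"]):
    try:
        for handle in proc.open_files():
            if handle.path.lower() == TARGET:
                print(proc.info["pid"], proc.info["name"], handle.path)
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        continue                  # some system processes cannot be inspected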

- Marcus

chefreporter
Contributor II
Author

I've been looking for the problem for a long time.

The server has since been restarted several times.

Several logs are written without any problems in the same period.

The error also occurs if the job in question is started manually and runs alone on the server.

Since the job processes a lot of script commands, the log could become too large. Is there a limit?



Claus Gittner


Brett_Bleess
Former Employee

There are no limits on that. My suspicion is that a temp file is being held open or locked by another process, so it cannot be written to; that is likely the actual underlying issue here. You did not specify which version you are running, though, and there have been some changes across releases, so I am hesitant to say much more without that information.

That said, you may have hit on something with the size idea: if you are running low on disk space on the root partition and the QDS App Data folder points there, that could also cause the issue, so check that you have free space. Keep in mind as well that, given the number of tasks you mentioned and depending on the resources available on the server, the server may have to use the disk page file at certain times, which could also trigger the issue.

Do the logs get written if you rerun the task at a different time? I did notice that you have App Data on the D partition, but temp files will likely still be written to the C partition, so be sure to check the free space on C as well. Sorry I do not have anything better; the full task log would be very helpful.
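
To quickly rule out the free-space idea, something along these lines would do (hypothetical Python sketch; the drive letters are the ones mentioned in this thread):

# Hypothetical free-space check for the C and D partitions.
import shutil

for drive in ("C:\\", "D:\\"):
    total, used, free = shutil.disk_usage(drive)
    print(f"{drive}  free {free / 1024**3:.1f} GiB of {total / 1024**3:.1f} GiB")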

Regards,
Brett

To help users find verified answers, please do not forget to use the "Accept as Solution" button on any post(s) that helped you resolve your problem or question.
I now work a compressed schedule, Tuesday, Wednesday and Thursday, so those will be the days I will reply to any follow-up posts.
chefreporter
Contributor II
Author

Our versions and storage space:
Server: 12.40.20300.0
Client: 12.40.20300.0, April 2019 SR3
Storage space: C = 27.5 GB, D = 303 GB

Then I let one of the QVWs run with a reduced scope.
In this QVW 115 QVS scripts are normally processed one after the other.
For the test, I only activated the first 10 QVS.
4 other big jobs are running at the same time.
The log file was created as expected.
I will now gradually increase the number of QVS and see up to what size the log is created.
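
To compare the test runs, a short listing of the DocumentLog sizes per run might help (hypothetical Python sketch; the folder layout is taken from the task log above):

# Hypothetical helper: list the size of every DocumentLog.txt below the
# Publisher log root, to see how large the logs of the test runs become.
from pathlib import Path

log_root = Path(r"D:\Qlikview\Publisher\1\Log")

for doc_log in sorted(log_root.rglob("DocumentLog.txt")):
    size_mb = doc_log.stat().st_size / 1024 ** 2
    print(f"{size_mb:8.2f} MB  {doc_log}")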

chefreporter
Contributor II
Author

Good Morning,

Several attempts later, I'm none the wiser.
I have run my QVW with different numbers of QVS scripts.
It is neither a timing problem (there are scripts that run longer) nor a size problem (two other logs are significantly larger).
I think I'll let our partner company report this to Qlik.

Thanks to all

Claus Gittner

marcus_sommer

It's not a direct answer to your issue, but you mentioned that these tasks take some time and produce rather large log files. These are indicators that the bigger tasks could be split into several smaller ones. Such smaller tasks could be run more in parallel and/or in different time frames and/or partially with incremental approaches, and they are easier to develop and maintain. Especially when issues come up, troubleshooting becomes easier.

Because of your release, the following error should not be related to your issue, but maybe it's worth investigating the matter from this point of view, too:

https://community.qlik.com/t5/Qlik-Support-Updates-Blog/Reload-from-QVDs-failing-in-Qlik-Sense-April... 

If none of this helps, you will probably need to dive deeper into the OS/Qlik logs and maybe even use tools like Process Monitor to find the locking process.
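
If Process Monitor cannot be installed on the server, a crude alternative (illustrative Python sketch; the path is from the task log and has to be adjusted per run) is to poll the file around the scheduled run and note when opening it for writing starts to fail:

# Hypothetical polling probe: around the scheduled run time, record when the
# log file can no longer be opened for writing (i.e. another process locks it).
import time
from datetime import datetime

PATH = r"D:\Qlikview\Source\50_Create\Create2_Test_PA.qvw.2020_09_01_05_09_31.log"

while True:
    try:
        with open(PATH, "a"):
            state = "writable"
    except OSError as exc:
        state = f"blocked: {exc}"
    print(f"{datetime.now():%H:%M:%S}  {state}")
    time.sleep(1)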

- Marcus

chefreporter
Contributor II
Author

The problem has now been identified and solved.
I used Process Monitor to watch the suspicious jobs while they were running.
At the time the logs were supposed to be written, the virus scanner (GData) suddenly became active.
This was not the case with the other jobs.
With the virus scanner deactivated, the logs were written.
Finally, the logs were sent to GData for assessment.
They confirmed that the logs had been flagged as a virus.
Thanks again to the swarm intelligence for the help.

Claus Gittner