I would suggest reading all the logs (Windows and QV) into a QV app and then searching for the timestamps with errors to see which entries from the Windows and QV logs are related. You may need some rounding adjustments or clustering on the timestamps, since I'm not sure related entries would really have absolutely identical timestamps.
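A minimal load-script sketch of the rounding idea, assuming both logs are already stored as QVDs with a timestamp field called EventTime (all table, file, and field names here are assumptions, not the real log layouts):

```qlikview
// Round both timestamp fields to the nearest minute so entries from the
// two logs can be matched despite small clock differences.
// Round(x, 1/1440) rounds to the nearest 1/1440 of a day, i.e. one minute.
WinEvents:
LOAD
    Timestamp(Round(EventTime, 1/1440)) as MatchTime,
    Source,
    Message as WinMessage
FROM WindowsEvents.qvd (qvd);

Left Join (WinEvents)
LOAD
    Timestamp(Round(EventTime, 1/1440)) as MatchTime,
    Severity,
    Message as QvMessage
FROM QvServerEvents.qvd (qvd);
```

If one minute is too coarse or too fine, change the step (e.g. 1/86400 for a second, or use a class/interval function for real clustering).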
For the QV logs you could read the various log files from the QV server folder; maybe you could also use a QVD store of these logs from the governance dashboard. For the Windows logs you could use QvEventLogConnectorElaborate.exe from the QVX SDK (see QVX SDK Instructions), and perfmon could also be useful (see Processing Windows Perfmon Logs using QlikView).
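Reading the server log folder could look roughly like this - a hedged sketch only, since the actual path, file pattern, delimiter, and field names depend on your installation and logging settings:

```qlikview
// Pick up all QV Server event logs from the log folder with a wildcard.
// Path and field names below are assumptions - check one of your log
// files first and adjust the format spec and labels accordingly.
QvServerLog:
LOAD
    Timestamp#(Timestamp, 'YYYY-MM-DD hh:mm:ss') as EventTime,
    Severity,
    Message,
    FileName() as SourceFile    // keep track of which log file each row came from
FROM [C:\ProgramData\QlikTech\QlikViewServer\Events_*.log]
(txt, utf8, embedded labels, delimiter is '\t', msq);
```

Storing the result into a QVD after each reload would let the app accumulate history beyond the server's log retention.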
Beforehand, make sure that all logging within QV and Windows is enabled with an appropriate log level. It will take some effort to build such an app, but in the long term it might return more than it costs. Maybe someone else has already created such an app?
Furthermore, most of the errors aren't really critical - they will be conflicts when starting/stopping services, finding printers, and the like. Critical issues are most often resource problems, when QV takes too much RAM and CPU so that essential Windows services go down. If the server runs stably, there shouldn't be much reason for concern.
Well, I guess I'll close this thread. I can build an app to check all the QlikView Server logs - I already have an app reading a few of them to display the usage structure of our QlikView implementation.
I cannot read the Windows event logs on the server; I don't have access to those.
So I'm afraid there won't be much I can do about that in the future, just as there isn't now. I'm not sure the QlikView Server would report anything before the IT guys notice a "critical error" of some kind - that would be a rather grave malfunction, I guess, so it would probably be an unforeseeable thing - and unforeseeable things are quite challenging to foresee in a program ;-)