It is finally here: The first public iteration of the Log Analysis app. Built with love by Customer First and Support.
"With great power comes great responsibility."
Before you get started, a few notes from the author(s):
- It is a work in progress. Since it is primarily used by Support Engineers and other technical staff, usability is not the first priority. Don't judge.
- It is not a monitoring app. It will scan through every log file that matches the script criteria, which can be very resource-intensive in a production scenario. The process may also take several hours, depending on how much historical data you load. Make sure you have enough RAM 🙂
- Not optimised, but still very powerful. Feel free to make it faster for your use case.
- Do not trust chart labels; check the underlying expression if unsure. Most of the chart titles make sense, but some won't. This will improve in the future.
- MOD IT! If it doesn't do something you need, build it, then tell us about it! We can add it in.
- Send us your feedback/scenarios!
Environment
Qlik Sense Enterprise on Windows (all modern versions post-Nov 2019)
How to use the app:
- Go to the QMC and download a LogCollector archive, or grab one with the LogCollector tool
- Unzip the archive in a location accessible to your user profile
- Download the attached QVF file
- Import/open it in Qlik Sense
- Go to the "Data Load Editor" and edit the existing "Logs" folder connection so it points to the extracted LogCollector archive path
- If you are using a Qlik Sense server, remember to change the Data Connection name back to the default "Logs". Editing via the Hub appends your username to the data connection name when saved.
- Go to the "Initialize" script section and configure:
- Your desired date range or days to load
- Whether you want the data stored in a QVD
- Which service logs to load (the Repository, Engine, Proxy, and Scheduler services are built in right now; adding other Qlik Sense Enterprise services may cause data load errors)
- LOAD the data!
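To give a feel for the configuration step, the "Initialize" settings described above might look something like this in a Qlik load script. This is an illustrative sketch only; the variable names are hypothetical and not necessarily the ones used in the app:

```qlik
// Hypothetical sketch of an "Initialize"-style section.
// Variable names are illustrative, not the app's actual ones.

SET vDaysToLoad = 7;                   // or define an explicit start/end date range instead
LET vStartDate  = Date(Today() - $(vDaysToLoad));

SET vStoreToQVD = 1;                   // 1 = store loaded log data into QVDs, 0 = skip

// Built-in services only; adding others may cause data load errors
SET vServices   = 'Repository,Engine,Proxy,Scheduler';
```

Check the comments in the app's own "Initialize" section for the real variable names and accepted values before loading.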
My workflow:
- I'm looking for a specific point in time where a problem was registered
- I use the time-based bar charts to find problem areas, get a general sense of workload over time
- I use the same time-based charts to narrow in on the problem timestamp
- I use the different dimensions to zoom in and out of time periods, down to per-call granularity
- I use the Log Details sheets to inspect activity between services and filter until the failure/error is captured
- I create and customise new charts to reveal interesting data points
- Bookmarks for everything!
Notable Sheets & requirements:
- Anything "Thread"-related for analysing Repository Service API call performance, which touches all aspects of the user and governance experience
- Requirement: Repository Trace Performance logs in DEBUG level. Otherwise, some objects may be empty or broken.
- Commands: great for visualizing Repository operations and trends between objects, users, and requests
- Transactions: Repository Service API call performance analysis.
- Requirement: Repository Trace Performance logs in DEBUG level. Otherwise, some objects may be empty or broken.
- Task Transactions: very powerful task scheduling analysis with time-based filters for exclusion.
- Log Details sheets: excellent filtering and searching through massive amounts of logs.
- Repo + Engine data: resource consumption and Thread charts for Repository and Engine services, great for correlating workloads.
*The app is best used in an isolated environment or via Qlik Sense Desktop. It can be very RAM and CPU intensive.
The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
Related Content
Optimizing Performance for Qlik Sense Enterprise - Qlik Community - 1858594