Qlik Sense data files (.QVF) may contain sensitive organizational data. Even though Qlik Technical Support operates under a strict Non-Disclosure Agreement, you may prefer to scramble specific data before submitting apps to Qlik Support for investigation.
For the currently available method in Qlik Sense Enterprise for Windows, see Scramble Sensitive Data In Qlik Sense Enterprise for Windows.
Data can be scrambled in three ways:
Before providing the scrambled application to Qlik Support, please check that you can still reproduce the issue within the scrambled application.
The attached automation_scramble_Support.json automation creates a copy of your app before scrambling the data.
To scramble your data using the automation:
If Automations are not enabled, or you do not have the correct subscription model, use the attached Qlik Script and Excel file to scramble the data.
To scramble data using an Excel file and a Qlik script:
Binary;
$(Include=lib://NAMEOFSPACE:DataFiles/scramble.qvs);
CALL Scramble('lib://NAMEOFSPACE:DataFiles/scramble.xlsx');
Exit Script;
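The attached script and Excel file determine which fields get scrambled. The core idea can be sketched as follows (Python, illustrative only; the salt, token format, and function name are hypothetical and not part of the attached Scramble subroutine). Hashing each value with a salt yields consistent, non-reversible replacements, so key fields still join correctly across tables after scrambling:

```python
import hashlib

def scramble(value: str, salt: str = "support-case-1234") -> str:
    """Replace a sensitive value with a stable, non-reversible token.

    The same input always yields the same token, so keys still join
    correctly after scrambling. The salt shown here is a made-up example.
    """
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
    return "X" + digest[:8].upper()

# Same value -> same token; different values -> different tokens.
customers = ["Dunder Mifflin", "Nuka Cola", "Dunder Mifflin"]
scrambled = [scramble(c) for c in customers]
```

Because the mapping is deterministic within one run, row counts, cardinalities, and associations are preserved, which is what lets Support reproduce the issue in the scrambled copy.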
It is possible to generate a scrambled version of any app with the PowerShell script in the attached QlikScramble.ps1 file.
This can be done from any machine that can access the tenant where the app is located. The requirements for these options are:
Steps:
Idea / Feature Request - Data Masking
Alternative method using APIs and the DevHub: Data Scrambling in Qlik Sense
Scramble Sensitive Data In Qlik Sense Enterprise for Windows
To access the examples, visit the community post or download it from here.
This article explains how to extract changes from a Change Store and store them in a QVD by using a load script in Qlik Analytics.
The article also includes
This example will create an analytics app for Vendor Reviews. The idea is that you, as a company, are working with multiple vendors. Once a quarter, you want to review these vendors.
The example is simplified, but it can be extended with additional data for real-world examples or for other “review” use cases like employee reviews, budget reviews, and so on.
The app’s data model is a single table “Vendors” that contains a Vendor ID, Vendor Name, and City:
Vendors:
Load * inline [
"Vendor ID","Vendor Name","City"
1,Dunder Mifflin,Ghent
2,Nuka Cola,Leuven
3,Octan,Brussels
4,Kitchen Table International,Antwerp
];
The Write Table contains two data model fields: Vendor ID and Vendor Name. They are both configured as primary keys to demonstrate how this can work for composite keys.
The Write Table is then extended with three editable columns:
This article explains how to extract changes from a Change Store by using the Qlik Cloud Services connector in Qlik Automate and how to sync them to a database.
The example will use a MySQL database, but it can easily be modified to use other database connectors supported in Qlik Automate, such as MSSQL, Postgres, AWS DynamoDB, AWS Redshift, Google BigQuery, and Snowflake.
The article also includes:
Content
Here is an example of an empty database table for a change store with:
Run the automation manually by clicking the Run button in the automation editor and review that you have records showing in the MySQL table:
There is no incremental version of the Get Change Store History block yet. While this is on our roadmap, the automation from this article can be extended to perform incremental loads by first retrieving the highest updatedAt value from the MySQL table. The steps below explain how the automation can be extended:
SELECT MAX(updatedAt) FROM <your database table>
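The incremental filtering step the extended automation performs can be sketched like this (Python; the change-record shape and field names are illustrative assumptions, not the actual block output):

```python
def incremental_changes(all_changes, last_updated_at):
    """Keep only changes newer than the high-water mark in the database.

    `all_changes` is a list of dicts with an 'updatedAt' key (conceptually,
    the Get Change Store History output); `last_updated_at` is the result
    of SELECT MAX(updatedAt) against the target table. ISO-8601 timestamps
    in the same format and timezone compare correctly as strings.
    """
    return [c for c in all_changes if c["updatedAt"] > last_updated_at]

changes = [
    {"key": "1", "field": "Score", "value": "A", "updatedAt": "2024-05-01T10:00:00Z"},
    {"key": "2", "field": "Score", "value": "B", "updatedAt": "2024-05-02T09:30:00Z"},
]
new_only = incremental_changes(changes, "2024-05-01T12:00:00Z")  # only key "2"
```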
The solution documented in the previous section will execute the Upsert Record block once for each cell with changes in the change store. This may create too much traffic for some use cases. To address this, the automation can be extended to support bulk operations and insert multiple records in a single database operation.
The approach is to transform the output of the List Change Store History block from a nested list of changes into a list of records that contains the changes grouped by primary key, userId, and updatedAt timestamp.
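Conceptually, that transformation folds the flat per-cell change list into one record per (primary key, userId, updatedAt) combination. A sketch in Python, with an illustrative record shape:

```python
from collections import defaultdict

def group_changes(cell_changes):
    """Fold per-cell changes into one record per key/user/timestamp group."""
    records = defaultdict(dict)
    for change in cell_changes:
        group = (change["key"], change["userId"], change["updatedAt"])
        records[group][change["field"]] = change["value"]
    return [
        {"key": k, "userId": u, "updatedAt": t, **fields}
        for (k, u, t), fields in records.items()
    ]

cells = [
    {"key": "1", "userId": "u1", "updatedAt": "t1", "field": "Score", "value": "A"},
    {"key": "1", "userId": "u1", "updatedAt": "t1", "field": "Comment", "value": "Good"},
    {"key": "2", "userId": "u2", "updatedAt": "t2", "field": "Score", "value": "B"},
]
rows = group_changes(cells)  # two database rows instead of three upserts
```

Grouping this way is what makes a single bulk insert possible: each output row carries all the edited columns for one key at one point in time.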
See the attached automation example: Automation Example to Bulk Extract Change Store History to MySQL Incremental.json.
The provided automations will require additional configuration after being imported, such as changing the store, database, and primary key setup.
Automation Example to Extract Change Store History to MySQL Incremental.json
Automation Example to Bulk Extract Change Store History to MySQL Incremental.json
If field names in the change store don't match the database (or another destination), the Replace Field Names In List block can be used to translate the field names from one system to another.
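Outside of Qlik Automate, the same translation is a simple key-mapping pass over each record (the mapping below is an illustrative example, not a fixed schema):

```python
def replace_field_names(records, mapping):
    """Rename keys in each record; keys missing from the mapping pass through."""
    return [{mapping.get(k, k): v for k, v in r.items()} for r in records]

# Hypothetical change-store-to-database field name mapping.
mapping = {"Vendor ID": "vendor_id", "Vendor Name": "vendor_name"}
records = [{"Vendor ID": 1, "Vendor Name": "Octan", "City": "Brussels"}]
renamed = replace_field_names(records, mapping)
```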
To add a more readable parameter to track the user who made changes, the Get User block from the Qlik Cloud Services connector can be used to map User IDs into email addresses or names.
A user's name might not be sufficient as a unique identifier. Instead, combine it with a user ID or user email.
Add a button chart object to the sheet that contains the Write Table, allowing users to start the automation from within the Qlik app. See How to run an automation with custom parameters through the Qlik Sense button for more information.
Environment
This article answers the most frequently asked questions about Qlik Discovery Agent. It is split into five sub-sections:
If you are looking for information on how to get started, check out the Discovery Agent Interactive Walkthrough and our Discovery Agent Documentation.
Discovery Agent is an AI-powered, always-on monitoring capability in Qlik Cloud that automatically detects meaningful changes, anomalies, and trends in your data. It requires no rules, thresholds, or manual setup. Discovery Agent identifies spikes, drops, trend shifts, baseline changes, and data quality issues, then delivers clear, plain-language insights in a prioritized feed.
Traditional BI alerts rely on predefined thresholds or manual logic. Discovery Agent uses the Qlik Analytics Engine and its associative capabilities to evaluate wide combinations of data relationships automatically and proactively surface only those insights that matter. It is context-aware, adaptive, and far more scalable than rules-driven systems.
Yes. Discovery Agent is built directly into Qlik Cloud Analytics and leverages the Qlik Analytics Engine for associative, large scale anomaly detection.
Yes. You can ask questions directly from an insight card, and context from the insight will be transferred into Qlik Answers.
No. Discovery Agent is built exclusively for Qlik Cloud.
No. Monitoring runs outside active dashboards, ensuring no performance impact on live analytics experiences.
Yes. Insight delivery respects user permissions, governed access, and security boundaries.
Discovery Agent analyzes updated app data models using associative evaluation to identify:
No rules or thresholds are required.
Discovery Agent is always on, but processes changes when the application’s data model updates. Insights refresh after reload and appear in the feed once the system evaluates new data. Updates are currently capped at one reload per day.
The feed automatically refreshes upon reload. For most apps, this occurs once per day or whenever new data is introduced.
Yes. You can follow specific apps or insight categories once the Following tab is released. Filtering options are also planned to help tailor results.
Insight Triggers are structured metric definitions that serve as the foundation for generating analytical insights within the application. Each trigger is composed of a measure or expression, such as a calculated field or KPI, along with a set of additional configuration parameters. These parameters include the frequency at which the trigger evaluates data and the type of calculation to be applied (example: sum, average, count).
Together, these elements define the conditions under which an insight is surfaced to the user.
Yes, a date period is required for every trigger you configure.
All insights generated by the system are trend-based, meaning they analyze data over time to identify patterns, changes, or anomalies. This requires a date period to be added to the trigger's associated group. Without a defined time range, the system cannot perform the temporal comparisons necessary to produce meaningful insights.
The Insight Feed refreshes automatically each time the page is reloaded. No manual refresh action is required. The feed itself is regenerated once per day, and this regeneration is triggered by the introduction of new data into the application or applications that contain active triggers. As a result, the feed will always reflect the most recent data available as of the last daily reload cycle.
Filtering functionality is available in the Feed. A Filter button is currently visible at the top of the feed during the preview phase of the application. Users can use this to find specific insights in the feed.
Triggers are stored directly within the application in which they are created. They are not stored externally or in a centralized repository. That means each application manages its own set of triggers independently, and triggers defined in one application will not carry over to or affect another application.
Direct question-and-answer functionality within the feed is available.
The Insight Feed is integrated with Qlik Answers, enabling users to ask natural language questions without leaving the feed interface. Because each card displayed in the feed is tied to a specific application, context from the relevant card will be automatically transferred to Qlik Answers to ensure accurate, contextually appropriate responses.
This behavior is expected and occurs specifically after the first reload following the creation of new triggers.
During this initial reload, the system performs a comprehensive scan of all available historical data, rather than only the most recent data. This allows it to identify any and all qualifying insights across the full dataset. This is a one-time process. All subsequent reloads after this initial one will only evaluate and surface insights based on newly introduced data, so the volume of older insights will not continue to grow with each reload.
Yes. The Insight Feed and its associated trigger functionality require the cross-region inference toggle to be enabled. Please ensure this setting is activated in your environment before attempting to configure triggers or access the feed. If you are unsure how to enable the cross-region inference toggle, contact your system administrator or refer to the relevant configuration documentation.
To remove specific insights from the Insight Feed, you must delete the trigger that is generating those insights. Because the feed is dynamically generated based on active triggers, removing a trigger will prevent its associated insights from appearing in future feed reloads.
Deleting a trigger is a permanent action.
If you wish to stop surfacing certain insights temporarily, consider whether disabling or modifying the trigger may be a more appropriate course of action, depending on your platform's available options.
Section Access is not currently supported for applications used with the Insight Feed.
Any application that has Section Access enabled is incompatible with this feature at this time. As a result, all users who have been granted access to a given application will be able to see the insights generated from that application's triggers, regardless of any Section Access restrictions that may otherwise apply within that application.
This is an important consideration when deciding which applications to configure with triggers, particularly for datasets that contain sensitive or role-restricted data. Support for Section Access may be introduced in a future release.
Below is the minimum data requirement:
Weekly/Monthly/Quarterly/Yearly aggregation
Daily aggregation
Missing dates in the date field may prevent calculations. Creating a master calendar in the load script can resolve this. Qlik is exploring options for date imputation.
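The master calendar pattern fills those gaps by generating every date between the field's minimum and maximum. The same logic sketched in Python rather than Qlik script (illustrative only):

```python
from datetime import date, timedelta

def master_calendar(dates):
    """Return a continuous daily calendar spanning min..max of `dates`."""
    start, end = min(dates), max(dates)
    return [start + timedelta(days=n) for n in range((end - start).days + 1)]

# A date field with gaps: Jan 2 and Jan 4 are missing from the data.
loaded = [date(2024, 1, 1), date(2024, 1, 3), date(2024, 1, 5)]
calendar = master_calendar(loaded)  # continuous Jan 1 through Jan 5
```

In Qlik, this generated calendar is loaded as its own table and associated with the fact table on the date field, so time-based calculations see every day even when no facts exist for it.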
If the Operations Monitor contains data that doesn't look reliable (for instance: some weeks contain no data), the content can be reset and recreated.
Qlik Sense Enterprise on Windows
For more detailed information about the Operations Monitor, and Qlik Sense's other monitor apps, see
Qlik constantly refines its Analytics, over time replacing old charts with new, modernized alternatives. These deprecations are announced well in advance and include instructions on how best to replace these old charts, whether that is to use a new one, several new ones, or to make use of new settings.
As an example, seven visualization bundle charts are scheduled for deprecation in May 2027, most of which have already been removed from the asset panel and are no longer in use in recent applications. See Upcoming deprecation of Qlik Analytics charts in May 2027.
Charts that are up for deprecation are often no longer in use. However, if you happen to still have a very old application and need to replace it, see Visualization bundle > Deprecated charts for more information on what to use instead. The list will be updated whenever a new set of charts is deprecated.
Qlik recommends reviewing your apps for old charts. Depending on your platform (Qlik Cloud or Client-managed), there are different methods you can deploy.
Qlik Cloud administrators should use the Qlik Cloud Monitoring Apps to track the usage. The App Analyzer has a sheet dedicated to where deprecated charts are being used on a tenant in Qlik Cloud. The App Analyzer is based on usage events rather than scanning every app. Use the App Analyzer to find which apps and sheets have charts that need to be updated to newer and more modern alternatives. The easiest way to install and update the Qlik Cloud Monitoring Apps is to use the automation template. If you already have the App Analyzer, just remove the automation and install a new one to get the latest version of the App Analyzer.
For client-managed installations, use the Monitoring apps. The Content Monitor app has a sheet for tracking deprecated charts. At reload, the Content Monitor app scans every app in the installation in order to list all applications and sheets that are using charts that are being deprecated. It also lists the installed extensions and their deprecation status. The Monitoring apps are bundled with the Qlik Analytics installation. The first version with the new sheet will be included in the May 2026 release. If you want to track usage in prior versions, the deprecated chart usage scanner will also be available on the product download page.
When clicking the meatball (ellipses) menu to view more options for an Analytics app, you will find two Share options:
How are they different?
While they are described differently in Apps (Insights and Analytics) | help.qlik.com, there is no functional difference.
It is a conscious design decision to cover certain keywords and let users find a term that matches their intent and confidently trust that the click will take them to the right features.
One single combined button would muddle clarity in the text and iconography, so it was decided to keep them separate.
After calling the Change Variable block in Qlik Automate, the changes made are not shown in the sheet.
In Qlik Cloud, automations make changes inside their own engine session. Those changes are not immediately visible to other sessions (such as in the Qlik Sense app UI) unless you explicitly save the app from that session.
Without a save, distribution to other sessions can take 20 to 40 minutes, or may not reflect at all in active UI sessions. The Save App block is intended to be used in this instance.
Triggering the Save App block (available in the Qlik Cloud connector) signals the engine to do a DoSave, saving the app and reloading it in all open sessions.
Since the Save App block is computationally heavy and limited to one execution per session, place it once at the end of your automation.
This article provides a practical guide for data modelers, BI admins, and analytics engineers.
Qlik Answers is a powerful solution - it lets your business users ask questions in plain language and get accurate, contextual answers directly from your data model. No dashboard navigation, no waiting on report requests. Just ask, and get an answer.
Out of the box, Qlik Answers already understands a remarkable amount of business language. But like any intelligent tool, the quality of its answers depends on the quality of what it has to work with. A data model with ambiguous field names or undocumented metrics might work fine when a developer manually hand-picks the right fields for a chart - but when an AI resolves a natural language question against that same model, those small inconsistencies start to matter.
Here’s a quick example. When someone asks “What’s our discount rate?”, Qlik Answers intelligently maps that question to fields in your semantic layer. If your model exposes Discount_Amount, Discount_Amount_Final_V1, Discount_Amount_Final_Sep24, Discount_Value, Discount1, and Discount2, the engine has to make a choice, and without clear naming, even the smartest AI can’t be sure which one you intended. It’s a signal that the model could use a little attention.
The great news is that with some straightforward preparation, you can unlock the full potential of Qlik Answers and give your users an experience that feels almost magical. This guide walks you through exactly how to get there.
If you’ve configured Business Logic for Insight Advisor before, you might be wondering: “Do I need to do all of that again?”
No - and that’s one of the best things about Qlik Answers. It uses an LLM-based approach that already understands common business language out of the box. Terms like “sales,” “revenue,” “customer,” “average,” and “quarter” just work. Standard aggregations, temporal concepts, and general business vocabulary are understood without any configuration on your part.
Where Qlik Answers benefits from your help is with your organization’s specific context. It doesn’t yet know that Discount1 is actually a coupon discount and Discount2 is a loyalty discount. And it can’t tell which of your three revenue fields is the current authoritative version. That is the context only you can provide.
With a few focused preparation steps, you’ll set Qlik Answers up to deliver accurate, trustworthy results from day one.
Three things worth doing before diving into your data model:
This tends to be the highest-impact change you can make. Ambiguous field names are the most common cause of incorrect field selection.
For every group of similarly named fields, ask: do these represent different business concepts, or are they redundant versions of the same thing?
If they’re different concepts, give them distinct, business-aligned names:
| Before | After |
| --- | --- |
| Discount_Amount, Discount_Value, Discount1, Discount2 | Product Discount, Promotional Discount, Coupon Discount, Loyalty Discount |
If they’re redundant versions, pick the authoritative one, create a master measure if the calculation is complex, and hide the rest using Business Logic visibility controls.
Naming principles:
Every visible field is a candidate answer to a user’s question, so fewer irrelevant fields means fewer wrong answers. A streamlined model is also faster to index.
Hide technical fields. In Business Logic → Logical Model → Visibility, set these to Hidden:
Consolidate redundant fields. If your model has Revenue_Old, Revenue_New, and Revenue_Current, users asking about “revenue” will get inconsistent results. It’s worth picking the authoritative version and hiding the rest.
Hidden fields remain fully functional for calculations, expressions, and existing charts. You’re only removing them from the Qlik Answers query scope, so nothing breaks.
Time-based queries are among the most common in natural language analytics (“revenue by month,” “trends over time,” “compare this quarter to last”). If your date fields are loaded as plain text, Qlik Answers won’t recognize them as dates. That means no auto-calendar, no chronological sorting, and no correct time-based analysis.
In Data Manager or Model Viewer, check the tags on every date-related field. You want Date or Timestamp tags. If you see $ascii or Text, fix it in the load script:
Date(Date#([SourceDateField], 'MM/DD/YYYY')) as [Order Date]
Timestamp(Timestamp#([SourceTimestamp], 'MM/DD/YYYY hh:mm:ss')) as [Order Timestamp]
After fixing, test with queries like “Show me trends over time” and “Sales by month” to confirm the engine applies chronological logic correctly.
Master items are one of your strongest levers for improving Qlik Answers accuracy - and this is where the platform really shines. When processing questions, Qlik Answers intelligently gives greater weight to master items than to raw fields in the data model, because it recognizes that master items represent curated business intent. It’s a great example of how the engine is designed to work with you.
For each of your top metrics, create a master measure with a validated expression and a clear description. The description matters - Qlik Answers uses it to understand context and match user intent. A good description explains what the metric measures, how it’s calculated, and when to use it.
For detailed guidance on writing effective master item descriptions, see the help documentation: Writing master item descriptions for Qlik Answers.
Qlik’s Business Logic vocabulary feature lets you define synonyms and map business terms to fields. It’s a useful tool, though you may need less of it than you’d expect. Because Qlik Answers is powered by an LLM, it already has a strong grasp of standard business terms: “sales,” “revenue,” “customer,” “average,” and “quarter” all work right out of the box. You only need to step in for the terminology that’s unique to your organization.
Where vocabulary adds value:
What to watch out for:
Configure in Business Logic → Vocabulary. Map each synonym to a specific field or master item, and test with queries using those terms to confirm the mapping resolves correctly.
It’s helpful to run representative queries across these categories and verify the results:
| Category | Example queries |
| --- | --- |
| Basic aggregations | "Total revenue," "Customer count," "Average order value" |
| Time-based | "Revenue by month," "Sales trends over time," "Compare Q3 to Q4" |
| Filtered | "Revenue for Product X," "Customers in Region Y" |
| Comparative | "Top 10 customers by revenue," "Highest margin product?" |
| Vocabulary | "Show me CAC," "What’s our churn rate?" (if configured) |
Use the reasoning panel. In the Source tab, click View Reasoning to see exactly which fields the engine selected and why. This is the fastest way to diagnose incorrect results and trace them back to a semantic layer issue.
For each test query, check:
If a query doesn’t resolve correctly:
You don’t need a perfect data model to get great results from Qlik Answers. You just need a clear one.
There’s no need to define what “revenue” or “quarter” means. By making sure your model is unambiguous, your dates are properly typed, your key metrics are defined, and your field list is clean, you’re giving Qlik Answers everything it needs to deliver the kind of instant, accurate insights your business users have been waiting for.
These are established data modeling best practices that have always mattered — Qlik Answers just makes the payoff more immediate and visible. Invest a little time in preparation, and you’ll be amazed at what your users can accomplish.
For the complete technical reference, including detailed guidance on field naming conventions, master item descriptions, and synonym configuration, see the official documentation: Best practices for preparing applications for Qlik Answers.
Insight Advisor does not filter data when a sheet is using Alternate States. Instead, it operates exclusively in the default state.
This is working as expected.
As of January 2026, Insight Advisor is no longer actively in development. Look into Qlik Answers for a feature-rich replacement (available on Qlik Cloud).
Qlik allows you to automatically make multiple selections when opening an app sheet. This is configured in the Sheet Properties using an Action:
If Properties does not show the Actions tab, but instead lists Chart suggestions and other data display options, deselect the currently selected chart.
Enter the values to select (for example: A;B or value1,value2). The defined selections will now apply whenever the sheet is opened.
When an On-Demand App Generation (ODAG) link is created in a selection app and the app is transferred to another owner, then the new owner can only see the option "Add to App Navigation" in the right-click context menu. Options "Edit" and "Delete" are missing.
The same issue happens when the selection app is duplicated by another user.
This is a known limitation of Qlik Sense and has been reported as defect QLIK-83203.
There are default security rules: CreateOdagLinks and ReadOdagLinks.
However, there is no default rule for Update/Delete of ODAG links.
A workaround at the moment is to create custom security rules that grant Update/Delete access to ODAG links for the new app owner, similar to the following:
ODAG links are meant to be managed similarly to data connections, where a connection created in one app can be used in other apps. However, while the QMC provides a Data connections tab to list all connections and control related ownership and permissions, no such management GUI is available for ODAG links. R&D is considering an ODAG link management page for future releases of the product.
When you need to integrate Auth0 JWT Bearer Token authentication with the Talend tRestRequest component, you can use a JWT Bearer Token with Keystore Type: Java Keystore (*.jks) to achieve this.
Follow similar steps from Obtaining a JWT from Microsoft Entra ID | Qlik Help
-----BEGIN CERTIFICATE-----
MGLqj98VNLoXaFfpJCBpgB4JaKs
-----END CERTIFICATE-----
keytool -import -keystore talend-esb.jks -storepass changeit -alias talend-esb -file talend-esb.cer -noprompt
Security: JWT Bearer Token
Keystore File: /path_to/talend-esb.jks
Keystore Password : changeit
Keystore Alias : talend-esb
Audience: "https://dev-xxxx.us.auth0.com/api/v2/"
A binary load command that refers to the app ID (example: Binary [idapp];) does not work and fails with:
General Script Error
or
Binary load fails with error Cannot open file
Before Qlik Sense Enterprise on Windows November 2024 Patch 8, the Qlik Engine permitted an unsupported and insecure method of binary loading from applications managed by Qlik Sense Enterprise on Windows.
Due to security hardening, this unsupported and insecure action is now denied.
Binary loads of Qlik Sense applications require a QVF file extension. In practice, this will require exporting the Qlik Sense app from the Qlik Sense Enterprise on Windows site to a folder location from which a binary load can be performed. See Binary Load and Limitations for details.
Example of a valid binary load:
Binary [lib://My_Extract_Apps/Sales_Model.qvf];
Example of an invalid binary load:
Binary [lib://Apps/777a0a66-555x-8888-xx7e-64442fa4xxx44];
NPrinting has a library of APIs that can be used to customize many native NPrinting functions outside the NPrinting Web Console.
Two of the more common capabilities available via NPrinting APIs are as follows:
These and many other public NPrinting APIs can be found here: Qlik NPrinting API
In the Qlik Sense data load editor of your Qlik Sense app, two REST connections are required. (These two REST connections must also be configured in the QlikView Desktop application's load script where the APIs are used. See NPrinting REST API Connection through QlikView Desktop.)
Requirements of REST user account:
Creating REST "GET" connections
Note: Replace QlikServer3.domain.local with the name and port of your NPrinting Server
NOTE: replace domain\administrator with the domain and user name of your NPrinting service user account
Creating REST "POST" connections
Note: Replace QlikServer3.domain.local with the name and port of your NPrinting Server
NOTE: replace domain\administrator with the domain and user name of your NPrinting service user account
Be sure to enter the 'Name' (Origin) and 'Value' (the Qlik Sense or QlikView server address) in your POST REST connection only.
Replace https://qlikserver1.domain.local with your Qlik Sense (or QlikView) server address.
Ensure that the 'Origin' Qlik Sense or QlikView server is added as a 'Trusted Origin' on the NPrinting Server computer.
NOTE: The information in this article is provided as-is and is to be used at your own discretion. NPrinting API usage requires developer expertise and constitutes significant customization outside the turnkey NPrinting Web Console functionality. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support for the solution below may not be provided by Qlik Support.
Monthly monitoring of the data volume used in Qlik Cloud (Data for Analysis) is essential when using a capacity-based subscription.
This data is accessible in the Qlik Cloud Administration Center Home section:
For an overview of how Data for Analysis is calculated, see Understanding the subscription value meters | Data for Analysis. The calculation considers the size of all resources on each day; the day with the maximum size is treated as the high-water mark, which is then used for billing purposes.
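A simplified illustration of the high-water-mark logic (the figures below are made up; actual billing uses the real daily resource sizes):

```python
def high_water_mark(daily_sizes_gb):
    """Data for Analysis charges the day with the largest total size."""
    return max(sum(day) for day in daily_sizes_gb)

# Each inner list: sizes (GB) of all apps/files on one day of the month.
month = [
    [10.0, 5.0],   # day 1: 15 GB total
    [10.0, 12.0],  # day 2: 22 GB total  <- high-water mark
    [4.0, 5.0],    # day 3: 9 GB total
]
peak = high_water_mark(month)
```

This is why deleting data late in the month does not reduce the meter: the earlier peak day still sets the billed value.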
However, you may sometimes notice that the usage does not decrease as expected, even after reducing your app data. In such cases, it is recommended to review unused or rarely reloaded apps, as the previous app reload size may still be used for the calculation.
To review the detailed usage, you can use a Consumption Report.
To prevent the previous reload size from being carried over into the following month in similar use cases (specifically for apps that are not actively in use), a possible workaround is to reload apps using small dummy data to update the previous reload size of the apps.
Note that while offloading QVD files to, for example, S3 incurs no cost, any subsequent reload of those QVDs into Qlik Cloud will be counted toward Data for Analysis. Users should carefully evaluate whether this approach is beneficial.
Example case:
The app is reloaded only once a month (or even less frequently) for the purpose of creating QVD files. At the end of the script, all tables are dropped, and the final app size is empty.
In this scenario, the Data for Analysis usage won't be reset in the following month, since the calculation takes the app's reload size into account. The previous reload size therefore continues to be counted in the next month.
Qlik Cloud Analyticsのキャパシティ容量の仕様に関する解説 (Explanation of Qlik Cloud Analytics capacity specification)
The error System.Byte[] occurred when attempting to load data from a binary data type column in an MS SQL Server database.
Environment:
Qlik Sense Enterprise on Windows any version
This issue was resolved by creating a new column in the SQL Server database and converting the column to the varchar data type. This new varchar column could then be read into Qlik Sense without any error.
This type of conversion function was used in the database in the process to create the new column:
Convert(NVARCHAR(MAX), "FieldName", 1) as Varchar_FieldName
See Data Types for available Data Types in Qlik Sense.
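For context, CONVERT with style 1 in SQL Server renders binary data as a '0x'-prefixed hexadecimal string. A sketch of the equivalent transformation in Python, for understanding what the new varchar column contains:

```python
def binary_to_hex_string(data: bytes) -> str:
    """Mimic CONVERT(NVARCHAR(MAX), <binary column>, 1): '0x' + uppercase hex."""
    return "0x" + data.hex().upper()

value = binary_to_hex_string(b"\x01\xab")  # "0x01AB"
```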
How to reduce the size of the Operations Monitor App or decide how much history is being stored.
Scenario: There are only 3 months of history/data in the Operations Monitor app, even though there are enough logs in the Archived Logs folder to provide information for more months.
Scenario 2: 3 months of history/data are shown, but less data is required due to very high traffic.
This article explains how the Operations Monitor app can be configured to display more or less than 3 months of history/data. Note: We do not recommend configuring the Operations Monitor to provide a history longer than 3 months, as this amount of data will lead to long loading times and large apps.
SET monthsOfHistory = 3; // 3 months
SET monthsOfHistory = 6; // 6 months
SET monthsOfHistory = 12; // 12 months
Note:
Please note that the reload time for that much history will be considerably longer.
The reload task fails with a message like this in the document log:
or
2017-11-10 10:16:48 0454 WHERE isnum(Sequence#)
2017-11-10 10:16:48 Error: Field 'Sequence#' not found
2017-11-10 10:16:48 Execution Failed
2017-11-10 10:16:48 Execution finished.
or
Sequence# field not found in 'lib://SHARE/Repository/Trace/SERVERNAME_Synchronization_Repository.txt'
The steps below apply whenever any field cannot be found. Fields that cannot be found include, but are not limited to, CounterName and ProxySessionID.
Environment:
QLIK-35804: Occasionally when Qlik Sense services stop, they do not fully write to the logs in the expected format.
Restart the Qlik Sense services
Modify the License and Operations Monitor apps so that they continue parsing logs even if a particular log fails to parse fully.
//begin ignoring errors parsing logs
set errormode = 0;

and

//end ignoring errors parsing logs
set errormode = 1;

This will look something like this: