Many of us likely know that after developing a Qlik Sense app, documentation needs to be created to deploy our analysis to production. This process is time-consuming, often requiring several days to produce what feels like just a "piece of paper". Another issue is that documentation is often made using standard templates, leading to materials that aren't complete and can be hard to comprehend. Documentation can be mandatory for both regulatory and operational reasons. Following the idea of “working smarter,” I want to automate this process using DocuGen with Qlik Application Automation (QAA). This approach can save us valuable time, allow us to transfer use cases to production faster, and provide documentation that is both thorough and easy to read.
It all started with this simple idea and a first prototype, which I built in QAA. Various Qlik Cloud Services blocks are used to extract the relevant metadata from the individual app (e.g. table definitions, sheets and visualizations, variables, measures and dimensions, etc.). This data is then stored in variables and once all the information has been collected it gets passed into a “Custom Code” block. This block uses Python 3.11 and creates a dynamic HTML file without external references. This means that the created HTML file remains transportable and does not require any external resources, as it contains all the necessary functions itself. Finally, the created HTML file is saved to MS Sharepoint.
This approach enables us to produce standardized documentation significantly more quickly on a daily basis, reducing the time required from an estimated 3 to 5 days to approximately 30 minutes. In my day-to-day job as a BI Specialist, I am now able to transfer analysis prototypes to production much faster for my customers and deliver added value for the business departments even faster. I think many other people in similar roles like me face the same challenge. This is the reason why I created a community project out of this idea together with Dennis Jaskowiak (Qlik PreSales). We started to extend the documentation with more functionalities, generalized the design of the documentation and reworked the code. In this article, we would like to share the project and its content with the Qlik Community.
The archive consists of only 4 files:
To import the QAA you need the DocuGen v(x).json file. This file contains the exported QAA workspace.
The "code" folder contains the code segments used if you want to customise the code for your needs.
custom_code_block.py: This file contains the code that we use in the "Custom Code" block in the automation.
used_css.css: This file contains the CSS definition that we use in the "Variable - vCSS" block in the automation and provides the style description for our HTML file. The definition is not minified in this file.
used_js.js: This file contains the JavaScript that we use in our Python script (custom_code_block.py, at the very end). The JavaScript is embedded in the Python code (last <script></script> section). It is important that '{' and '}' are replaced with '{{' and '}}' so that they are not evaluated in Python.
Once the connection status shows “Connection is linked”, you can use it by clicking on your created connection. The connection tab should now change its colour from red to black. If needed, change “Variable – vNumber Format” from 'de' to 'us'; it's the second block of the automation.
When you click on “Run” the automation gets executed. By default, the automation has configured “Run Mode: Manual”. It requires a few inputs for processing:
If you want to run the automation triggered (e.g. by a REST call), it needs to be modified. We will post an example later to showcase this option.
This list showcases what DocuGen covers:
General app information:
Data model:
Measures:
Dimensions:
Variables:
Sheets & Visualizations:
Load Script:
This is it for now. I hope this community project can help you automate and reduce the time you spend on documentation. At the bottom of this article, you will find the necessary files. The current version of the files will be hosted on GitHub.
Link to GitHub : LINK
A special thanks to Emil Koslowski (Product Manager Qlik Application Automation) for his invaluable support with various Qlik Application Automation questions. Your efforts have been instrumental in shaping a robust solution. Additionally, a big shoutout to Dennis Jaskowiak (PreSales Qlik) for his genuine enthusiasm for the idea, as well as his mentorship and unwavering support throughout the development - your guidance has been truly instrumental. Thank you!
This project has been created by Prodromos Chatziagorakis (Volkswagen Financial Services) & Dennis Jaskowiak (Qlik).
Before you start reading: The way to import .json with Qlik Sense script is undocumented and may still change at any time without notice.
While Qlik script natively reads JSON (JavaScript Object Notation), it is a bit tricky and slower for large files, and an import wizard that would create the script blocks is missing.
When you click on Add data in the connection pane, the file dialog for the folder will show no specific filter for .json files:
Choose "All files" and then select your .json file. Next you will see, that the wizard attempts to interpret it as csv. Pointless, but at least you get the FROM construct properly filled. Forget the rest, as we will replace it in a minute.
Just hit "Insert script" and then put LOAD * in front of FROM, and instead of (txt ...) put (json):
data:
LOAD *
FROM [lib://:Google_Drive/119UHVSR9_JjkT9o-mz2NAO1xyS1xU56f/01.json]
(json);
Not too complicated so far. If your .json file only has one root object (or one root array of objects), you will already get the keys from the top level. You are already done.
Now let's learn more about the parsing!
A single .json file containing only one object ...
{"key1": 123, "key2": "abc", "key3": true, "arr": [], "obj": {} }
... will result in one line of data.
If the json file has multiple "rows" (objects in an outer array [ ... ]) you automatically will get multiple rows in Qlik.
[ {"key1": 123, "key2": "abc", "key3": true} , {"key1": 234, "key2": "def", "arr": [5,6,7]} ]
The objects can have different keys. Common keys will concatenate into the same field, whereas keys missing in a given 'row' will read "null" (unfortunately as text, not as Null()) in the result.
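If the literal text 'null' gets in the way, it can be cleaned up during the load. A minimal sketch, assuming a file rows.json with the keys from the example above (file name and keys are placeholders):

data:
LOAD
    key1,
    key2,
    If(key3 = 'null', Null(), key3) AS key3   // turn the literal text 'null' back into a real null
FROM [lib://.../rows.json] (json);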
Now let's look at how to deal with sub-objects and arrays. For this we need to understand the "table is" construct and the hash values.
That is fine for one row, but if you have a json file with many rows, you don't want to call every line with "Root/0/arr", "Root/1/arr", "Root/2/arr" ...
The good thing: You don't have to do it yourself. This article is explaining how to load .json by hand. If you want a quick solution, this one is for you: https://github.com/ChristofSchwarz/QlikScripts/tree/master/json
Assuming a .json file with this content
[ {"name": "Alex" , "age": 50, "kidsAge": [25,23,7,4] } , {"name": "Dhruv" , "age": 27, "kidsAge": [1] } , {"name": "Eyal" , "age": 35, "kidsAge": [] } , {"name": "Chris" , "age": 49, "kidsAge": [17,19] } ]
We will introduce a FOR loop now. Note, if an array is empty [] or is null, the LOAD command inside the Loop would fail on that "row", so we set ErrorMode to 0 (=continue script upon error) and reset it to the default 1 (=fail on error) after the NEXT command
SET vFile = [lib://.../filename.json];

root:
LOAD name, age, kidsAge AS %kidsAge
FROM [$(vFile)] (json, table is 'Root');

SET ErrorMode = 0;
FOR v = 0 TO NoOfRows('root') - 1
    kidsAge:
    LOAD DISTINCT kidsAge AS %kidsAge, NoName AS kidsAge
    FROM [$(vFile)] (json, table is 'Root/$(v)/kidsAge');
NEXT v
SET ErrorMode = 1;
The result is a nice 1:N relation resolved.
The fieldname "kidsAge" also exists on the inner LOAD (next to "NoName") and it returns - yes - the same hashvalue as in the outer, first LOAD.
The last example is for a sub-object (instead of a sub-array): similarly, you get hashes on the parent level and the content itself when addressing it right with the "table is" position. A FOR loop will ease things again, plus some more logic.
In this .json file we have one row where "kids" is null and the subobject doesn't always contain the key "happy"
[ {"name": "Alex" , "age": 50, "kids": {"count": 4, "happy": true } } , {"name": "Dhruv" , "age": 27, "kids": {"count": 1, "happy": true } } , {"name": "Eyal" , "age": 35, "kids": null} , {"name": "Chris" , "age": 49, "kids": {"count": 2 } } ]
We can't count on auto-concatenation any more for the sub-objects (row 4 would end up in a different table than rows 1 and 2). Also, without error tolerance, the 3rd row import would fail. So we go for this script, where we create an empty table for the "kids" and then explicitly concatenate into it. We have to use the LOAD * functionality, because the structure of the sub-objects is not identical throughout the main array. (Assuming you set vFile yourself.)
SET vFile = [lib://..../filename.json];

t_Root:
LOAD *                 // outer load
FROM [$(vFile)] (json, table is 'Root');

SET ErrorMode = 0;
t_Kids:
LOAD * INLINE [kids];  // start empty table

FOR v = 0 TO NoOfRows('t_Root') - 1
    CONCATENATE(t_Kids)
    LOAD DISTINCT *    // inner load
    FROM [$(vFile)] (json, table is 'Root/$(v)/kids');
NEXT v
SET ErrorMode = 1;
We will now get two tables that are 1:1 related (or 1:0..1, to be precise).
Note: the inner LOAD * will return the "kids" as a field, too, containing the hash that links to the first table
The result can easily be used now in the Client
Uff, it can get tricky, but next week I will show you that all the manual steps (loop, field aliasing, concatenating) could be solved by a recursive SUB.
I hope though that Qlik will also finish a wizard that does this parsing on import, like we have for JSON objects received via the REST connection or, likewise, as XML files are parsed.
Thank you for reading.
More detailed information about the limitations for generic links
You can create, upload, edit, move, and delete links from the Generic links page.
Do the following:
In the Administration activity center, go to Generic links.
Click Create new.
To add an image, drag and drop an image file or click Browse to select the file.
Enter the following link details:
Name: Enter a name for the link.
Link URL: Enter the URL in the format https://domain.com.
Only HTTPS URLs are allowed.
Domain names can contain letters (a-z), numbers (0-9), and hyphens (-), but not underscores (_) or the '@' symbol. For example, https://my-domain.com/ is valid, while https://my_domain.com/ is not.
Space: Select the space where the link will be added.
Description (optional): Enter a description for the link.
Tags (optional): Add tags to help categorize the link.
Click Create.
Compare Infoproviders between BW HANA environments using Gold Client BW
One of the most requested and key features of Gold Client BW is the ability to compare Infoproviders between two BW HANA environments. This feature also lists out the differences of the object between the two environments.
The function allows the comparison of the structures and properties of various types of Infoproviders, such as Info Objects, Info Cubes, DSOs and ADSOs, between the current BW system and a target BW system.
Upon entering one or multiple Info Objects, Info Cubes, DSOs or ADSOs, the function will compare their structure and properties against the corresponding objects in the target system specified by the user.
Example:
Source BW Environment = A4H(BWonHANA)
Target BW Environment = BW2(BW4HANA)
Infoprovider to be compared between the two BW environments = 0MATERIAL
Log in to the source BW system, which in this example is A4H, and use transaction code ZGOLDBW for Gold Client BW.
Click on the Compare Infoprovider button under the Infoprovider Equalizer section, as shown in the figure below.
In the next screen we need to provide the details of the info provider which we would like to compare with the Target BW system. Below are the steps to be followed
Compare the Info Provider on your target system.
Once the execute button is pressed, the user will be asked to log in to the target BW system, which in this example is BW2.
After logging in successfully, the system will compare the infoprovider between the two BW environments and will list out the differences of the infoprovider between the environments.
In the below example of 0MATERIAL, we can see that the length, output length, number of attributes, and the attributes which are missing or differ from the source BW system will be displayed.
Similar to 0MATERIAL, users will be able to compare the below types of Infoproviders between two BW HANA and BW non-HANA environments.
Looking for the simplest way to create a Gantt chart in Qlik Sense to further streamline data-driven project management in your company or organization? We’ll show you one right now! Follow this quick tutorial and you’ll learn how to easily visualize project tasks and activities as data in a matter of minutes — on smart, interactive Gantt charts using the intuitive Gantt Chart extension for Qlik Sense by AnyChart, Qlik's Technology Partner.
Watch the video tutorial:
Or read the text-and-pictures version below.
In Qlik, everything starts with the data.
We are going to create a Gantt chart based on data from an XLSX file that looks as shown below.
The tables in the file represent the structure of a company and its project schedule. There are three departments, each of them carrying out several projects. Each task within a project is assigned to a certain employee.
First of all, we will create an app, add this data to Qlik, apply the suggested associations, and load the data.
Then we need to make sure that Qlik established proper associations between tables.
Let’s create a sheet and add three filter panes by dropping three fields on the sheet: ‘department_name’, ‘project_name’, and ‘task_name’.
If we switch from ‘Edit’ to ‘View’ and try to select a department and a project, we will see that associated tasks are displayed correctly.
Now that the data is loaded and the associations between the tables are set up the right way, we can add a Gantt chart.
Go to ‘Custom objects’, find AnyGantt, and drop a Project Chart on the sheet, below the filter panes we added before.
To configure the Gantt chart, we need to add dimensions representing the hierarchical structure of our data.
We will use the following dimensions: ‘department_name’, ‘project_name’, and ‘task_name’.
The next step is adding measures.
Let’s use the ‘start_date’ and ‘finish_date’ fields.
These measures do not require aggregation, so we can use either the ‘Only()’ aggregation or just the name of the field.
Then we need to make sure that AnyGantt interprets our dates correctly.
Use the ‘Map as’ drop-down menu to map the ‘start_date’ field as ‘Actual Start’ and ‘finish_date’ as ‘Actual Finish’.
As you can see, you can also map a date as ‘Baseline Start’, ‘Baseline Finish’, ‘Actual Progress’, ‘Baseline Progress’, or ‘Data Field’.
After these steps, we can already see a Gantt chart:
The data grid on the left displays the hierarchy of departments, projects, and tasks.
The timeline on the right displays the duration of tasks.
Let’s add one more measure, ‘progress’, and map it as ‘Actual Progress’.
Please note that you do not need to aggregate it.
Now you can see progress bars.
That is it, the Gantt chart is ready:
In the ‘View’ mode, select a department or a project in a filter pane, and the Gantt chart will display the related tasks.
It is very easy!
If you have any questions on how to create a Gantt chart or any other chart type using AnyChart's Extensions for Qlik Sense, drop us a note in the comments below or contact AnyChart's Support Team directly.
Aside from filtering data records with Section Access, I'm usually requested to filter the objects and sheets to which users have access. To be honest, I don't like to filter individual objects, since this implies a lot of maintenance and a more complex Section Access table. So I usually describe the scenario to the IT people and key users, and they always (so far) decide to go for the sheets-security option. Which in my opinion is the best alternative, because it's more common to hide a complete block of information/analysis from a user than just a single KPI.
I've seen some examples like this before in the Community; however, everything I've found is exemplified based on the USERID field. So, when I tried to use those examples I ran into some common errors when dealing with Section Access, and now that I've solved them I want to share a small example of how to proceed when using NTNAME.
Also, I want to note that there are some easier/faster alternatives to this approach. For instance you may use the MATCH() method described in this thread: Secured sheets
However, if you're already using Section Access to filter data, if you have too many users and/or sheets to use the MATCH() method, or if you just need to handle those privileges outside of the .qvw (e.g. a user will maintain an Excel file), you can reuse Section Access to accomplish this requirement.
In my example I use INLINE loads in order to allow other people to reload the app; however, you could load all tables, including the SECTION ACCESS table, from xls or qvd files. A simplified sketch of the pattern follows below.
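As an illustration only, here is a minimal sketch of the pattern, with hypothetical NTNAME values, a hypothetical reduction field SHEETACCESS and hypothetical sheet names; the actual tables in the attached example may differ. Each sheet's conditional show expression would then check whether its own SHEETACCESS value survives the data reduction for the logged-in user.

SECTION ACCESS;
LOAD * INLINE [
    ACCESS, NTNAME, SHEETACCESS
    ADMIN, DOMAIN\ADMINISTRATOR, *
    USER, DOMAIN\USER1, SALES
    USER, DOMAIN\USER2, FINANCE
];

SECTION APPLICATION;
SheetAccess:
LOAD * INLINE [
    SHEETACCESS, SheetName
    SALES, Sales Analysis
    FINANCE, Finance Analysis
];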
So, in order to test this example you have to do the following:
Go to the following link to get the example file:
Sheets Security with Section Access File
I hope this example is useful and don't hesitate to ask if you have any doubt regarding it.
Qlik data replication tasks from DB2 iSeries can fail due to communication errors with the source system. A common cause is that log stream tasks fail to resume because the CURCHAIN journal receiver has exceeded its connection limit. Below are the details of this issue and recommended solutions.
A log stream task configured with the Advance Run option, starting from a specific timestamp, failed and returned the following error:
SELECT journal_code, journal_entry_type, sequence_number, commit_cycle, entry_timestamp, object, cast(null_value_indicators as VARBINARY(8000)) null_value_indicators, count_or_rrn, receiver_library, receiver_name, "CURRENT_USER", job_name, program_name, cast(entry_data as VARBINARY(32740)) entry_data
FROM TABLE(qlikudtf.qlikrje01_fnc(JrnLIB=>'$$JRN01',JrnNAME=>'DJ010A', RcvLib=>'*CURLIB',RcvName=>'*CURCHAIN', StrTimStmp=>'2024-10-03 05:32:49', JrnCodes=>'CDFRJ', JrnEntryTypes=>'PT,PX,UB,UP,DL,DR,BR,UR,CG,DF,SC,CM,CR,RB,FN,PR', numRows=>'10000', rcvLen=>'4194304', Files=>'F03B11 PDDTA')) AS J
The system logged this error message:
00051632: 2024-10-03T01:39:19:495841 [SOURCE_CAPTURE] Failed to execute the Qlik UDTF. At least one of the captured tables might be exclusively locked. [1022511] (db2i_endpoint_capture.c:1643)
Behavior When Resuming from Timestamp: When a task is resumed using a specific timestamp, the system references the journal receiver library *CURLIB and the receiver name *CURCHAIN.
The following warnings were observed:
00051632: 2024-10-03T01:39:14:491526 [SOURCE_CAPTURE] Execute status=1022502, native=-443 (db2i_endpoint_capture.c:1602)
00051632: 2024-10-03T01:39:14:491526 [SOURCE_CAPTURE] Execute status=1022511, native=-502 (db2i_endpoint_capture.c:1602)
The CURCHAIN journal receiver has a connection limit (in this case, 1024 connections). When the environment exceeds this limit, the task fails if it attempts to resume from a timestamp.
However, if the task is resumed normally (without a timestamp), the system reads from a different receiver name, bypassing the CURCHAIN limit, and the task runs successfully.
To resolve the issue:
This video series is designed to introduce you to the Qlik NPrinting PixelPerfect report editor.
You can also download this tutorial as a PDF. The PDF contains all the formulas that are used in these tutorials.
Data Transformation functionality was introduced in Gold Client version 8.0. It allows the transformation of fields during the Export of data in Gold Client. This functionality is delivered with default configuration; however, users are provided the flexibility to define their own data transformation routines for any table and field.
When using Qlik Gold Client, you may be unsure whether the transformation is working.
Here is a step-by-step guide to validate all the stages of transformation.
In this example, transformation rule BP01 (designed to transform Business Partner data) is scrambling BUT0CC-CCNUM (Payment Cards Number) and EKUN-SOC_SECURE (Social Insurance Number).
This assignment is important because the data transformation will occur automatically during the export process but only for those data types which have a rule assigned.
To provide an example, transformation rule BP01 (designed to transform Business Partner data) could be assigned to Client Construct data type BUSINESS PARTNERS and/or to Data Echo data type CA - BUSINESS PARTNERS.
Users who initiate exports do not need to take any actions for transformation to occur because of this rule assignment.
For more information, see the Data Transform User Guide found at:
SAP Flexible Workflow is a new concept introduced in SAP S/4HANA to simplify workflow configuration. community.sap.com -> Flexible Workflows
If you have recently used Qlik Gold Client to copy SAP Flexible Workflows into your target client and there are some steps missing, the most likely root cause is that table SWF_FLEX_PROC has RAW fields and Gold Client can have trouble determining the full length of these fields.
Please check the instructions below:
Gold Client > Client Exporter > WORKFLOW FLEX TABLES
In this case, check the Gold Client Configuration before the export to ensure that the Table Compression Type for SWF_FLEX_PROC (or any other related table with RAW fields) is defined as New Compression.
Gold Client > Configuration > Data Copy Toolbox > Set Table Compression Type
If you cannot resolve the issue, feel free to submit a support case, and the Qlik Gold Client team will respond accordingly.
Note:
If Client Construct Data Type WORKFLOW FLEX TABLES doesn't exist, it can be included in the data model, adding all SWF_FLEX_* relevant tables.
XXXX is your API Key
Method: POST
Body:
{"contents": {"role": "user","parts": {"text": "Give me a recipe for banana bread."}},
"safety_settings": {
"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
"threshold": "BLOCK_LOW_AND_ABOVE"
},
"generation_config": {
"temperature": 0.9,
"topP": 1.0,
"maxOutputTokens": 2048
}
}
ALLOW WITH CONN
Then in your script you need to create your question in a variable (like vResponseBody) and use it in the query:
WITH CONNECTION (BODY "$(vResponseBody)");
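As a rough sketch of how this can look in the load script (the connection name and the SELECT below are assumptions; in practice you would reuse the SELECT that the REST connector wizard generates for the Gemini response structure):

// Build the request body in a variable and escape the double quotes for WITH CONNECTION.
LET vQuestion = 'Give me a recipe for banana bread.';
LET vResponseBody = '{"contents": {"role": "user", "parts": {"text": "$(vQuestion)"}}}';
LET vResponseBody = Replace('$(vResponseBody)', '"', Chr(34) & Chr(34));

LIB CONNECT TO 'Google_Gemini_REST';   // hypothetical name of your REST connection

// Placeholder SELECT: replace it with the one generated by the REST connector wizard.
GeminiResponse:
SQL SELECT "text"
FROM JSON (wrap on) "root"
WITH CONNECTION (BODY "$(vResponseBody)");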
Related links
https://www.reddit.com/r/qlik_sense/comments/1f6m35g/how_to_use_gpt_free_to_analyze_social_media_in/
Good day Qlik Community!
Here I am again with another video. In this one, a bit longer than I had hoped for, I discuss what Correlation and Mutual Information (MI) are, and how, why and when to use them in Qlik.
Check it out:
Mutual Information and Correlation (datavoyagers.net)
Good day Qlik Community!
Here I am again with more content from Data Voyagers blog.
More often than not, your data could lead you to believe something that is not true. A shallow analysis might indicate a certain trend or bias, but in reality the opposite happens. In this video, I explain one of these common traps, Simpson's Paradox, and how to identify it using a Qlik application.
Watch it here:
Hello Qlik Community,
As we approach the Paris 2024 Olympics, I have been posting on LinkedIn some facts and stories about the results from past Summer Olympics editions.
Everything has been done with Qlik Cloud without any extensions.
Here you will find the link to all the posts.
- Enjoy 🙂
Qlik Cloud Analytics has a very powerful and easy-to-use data loading and transformation framework. This includes no-code data flows and Qlik scripting for advanced transformations. However, there are times when a customer may prefer to handle these transformations as much as possible outside of Qlik Analytics. Maybe the data is being used by multiple Qlik Analytics environments, in a multi-tenant or hybrid architecture. Or maybe the data set needs to be consumed by other tools in addition to Qlik Analytics. You may also be looking to provide ready-to-use tables for self-service analytics. In these situations, it makes sense to move any logic earlier in the lifecycle to avoid duplication and so all tools can benefit from the transformations.
The aim of this guide is to provide some recommendations on how to model your database for minimal transformations, or transformation-free consumption, by Qlik Analytics. We cover our top ten recommendations that, if followed, reduce, if not eliminate, the need for extensive transformations within your Qlik Analytics applications. While some of these recommendations assume a database as the data source, most are applicable across all data sources.
There are many powerful tools in Qlik’s Data Integration portfolio that can be used to achieve this, such as Talend Studio, Qlik Talend Data Integration or Qlik Compose. Qlik Talend Data Integration can even be used to replicate tables to QVDs in Qlik Cloud using Change Data Capture (CDC). For more information on these solutions, see Data Integration and Quality. However, this document will focus on the end goals, rather than the tools used to achieve this.
I am going to start with what is a somewhat controversial, but I believe necessary, step for many organisations that wish to minimise downstream transformations. Views in an RDBMS allow us to encapsulate all the other changes we will need to make into a virtual table that we can simply read from our Qlik Analytics app. There are a few assumptions behind this approach, though, that if not true may render it unnecessary or undesirable. These assumptions are:
Often when building a data warehouse, data lake or lakehouse we tend to include all possible columns and data so it is available at a later date to meet changing requirements. However, pulling this data into a Qlik Sense app is not recommended unless there is a use for it. This is because Qlik Analytics uses a very powerful in-memory associative model to manage your data, and bringing unused data into memory will have a performance impact.
It is quite common for tables in databases to include standardised columns, for example CreatedBy, CreatedDate, etc. These columns are useful for organisational governance and audit requirements, but will cause problems in Qlik Analytics, as Qlik will associate tables based on identically named columns. Assuming you cannot exclude the columns, renaming them, e.g. myTable_CreatedBy, myTable_CreatedDate, etc., will avoid this issue; a Qlik-side fallback is sketched below.
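If renaming at the source is not possible, one hedged Qlik-side workaround (table and field names here are hypothetical) is to qualify just the audit columns during the load:

// Prefix only the audit columns with their table name so they do not create associations.
QUALIFY CreatedBy, CreatedDate, ModifiedBy, ModifiedDate;

Customers:
LOAD CustomerID, CustomerName, CreatedBy, CreatedDate
FROM [lib://Data/customers.qvd] (qvd);

Orders:
LOAD OrderID, CustomerID, Amount, CreatedBy, CreatedDate
FROM [lib://Data/orders.qvd] (qvd);

UNQUALIFY *;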
Qlik does not support composite (multi-column) keys in the same way databases do. When Qlik encounters a multi-column key it creates a synthetic key based on all the possible values for the keys. Synthetic keys use extra memory and can impact performance, as well as increasing the complexity of the analytics application. They are often not required and are due to factors such as standardised names (see #3). If there is not a natural key, you may be better off using an Autonumber or guid function to create a unique key, as sketched below. See Synthetic keys for more information.
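A minimal sketch of the surrogate-key approach, with hypothetical table and field names; the same expression is used on both sides so the tables associate on a single field instead of a synthetic key:

OrderLines:
LOAD
    Autonumber(OrderNo & '|' & LineNo) AS %OrderLineKey,   // single surrogate key from the composite columns
    Product,
    Quantity
FROM [lib://Data/order_lines.qvd] (qvd);

Deliveries:
LOAD
    Autonumber(OrderNo & '|' & LineNo) AS %OrderLineKey,
    DeliveryDate,
    DeliveredQuantity
FROM [lib://Data/deliveries.qvd] (qvd);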
Closely related to #4, Qlik expects primary and foreign key columns to have the same name. So if your primary key is person.id and your foreign key is role.person-id, they will not join by default. The solution is to rename person.id to person.person-id (or align the names on load, as sketched below).
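If the rename cannot be made in the database, a hedged Qlik-side sketch (hypothetical source names) simply aligns the key names during the load:

Person:
LOAD
    "id" AS person_id,        // rename the primary key so it matches the foreign key name
    name
FROM [lib://Data/person.qvd] (qvd);

Role:
LOAD
    role_id,
    "person-id" AS person_id,
    role_name
FROM [lib://Data/role.qvd] (qvd);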
There are no benefits from a highly normalised data model in Qlik. Qlik stores values in symbol tables, so there isn't much extra data storage required for denormalised tables, and extra joins impact performance. Therefore denormalising the data before it reaches Qlik removes the complexity of having to do this afterwards.
Closely related to #6, Qlik Analytics performs best against a star schema, and well against a snowflake schema. This is also true of many other query tools, so transforming into one of these schema styles in your database will provide value across your data landscape. These transformations from a normalised schema can sometimes be easier to make in SQL outside of Qlik Analytics than if they need to be hand-coded in Qlik script. See star and snow & When do we use Snow Flake schema in Data Model?. And as you ideally only want a single fact table, you may want to consider concatenated fact tables (see Concatenate vs Link Table); a small sketch follows below.
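For illustration, a minimal sketch of a concatenated fact table in Qlik script, with hypothetical sources; the two fact types are stacked into one table with a shared field set and a FactType flag:

Facts:
LOAD
    OrderID      AS FactID,
    CustomerID,
    OrderDate    AS FactDate,
    Amount,
    'Sales'      AS FactType
FROM [lib://Data/sales.qvd] (qvd);

Concatenate (Facts)
LOAD
    CreditNoteID AS FactID,
    CustomerID,
    CreditDate   AS FactDate,
    -Amount      AS Amount,
    'Credit'     AS FactType
FROM [lib://Data/credit_notes.qvd] (qvd);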
When loading data from a source database, it is common to filter the data to provide just what is needed for your use case. It is possible to filter any data source; however, if this is done within a Qlik Analytics LOAD statement, the data will first be read by Qlik before excluding what does not meet the filter. This increases load times, network bandwidth and resource usage. If, however, the filter is placed on a SELECT statement, it will be pushed down to the source, and data excluded by the filter will never be sent to Qlik. In the following example, the end result is the same. In the former, the filtering is done once the data has been read by Qlik. In the latter, it is pushed down to the database and is never seen by Qlik.
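A sketch of the two variants (connection, table and field names are hypothetical; load only one of them in practice):

LIB CONNECT TO 'MyDatabase';

// Former: the WHERE clause sits on the Qlik LOAD, so every row is sent to Qlik first.
Sales:
LOAD OrderID, Region, Amount
WHERE Region = 'EMEA';
SQL SELECT OrderID, Region, Amount
FROM dbo.Sales;

// Latter: the filter is part of the SELECT, so it is pushed down to the database
// and rows outside EMEA never leave the source.
Sales:
LOAD OrderID, Region, Amount;
SQL SELECT OrderID, Region, Amount
FROM dbo.Sales
WHERE Region = 'EMEA';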
If this filter is static, it would make sense to include it as part of the view to ensure self-service users do not load unnecessary data.
Analytics rarely involves drilling down to individual transactions. Therefore, less precision is often needed for analytics than for an operational system. Rounding down (say from 8 to 4, or even two decimal places) saves space, reduces data transferred and improves performance. If you are doing this, it makes sense to do it before it gets to Qlik, and this could be embedded in a view. For example, if I have a ‘rate’ field that allows 6 decimal places but I only need 2, I could use a statement like “cast(rate as NUMERIC(6, 2))”. Doing this before it gets to Qlik will provide better consistency across your Qlik landscape and better performance for your self-service users.
Remember when we said to exclude columns not relevant for analytics (#2)? Well, there is an exception to this, and it relates to the need to understand when data was changed so you can perform incremental loads. Storing the last modified date allows us to only look at the data changed since our last reload, which makes a big difference to performance for incremental loads; a sketch of such a load follows below.
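A hedged sketch of an incremental load driven by such a column (connection, paths and field names are hypothetical, and it assumes the QVD already exists from an initial full load):

// Find the high-water mark from the previously stored data.
MaxDate:
LOAD Max(LastModified) AS MaxLastModified
FROM [lib://Data/orders.qvd] (qvd);
LET vLastModified = Peek('MaxLastModified', 0, 'MaxDate');
DROP TABLE MaxDate;

LIB CONNECT TO 'MyDatabase';

// Pull only the rows changed since the last reload (filter pushed down to the source).
Orders:
LOAD OrderID, Amount, LastModified;
SQL SELECT OrderID, Amount, LastModified
FROM dbo.Orders
WHERE LastModified > '$(vLastModified)';

// Add back the unchanged rows from the previous run and store the result again.
Concatenate (Orders)
LOAD OrderID, Amount, LastModified
FROM [lib://Data/orders.qvd] (qvd)
WHERE Not Exists(OrderID);

STORE Orders INTO [lib://Data/orders.qvd] (qvd);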
The suggestions on this list are not the only things you can do to make your source database Qlik Analytics ready, but they deserve consideration as they can provide a lot of value. And this is not the only approach you can take; there may be good reasons for taking a different approach in some of these areas. However, at least considering these factors will help you to build an analytics-ready database.
What other suggestions do you have? What has your organisation done to create a Qlik Analytics ready environment? Let me know in the comments!