Qlik Sense has gained many new capabilities that give developers ways to customize and style an app. In this blog, I will review how the sheet header and toolbar can be toggled on and off, the benefits of doing so, and things to consider. The sheet header and the toolbar both appear at the top of an app. The sheet header, outlined below in yellow, includes the name of the sheet, an optional logo or image, and previous and next sheet navigation arrows.
The toolbar is the row above the sheet header. It includes buttons and links to Notes, Insight Advisor, the selections tool, bookmarks, sheets, and edit sheet.
The toggle for the sheet header and toolbar can be found in the app options section of an app. Open app options by clicking on the arrow next to the app name at the top center of the app. From there, click on the App options icon on the right.
Once the app options are open, you will find the toggles for Show toolbar and Show sheet header.
One of the main benefits of removing the sheet header and toolbar is gaining more space on the sheet. The space used by the sheet header and toolbar becomes area that developers can use for additional filter panes and/or visualizations. Another benefit is that developers can add custom capabilities to replace the Qlik Sense defaults. For example, a developer may want to create their own navigation buttons and have more control over the options that are available to the user. If the sheet(s) are being used to create a PowerPoint presentation, removing the sheet header and toolbar makes the presentation look more polished.
Now let’s discuss some things to consider when removing the sheet header. If the sheet header is removed, alternative sheet navigation should be provided for the user. It is possible to use the keyboard to navigate the sheets, but many people do not know that, so the developer should create custom navigation using buttons or links. In the image below, buttons are used.
In the image below, buttons are used again but the highlighted button indicates the sheet the user is on. So, in this example, the developer has replaced the sheet navigation and the sheet title that was included in the removed sheet header.
A sheet title can also be added to a sheet using a Text & image object. The custom navigation can be designed to match a theme or company brand, which gives the developer a lot of flexibility and can give a company’s apps a consistent look and feel.
When the toolbar is toggled off, its features are hidden but not removed from the app entirely. This is great, but not all users may be aware of the alternative ways to access the features on the toolbar, so it is important to keep this in mind. For example, users can still create or view notes for a visualization by right-clicking on a chart, selecting the ellipsis (…) and then selecting Notes. Another example is that users can still access bookmarks or the sheets in an app via the App Overview. Users can still ask questions via Insight Advisor, so no functionality is lost with the removal of the toolbar. Another thing to consider is that while selections can still be made via filter panes and visualizations, without the selection bar, users may not be aware that selections have been made. This is why the developer needs to make sure there are filter panes or some other way for users to know what has been selected. When it comes to selections, buttons can also be used to perform actions such as clearing selections and making selections in a field.
The overall goal is not to make things harder for the user, so knowing the possible issues and designing for them is smart. While there are benefits to toggling off the sheet header and/or toolbar, developers must consider how this may impact their users and how their users will use the app. The user experience can be just as good with the sheet header and toolbar toggled off if the developer plans well for an intuitive user experience.
Thanks,
Jennell
Ever found yourself stuck with a messy pile of data that seems more like a labyrinth than a pathway to clean insights? You're not alone. Today, we're diving into the world of data cleaning in Qlik Sense to help you uncover the analytical potential hiding behind your data.
The Importance of Data Cleaning:
Imagine you're baking a cake. Would you eyeball the measurements of your ingredients? Probably not, unless you fancy a disaster cake. Just like one poorly measured cup of flour can ruin your entire recipe, a small data error can throw off your whole analysis. That's why, before you dive into the fun part—data analysis—you've got to make sure your key ingredient (data) is as clean and precise as possible.
Why Data Cleaning is More than Just a Chore:
It's not just about tidying up; it's about quality control. Skipped steps or overlooked errors can lead to inaccurate results that could misinform your business decisions.
Data Accuracy: The accuracy of your analytics depends heavily on your data's quality. Data cleaning helps to weed out errors and inconsistencies, ensuring your insights are both trustworthy and actionable. Tools like mapping tables or functions like SubField can be invaluable at this stage.
Data Consistency: Inconsistent data formats or naming conventions can be a real roadblock. Qlik Sense offers features like the SubField function and mapping tables to help you standardize data for consistent reporting and visualization.
Data Integration: When you're integrating data from various sources, alignment is crucial. Qlik Sense provides numerous functions that help in aligning these disparate datasets into a cohesive, unified form.
Enhanced Visualization and Performance: Clean data doesn't just make your visualizations more meaningful; it also enhances the performance of your Qlik applications. Expect faster data retrieval and more efficient analysis when your data is in good shape.
Data Cleaning techniques in Qlik Sense:
Duplicates removal: Duplicate records can distort your analysis and reporting. Qlik offers built-in options like the Keep prefix when loading tables or the DISTINCT keyword in your script to load only unique rows (a combined sketch covering the first three techniques follows this list).
Missing values: You can address missing values by removing records or filling in gaps based on specific criteria. Functions and statements like IsNull, NullCount, and NullAsValue come in handy.
Data formatting: Using the numerous string functions available in Qlik Sense, you can standardize data values to a consistent format. For example, the Upper, Lower, Date, and Num functions can be used to unify text or dates.
Data manipulation: Sometimes the data you import into Qlik Sense doesn’t exactly fit your needs. Qlik offers ways to reshape your data accordingly. For instance, inconsistent field values can often occur when pulling data from multiple tables, and this inconsistency can disrupt the connections between data sets. An efficient solution to this is to use mapping tables.
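Before diving into mapping tables, here is a minimal sketch that combines the first three techniques above. The RawCustomers table and its field names are hypothetical, invented purely for illustration:
// Hypothetical source data: a duplicated row, a missing name, and inconsistent casing
RawCustomers:
LOAD * INLINE [
CustomerID, CustomerName, SignupDate
1, alice, 2023-01-05
1, alice, 2023-01-05
2, , 2023-02-10
3, CAROL, 2023-03-15
];
CleanedCustomers:
LOAD DISTINCT // duplicates removal: the repeated row for CustomerID 1 is loaded only once
    CustomerID,
    // missing values + formatting: fill empty names and standardize casing
    If(Len(Trim(CustomerName)) = 0, 'UNKNOWN', Upper(CustomerName)) as CustomerName,
    // data formatting: parse the text date and render it in a consistent format
    Date(Date#(SignupDate, 'YYYY-MM-DD'), 'MM/DD/YYYY') as SignupDate
RESIDENT RawCustomers;
DROP TABLE RawCustomers;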
Mapping Tables:
These types of tables behave differently than other tables in that they are stored in a separate area of memory and are strictly used as mapping tables when the script is run; they are then automatically dropped.
Let’s take a look at how to do this and the different statements and functions that can be used:
MAPPING prefix
This is used to create a mapping table. For instance:
CountryMap:
MAPPING LOAD * INLINE [
Country, NewCountry
U.S.A., US
U.S., US
United States, US
United States of America, US
];
Keep in mind that a mapping table must have two columns, the first containing the comparison values and the second containing the desired mapping values.
ApplyMap()
The ApplyMap function is used to replace data in a field based on a previously created Mapping Table.
CountryMap:
MAPPING LOAD * INLINE [
Country, NewCountry
U.S.A., US
U.S., US
United States, US
United States of America, US
];
Data:
LOAD
ID,
Name,
ApplyMap('CountryMap', Country) as Country,
Code
FROM [lib://DataFiles/Data.xlsx]
(ooxml, embedded labels, table is Sheet1);
The first parameter in ApplyMap is the mapping table name in quotes. The second parameter is the field containing the data that needs to be mapped. You can add a third parameter to the ApplyMap function that serves as a default to handle cases where the value doesn’t match one in the mapping table. For instance:
ApplyMap('CountryMap', Country, 'Rest of the world') As Country
after mapping:
MapSubstring()
The MapSubstring function is used to map parts of a field; it can be used as an alternative to the Replace() or PurgeChar() functions. For instance, let’s clean up these phone number values by removing unwanted characters:
ReplaceMap:
MAPPING LOAD * INLINE [
char, replace
")", ""
"(", ""
"\"", ""
"/", ""
"-", ""
] (delimiter is ',');
TestData:
LOAD
DataField as data,
MapSubString('ReplaceMap', DataField) as ReplacedString
INLINE [
DataField
"(415)555-1234",
"(415)543,4321",
"“510”123-4567",
"/925/999/4567"
] (delimiter is ',');
after cleaning:
MAP … USING
The Map … Using statement works differently than the ApplyMap() function: ApplyMap does the mapping every time the field name is encountered, whereas Map … Using does the mapping when the value is stored under the field name in the internal table. For instance, in the following load script, the mapping will be applied to the Country field in the Data1 table, but it will not be applied to the Country2 field in the Data2 table. That’s because the Map … Using statement is only applied to the field named Country, and in Data2 the field is stored as Country2 in the internal table.
Map Country Using CountryMap;
Data1:
LOAD
ID,
Name,
Country
FROM [lib://DataFiles/Data.xlsx]
(ooxml, embedded labels, table is Sheet1);
Data2:
LOAD
ID,
Country as Country2
FROM [lib://DataFiles/Data.xlsx]
(ooxml, embedded labels, table is Sheet1);
UNMAP;
Useful functions for data cleaning
SubField()
Used to extract substrings from a string field that consists of two or more parts separated by a delimiter. The arguments it takes are a text (the original string), a delimiter (a character within the input text that divides the string into parts), and a field_no (the position of the substring to return: 1 returns the first substring from the left, 2 returns the second, and so on).
SubField(text, delimiter, field_no)
For instance:
UserData:
LOAD * INLINE [
UserID, FullName
1, "John,Doe"
2, "Jane,Doe"
3, "Alice,Wonderland"
4, "Bob,Builder"
];
CleanedData:
LOAD
UserID,
SubField(FullName, ',', 1) as FirstName,
SubField(FullName, ',', 2) as LastName
RESIDENT UserData;
Drop Table UserData;
Len()
Returns the length of the input string.
Left()
Returns a string of the first (leftmost) characters of the input string, where the number of characters is determined by the second parameter.
Left(text, count)
Right()
Similar to Left(), it returns a string of the last (rightmost) characters of the input string. The second parameter determines the number of characters to be returned.
Right(text, count)
Index()
The Index function searches a string and returns the starting position of the nth occurrence of a provided substring. For instance:
Index('qwerty', 'ty') will return 5
Index('qwertywy', 'w', 2) will return the position of the second occurrence of 'w', i.e. 7
Example 1: Using a combination of the functions above to clean up a field. Let’s take a more complex field and try to extract the first name and last name.
UserData:
LOAD * INLINE [
UserID, Object
1, "37642UI101John.Doe"
2, "98322UI101Jane.Doe"
3, "45432UI101Alice.Wonderland"
4, "32642UI101Bob.Builder"
];
CleanedData:
LOAD
UserID,
SubField(Right(Object, Len(Object) - Index(Object, 'UI101') - 4), '.', 1) as FirstName,
SubField(Right(Object, Len(Object) - Index(Object, 'UI101') - 4), '.', 2) as LastName
RESIDENT UserData;
Drop Table UserData;
after cleaning:
Example 2: Cleaning HTML in a field
Paragraphs:
LOAD * INLINE [
Paragraph_ID, Paragraph
1, "<p>This is a <strong>paragraph</strong>.</p><br><p>This is another <em>paragraph</em>.</p>"
];
// Loop through each paragraph in the Paragraphs table
For vRow = 1 to NoOfRows('Paragraphs')
Let vID = Peek('Paragraph_ID', vRow-1, 'Paragraphs'); // Get the ID of the next record to parse
Let vtext = Peek('Paragraph', vRow-1, 'Paragraphs'); // Get the original paragraph of the next record
// Strip each HTML tag from the current paragraph in place
Do While len(TextBetween(vtext, '<', '>')) > 0
vtext = Replace(vtext, '<br>', chr(10)); // Replace <br> tags with newline characters - improves legibility
vtext = Replace(vtext, '<' & TextBetween(vtext, '<', '>') & '>', ''); // Find groups with <> and replace them with ''
Loop;
// Store the cleaned paragraphs into a temporary table
Temp:
Load
$(vID) as Paragraph_ID,
'$(vtext)' as cleanParagraph
AutoGenerate 1;
Next vRow;
// Join the cleaned paragraphs back into the original Paragraphs table
Left Join (Paragraphs)
Load *
Resident Temp;
// Drop the temporary table
Drop Table Temp;
after cleaning:
I hope you found this post helpful! Attached you can find a QVF that contains the scripts used in the post.
Happy data cleaning!
Conditional show or hide is available in line and bar charts, giving the user the ability to toggle dimensions or measures on or off in a single chart. This allows developers to customize line and bar charts and save space by using one chart to show various metrics and dimensions. Let’s look at a simple way of using this feature to show or hide lines in a line chart. In the Overall Equipment Efficiency demo found on the Demo Site, there is a line chart accompanied by buttons that are used to toggle the lines on and off in the line chart.
This is done by using variables. When each button is clicked, the respective variable is toggled from 0 to 1 or 1 to 0 depending on its current value. See the value expression in the image below.
In the measure expression in the line chart, this variable is checked to determine if the expression should be evaluated and displayed or if the measure should be set to null.
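For reference, here is a minimal sketch of this variable-based approach. It assumes a variable named vShowOEE and a hypothetical measure Avg(OEE); the actual expressions used in the demo may differ:
// Button value expression: flip the variable between 1 and 0 on each click
If(vShowOEE = 1, 0, 1)
// Chart measure expression: evaluate only when the variable is 1, otherwise return null
If(vShowOEE = 1, Avg(OEE), Null())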
This is a perfectly good way to toggle the lines, but with the ability to use conditional show and hide in line and bar charts, this process can be simplified. First, in the measure expression, we no longer need to use an if statement which can help reduce calculation time. We can simply use our normal expression and the “Show measure if” setting, with the respective variable, to evaluate if a line should be shown in the visualization or not.
The “Show measure if” and “Show dimension if” settings evaluate the expression and will show the line if the expression evaluates to true. In my example, vShowOEE will be either 1 or 0. If it is 1, the line will be displayed. If it is 0, then it will not be displayed. We can continue to use the buttons to toggle the respective variable (from 1 to 0 and vice versa) for each line.
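With the conditional setting, the same sketch (again assuming vShowOEE and a hypothetical Avg(OEE) measure) is reduced to:
// Measure expression: no if statement needed
Avg(OEE)
// “Show measure if” condition: true when vShowOEE is 1, so the line is displayed
vShowOEE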
My example is basic, but more complex expressions can be used as well. For example, you may want to show/hide lines based on a selection or a calculated value, or you may want to use some business logic to determine which dimension or measure should be displayed (see the sketch after this paragraph). The expression can be as simple or complex as needed, as long as it returns a true or false value. Keep in mind that this show setting is optional and can be left blank. When no expression is entered, the line (or bar) is displayed.
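For instance, a selection-driven condition in “Show measure if” could look like this (the Region field is hypothetical):
// Show the line only when at least one value is selected in the Region field
GetSelectedCount(Region) > 0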
There are a few limitations of this new feature to be aware of: 1) Custom tooltips are disabled when using a conditional dimension, 2) Time series forecasting is not available when using conditional dimensions or measures. While the “Show measure if” and the “Show dimension if” can both be used in the same chart, it is recommended that you use only one at a time. Check out Qlik Help to learn more and test this new feature out in your next line or bar chart.
Thanks,
Jennell
In today's data-driven world, where every byte of information matters, integrating artificial intelligence with analytical tools opens up a new realm of possibilities. Following Qlik’s announcement of its new OpenAI Connector, we will be trying it out in this post and seeing different ways it can be used to seamlessly bring generative AI content to augment your Qlik data.
The OpenAI connector serves as a bridge between your Qlik Sense apps and OpenAI's robust generative AI models, such as the ones powering ChatGPT.
With it, you can bring a new level of contextual understanding and analytical depth to your applications, enhancing the way you comprehend and utilize your data.
In the upcoming sections of this post, we will start by taking a look at how you can directly tap into OpenAI’s completion API using the simple REST connector, then we will jump into how to do the same in a much simpler way using the new OpenAI Analytics connector.
Before you start, you need to:
Sign up for an OpenAI account: https://platform.openai.com/
Create a new API key
In the Qlik Cloud Management console, make sure to enable “Learning endpoints” under “Feature control” in the Settings section:
1/ Using the REST connector to call OpenAI’s completion API:
First, let’s prepare our data. After loading our customer reviews table, we need to prepare both our data and prompt so that we can send it as part of the request body to the completion API endpoint.
You can view all the details on the documentation: https://platform.openai.com/docs/api-reference/completions/create
But we basically need to first convert our data into JSON format, concatenate it with our prompt sentence, then inject it into the request body.
[Reviews]:
First 25
LOAD
review_id,
product_id,
product_name,
customer_id,
customer_name,
review_title,
review_text,
review_date,
verified_purchase,
recommend_product
FROM [lib://DataFiles/reviews-data.csv]
(txt, codepage is 28591, embedded labels, delimiter is ',', msq);
// turn data into json:
[InputField_JSON]:
Load
'Reviews: [' & Concat( '{review_date: ' & review_date & ', review_title: ' & review_title & ', review_text: ' & review_text & ', product_name: ' & product_name & ', customer_name: ' & customer_name & ', recommend_product: ' & recommend_product & ' }', ',') & ']' as json
RESIDENT Reviews;
LET vDataInput = Peek('json');
// construct prompt
LET prompt = 'You are a data analyst, you will summarize the following data into the top products based on reviews and give the names of customers who made negative comments.';
TRACE vprompt = '$(prompt)';
// construct request body for openAI request
LET requestBody = '{
"model": "text-davinci-003",
"prompt": "$(prompt) Source: $(vDataInput)",
"max_tokens": 2048,
"temperature": 0
}';
LET requestBody = Replace(requestBody, '"', Chr(34)&Chr(34));
TRACE vRequestBody = '$(requestBody)';
In the above script, we turned the Reviews data into a JSON-formatted string and stored it in the “vDataInput” variable. We then created the prompt phrase asking the model to summarize the data and return the top products based on the reviews as well as the names of customers who gave bad reviews.
Finally, we constructed the request body for the POST request that will be used in the REST API call below.
Notice that the model chosen is text-davinci-003, the prompt is the combination of our 2 variables, and max_tokens is set to a higher number to allow for a bigger response.
Next, we create the REST API connection and make sure to check “Allow With Connection”:
Finally, save the newly created connection, then click on “Select Data” under the connection name, choose Root, and insert the script to get the following (P.S.: I have edited the generated script to add the WITH CONNECTION statement, injecting the requestBody variable as the body of the request):
LIB CONNECT TO 'REST OpenAI';
RestConnectorMasterTable:
SQL SELECT
"id",
"object",
"created",
"model",
"__KEY_root",
(SELECT
"text",
"index",
"logprobs",
"finish_reason",
"__FK_choices"
FROM "choices" FK "__FK_choices"),
(SELECT
"prompt_tokens",
"completion_tokens",
"total_tokens",
"__FK_usage"
FROM "usage" FK "__FK_usage")
FROM JSON (wrap on) "root" PK "__KEY_root"
WITH CONNECTION
(BODY "$(requestBody)" );
[OpenAI Response]:
LOAD
[text] AS response_openai,
[finish_reason],
[__FK_choices] AS [__KEY_root]
RESIDENT RestConnectorMasterTable
WHERE NOT IsNull([__FK_choices]);
JOIN([OpenAI Response])
LOAD
[prompt_tokens],
[completion_tokens],
[total_tokens],
[__FK_usage] AS [__KEY_root]
RESIDENT RestConnectorMasterTable
WHERE NOT IsNull([__FK_usage]);
DROP TABLE RestConnectorMasterTable;
Once we hit “Load Data”, we can jump to a sheet to view the response:
Great! After inserting the “response_openai” field into a Text & Image object, we can see that the model returned the answers accurately.
Keep in mind that you can use this method in Qlik Enterprise as well.
2/ Using the new OpenAI Connector:
There are 2 different configurations of this connector to send data to the endpoint service:
OpenAI Completions (GPT-3) and OpenAI Chat Completions (GPT-3.5, GPT-4) – Rows:
This will send each row of data as a question to the completion API, and each response will be stored as text in a table with the same number of rows as the input.
OpenAI Completions (GPT-3) – JSONTables: This will send a request on each row, where the response is expected to be a JSON list of data. The connector will convert the JSON table into a table of data in the Qlik Data Model.
2.1/ Rows:
First, we load the Reviews table into Qlik and include a new field for the prompt, where we ask the model to suggest actions to be taken in order to improve sales based on the reviews customers have left for each product.
[Reviews]:
First 25
LOAD
review_id,
product_id,
product_name,
customer_id,
customer_name,
review_title,
review_text,
review_date,
verified_purchase,
recommend_product,
'Based on the review with title: '& review_title & ' about product ' & product_name & ' customer "' & customer_name & '" said ' & review_text & ', what action would you suggest based on this feedback to improve sales?' as prompt
FROM [lib://DataFiles/reviews-data.csv]
(txt, codepage is 28591, embedded labels, delimiter is ',', msq);
Then, we create a new OpenAI Data Connection:
Select OpenAI Completions (GPT-3) - Rows, insert your API Key, and make sure to increase the Max Tokens to a higher number to allow for a bigger response from the API.
We set the association field to “review_id” to connect the OpenAI generated data with our Reviews table.
We then use the Select Data wizard and enter “Reviews” as the Resident Table.
The Data Field is the prompt field we previously added to our table.
With that inserted, this is how the complete load script should look:
[Reviews]:
First 25
LOAD
review_id,
product_id,
product_name,
customer_id,
customer_name,
review_title,
review_text,
review_date,
verified_purchase,
recommend_product,
'Based on the review with title: '& review_title & ' about product ' & product_name & ' customer "' & customer_name & '" said ' & review_text & ', what action would you suggest based on this feedback to improve sales?' as prompt
FROM [lib://DataFiles/reviews-data.csv]
(txt, codepage is 28591, embedded labels, delimiter is ',', msq);
[openai]:
LOAD
[id],
[object],
[created],
[model],
[prompt_tokens],
[completion_tokens],
[total_tokens],
[choices.text],
[choices.index],
[choices.logprobs],
[choices.finish_reason],
[review_id]
EXTENSION endpoints.ScriptEval('{"RequestType":"endpoint", "endpoint":{"connectionname":"OpenAI"}}', Reviews{review_id,prompt});
Once the data is loaded, we can now inspect the data model and see the relation between the newly generated openai table and our Reviews table:
Finally, we create a sheet with our Reviews table fields in a table, and a Text & Image object where we put the [choices.text] field which contains the OpenAI response.
For instance:
when selecting a good Review:
when selecting a bad Review:
Alternatively, you can call the connection directly from a chart and pass in the prompt.
For instance, here we add a Text & Image object, and add the ScriptAggrStr expression to ask the model to give a general sentiment on a product regarding its price and performance.
If(GetSelectedCount(product_name)>0,
endpoints.ScriptAggrStr('{"RequestType":"endpoint", "endpoint":{"connectionname":"OpenAI","column": "choices.text"}}', 'How do customers feel about ' & product_name & ' in regards to its price and performance')
,'Please select a product to see results')
2.2/ Rows with a JSON prompt
What if, instead of sending each row of data as a prompt to the completion API, we want to send the whole Reviews dataset and get a general insight based on it?
In this case, we can reuse the data-to-JSON transformation we did in Section 1 of the post, create an inline table with one row that contains our prompt along with the JSON-formatted data, then simply use the “OpenAI Completions - Rows” config to get our response:
Here is how the load script looks:
// Open AI - Rows call - JSON
[Reviews]:
First 25
LOAD
review_id,
product_id,
product_name,
customer_id,
customer_name,
review_title,
review_text,
review_date,
verified_purchase,
recommend_product
FROM [lib://DataFiles/reviews-data.csv]
(txt, codepage is 28591, embedded labels, delimiter is ',', msq);
// Turn Reviews Table into JSON
[InputField_JSON]:
Load
'" {"Reviews" : [' & Concat( '{"review_date": "' & review_date & '", "review_title": "' & review_title & '", "review_text": "' & review_text & '", "product_name": "' & product_name & '", "customer_name": "' & customer_name & '", "recommend_product": "' & recommend_product & '" }', ',') & '] }"' as json
RESIDENT Reviews;
LET vDataInput = Peek('json');
// Build Prompt text
SET vPrompt = 'Based on the following dataset, what insights can you retrieve, what strategies could I implement to improve customer satisfaction, and which products should I focus on to improve their sales. Dataset: ';
SET vText = $(vPrompt) $(vDataInput);
TRACE vdata = $(vText);
// Load into table
[TableWithData]:
LOAD
RowNo() as RowId,
'$(vText)' as Text
AUTOGENERATE 1;
[openai]:
LOAD * EXTENSION endpoints.ScriptEval('{"RequestType":"endpoint", "endpoint":{"connectionname":"OpenAIJSONTables"}}', TableWithData{RowId,Text});
Notice, once the data is loaded, that the whole dataset is sent as part of the prompt.
Once we load the data, we add a Text & Image object and insert the [choices.text] field to view the AI generated response:
2.3/ JSON Tables
Lastly, let’s explore how the JSON Tables config of the OpenAI Connector works to return a table of data:
First, we create the connection and choose the appropriate configuration (make sure to increase the Max Tokens)
The load script is simple: we have an inline table with a Text field containing our prompt, followed by the OpenAI load statement generated through the Select Data wizard.
SourceTable:
NoConcatenate
LOAD
RowNo() as RowId,
Text
Inline
[Text
top 10 Countries by Population. Extract as JSON list];
[openai]:
LOAD * EXTENSION endpoints.ScriptEval('{"RequestType":"endpoint", "endpoint":{"connectionname":"OpenAI TABLES"}}', SourceTable{RowId,Text});
Once the data is loaded, we can check the Data model and preview the openai table.
Notice that we now have a table with the top 10 countries by population. This data can be used to generate analytics content without referring again to OpenAI.
Attached, you will find the QVF of the example app. Within the Data Load Editor, each example is broken into its own section that ends with an EXIT SCRIPT. You can drag the section to the top to only load that specific example.
More things to consider:
Using OpenAI connector inside Qlik Application Automation: https://community.qlik.com/t5/Official-Support-Articles/How-to-Getting-started-with-the-OpenAI-Connector-in-Qlik/ta-p/2077315
Useful Resources:
https://www.youtube.com/watch?v=4eNeGv69uag
https://www.youtube.com/watch?v=R9ScDzEU9DQ
https://www.youtube.com/watch?v=XCaaRenozb8&t=502s
https://www.youtube.com/watch?v=qfGWKXAAKNI
https://help.qlik.com/en-US/cloud-services/Subsystems/Hub/Content/Sense_Hub/LoadData/ac-openai-use.htm
I hope you found this post useful and that you were able to get a better understanding of how the new OpenAI Connector works to bring generative AI content to augment your existing Qlik Sense applications.
Thank you for reading.
The straight table, included in the Visualization bundle, has all the table properties that we are used to as well as many new features. These new features make it easier for developers to create a straight table and give users the ability to customize straight tables in the apps they view. The straight table is ideal when you want to provide detailed data – the raw data. While you do not want to have too many columns (ten or fewer columns are ideal for the best performance), a straight table can have many columns (dimensions and measures).
As previously mentioned, the straight table can be added to a sheet from the Qlik Visualization bundle. This means developers will need to open the advanced options to add the straight table to their sheet(s) and make edits. Once the straight table is added to a sheet, developers can add columns – either fields and master items or custom expressions. One of the new features that developers can take advantage of to build tables quickly is the ability to add more than one dimension and/or measure at once. Simply select the dimensions and measures you would like to add to the table and then click the Add button.
Once columns are added to the table, they can be dragged as needed to the desired position/order. Developers can also add alternate columns to the straight table. These columns can be dimensions and/or measures. These alternate columns will be available for users to customize the straight table if chart exploration is enabled. This is a great new feature because the user does not need edit permissions to modify the straight table. Users can add and/or remove columns based on their analysis. Being able to add columns as needed also improves performance since the straight table does not need to display all the columns all the time. Loading the straight table with the minimum columns needed will decrease the load time.
Chart exploration allows users, who are in analysis mode, to add and remove columns from the straight table they are viewing by checking or unchecking them in the Chart exploration panel (see image below). Any users viewing the sheet can customize the straight table. Users cannot see layout changes made by other users using the app, unless they opt to share the visualization or create a public bookmark with the layout saved.
Another new feature for developers is the ability to set the column width. By default, the column width is set to Auto, but developers can set it to Fit to content, Pixels or Percentage. Pagination is another new feature that can be enabled in a straight table. With pagination, a specified number of rows is displayed at once and the user can navigate through the pages using arrows or by selecting the page.
Many of the properties for the straight table are familiar but the new ones are moving the straight table to a new level. Learn everything you need to know about the straight table in Qlik Help and add one to your next app. Also check out the SaaS in 60 video for a quick video overview:
Thanks,
Jennell
In my previous blog posts (part 1, part 2), I explained how we can use enigma.js to communicate with the Qlik Associative Engine and get access to data. We also went through the concept of Generic Objects and saw how they can be used to do many things including getting raw data and using it to build visualizations.
In this post, we are going to expand on that and take a look at a real-world example where enigma.js is used to get Master Measure data that is rendered as KPIs in a web app and to monitor it for any changes so the latest values are always reflected.
This post is based on the following tutorial on qlik.dev where you can find the boilerplate code and more resources to help you get started: https://qlik.dev/embed/control-the-experience/dimensions-and-measures/get-master-measures/
You will find the full example code attached at the end of the post. I recommend you download it and open it in your favorite text editor, as I will only feature some parts of it to keep this post short.
1- First, let’s take a look at the index.html:
We include the enigma.js library.
We define the configuration options to connect to our Qlik Cloud tenant and other needed variables including:
the tenant URL
the Web Integration ID (you can learn more about how to create this here)
The App ID
and a list containing the names of the Master Measures we wish to access.
const TENANT = '<INSERT YOUR TENANT HERE (example: xxxx.us.qlikcloud.com)>';
const WEB_INTEGRATION_ID = '<INSERT WEB INTEGRATION ID HERE>';
const APP_ID = '<INSERT APP ID HERE>';
const MASTER_MEASURE_NAMES = ['# of Invoices', 'Average Sales per Invoice', 'Sales (LYTD)', 'Sales LY'];
const IDENTITY = '1234';
In the main function, we initiate the login process, get the CSRF token, and open the Enigma session. Then we get all the Master Measures via the getMeasureList function and render only the Master Measure data from the "MASTER_MEASURE_NAMES" list we previously defined.
All the functions are defined in scripts.js
(async function main() {
const isLoggedIn = await qlikLogin();
const qcsHeaders = await getQCSHeaders();
const [session, enigmaApp] = await getEnigmaSessionAndApp(qcsHeaders, APP_ID, IDENTITY);
handleDisconnect(session);
const allMasterMeasuresList = await getMeasureList(enigmaApp);
const masterMeasureValuesDct = await masterMeasureHypercubeValues(enigmaApp, allMasterMeasuresList, MASTER_MEASURE_NAMES);
})();
2- Now, let’s take a look at the different functions that make this happen:
Login and session handling:
the qlikLogin function checks whether you are logged in by fetching the /api/v1/users/me API endpoint; if not, it redirects to the interactive IdP login page.
getQCSHeaders fetches the CSRF token needed to make the WebSocket connection to the Qlik Engine.
// LOGIN
async function qlikLogin() {
const loggedIn = await fetch(`https://${TENANT}/api/v1/users/me`, {
mode: 'cors',
credentials: 'include',
headers: {
'qlik-web-integration-id': WEB_INTEGRATION_ID,
},
})
if (loggedIn.status !== 200) {
if (sessionStorage.getItem('tryQlikAuth') === null) {
sessionStorage.setItem('tryQlikAuth', 1);
window.location = `https://${TENANT}/login?qlik-web-integration-id=${WEB_INTEGRATION_ID}&returnto=${location.href}`;
return await new Promise(resolve => setTimeout(resolve, 10000)); // prevents further code execution
} else {
sessionStorage.removeItem('tryQlikAuth');
const message = 'Third-party cookies are not enabled in your browser settings and/or browser mode.';
alert(message);
throw new Error(message);
}
}
sessionStorage.removeItem('tryQlikAuth');
console.log('Logged in!');
return true;
}
// QCS HEADERS
async function getQCSHeaders() {
const response = await fetch(`https://${TENANT}/api/v1/csrf-token`, {
mode: 'cors',
credentials: 'include',
headers: {
'qlik-web-integration-id': WEB_INTEGRATION_ID
},
})
const csrfToken = new Map(response.headers).get('qlik-csrf-token');
return {
'qlik-web-integration-id': WEB_INTEGRATION_ID,
'qlik-csrf-token': csrfToken,
};
}
Enigma session connection:
we use the enigma.create() function to establish the WebSocket connection and create a new QIX session.
we use the openDoc() method of the global object to open our app, and then return it for later use.
// ENIGMA ENGINE CONNECTION
async function getEnigmaSessionAndApp(qcsHeaders, appId, identity) {
const params = Object.keys(qcsHeaders)
.map((key) => `${key}=${qcsHeaders[key]}`)
.join('&');
return (async () => {
const schema = await (await fetch('https://unpkg.com/enigma.js@2.7.0/schemas/12.612.0.json')).json();
try {
return await createEnigmaAppSession(schema, appId, identity, params);
}
catch {
const waitSecond = await new Promise(resolve => setTimeout(resolve, 1500));
try {
return await createEnigmaAppSession(schema, appId, identity, params);
}
catch (e) {
throw new Error(e);
}
}
})();
}
async function createEnigmaAppSession(schema, appId, identity, params) {
const session = enigma.create({
schema,
url: `wss://${TENANT}/app/${appId}/identity/${identity}?${params}`
});
const enigmaGlobal = await session.open();
const enigmaApp = await enigmaGlobal.openDoc(appId);
return [session, enigmaApp];
}
Get a list of all master measures in our app:
Now that we have the enigma app object, we can use the createSessionObject method to create a session object by passing the qMeasureListDef definition with qType “measure”.
// GET LIST OF ALL MASTER MEASURES
async function getMeasureList(enigmaApp) {
const measureListProp = {
"qInfo": {
"qType": "MeasureList",
"qId": ""
},
"qMeasureListDef": {
"qType": "measure",
"qData": {
"title": "/qMetaDef/title",
"tags": "/qMetaDef/tags"
}
}
}
const measureListObj = await enigmaApp.createSessionObject(measureListProp);
const measureList = await measureListObj.getLayout();
return measureList.qMeasureList.qItems;
}
Get data from our list of Master Measures:
Now, we loop through the list of all master measures returned from the function above and only grab the ones whose names match the MASTER_MEASURE_NAMES variable defined in index.html.
We then create a generic object based on the Hypercube definition that includes the matchingMeasures representing the measureObjects’ qIds.
Finally, we listen for any changes using the .on('changed', …) event listener and grab the latest layout.
// CREATE HYPERCUBE WITH MULTIPLE MASTER MEASURES (INCLUDE MATCHING NAMES ONLY)
async function masterMeasureHypercubeValues(enigmaApp, allMasterMeasuresList, desiredMasterMeasureNamesList) {
let matchingMeasures = [];
allMasterMeasuresList.forEach(measureObject => {
if (desiredMasterMeasureNamesList.includes(measureObject.qMeta.title)) {
matchingMeasures.push({
"qLibraryId": measureObject.qInfo.qId
})
}
});
if (matchingMeasures.length === 0) {
console.log('No matching master measures found! Exiting...');
return
}
const measureDef = {
"qInfo": {
"qType": 'hypercube',
},
"qHyperCubeDef": {
"qDimensions": [],
"qMeasures": matchingMeasures,
"qInitialDataFetch": [
{
"qHeight": 1,
"qWidth": matchingMeasures.length,
},
],
},
};
const measureObj = await enigmaApp.createSessionObject(measureDef);
const measureObjHypercube = (await measureObj.getLayout()).qHyperCube;
// LISTEN FOR CHANGES AND GET UPDATED LAYOUT
measureObj.on('changed', async () => {
const measureObjHypercube = (await measureObj.getLayout()).qHyperCube;
processAndPlotMeasureHypercube(measureObjHypercube);
})
processAndPlotMeasureHypercube(measureObjHypercube);
}
Render the data to the HTML as KPIs:
Lastly, we retrieve the data in hypercube.qDataPages[0].qMatrix and loop through it to construct an easy-to-manipulate array of key/value objects, which are then injected into the HTML.
// HELPER FUNCTION TO PROCESS HYPERCUBE INTO USER FRIENDLY DICTIONARY
function processAndPlotMeasureHypercube(hypercube) {
const masterMeasureValuesDict = Object.create(null);
hypercube.qMeasureInfo.forEach((measure, i) => {
masterMeasureValuesDict[measure.qFallbackTitle] = hypercube.qDataPages[0].qMatrix[0][i].qText;
});
const masterMeasureKeys = Object.keys(masterMeasureValuesDict);
masterMeasureKeys.sort();
const sortedMasterMeasureValuesDict = Object.create(null);
masterMeasureKeys.forEach(name => {
sortedMasterMeasureValuesDict[name] = masterMeasureValuesDict[name];
})
renderKpis(sortedMasterMeasureValuesDict);
}
// RENDER KPIs
function renderKpis(masterMeasureValuesDict) {
let kpiData = [];
Object.entries(masterMeasureValuesDict).forEach(([key, value]) => {
kpiData.push({
label: key,
value: Number(value).toLocaleString()
});
});
const kpisContainer = document.querySelector('#kpis');
kpisContainer.innerHTML = '';
kpiData.forEach((kpi) => {
const kpiCard = document.createElement('div');
kpiCard.classList.add('kpi-card');
const labelElement = document.createElement('div');
labelElement.classList.add('kpi-label');
labelElement.innerText = kpi.label;
const valueElement = document.createElement('div');
valueElement.classList.add('kpi-value');
valueElement.innerText = kpi.value;
kpiCard.appendChild(labelElement);
kpiCard.appendChild(valueElement);
kpisContainer.appendChild(kpiCard);
});
}
This is how the KPIs are rendered to the page:
To show how the on-change event listener works, let's simulate a change by editing the # of Invoices Master Measure in Qlik:
Looking back at the web page, the change is instantly reflected on the KPI:
That’s all, I hope you found this post helpful, do not forget to check out more tutorials on qlik.dev that cover other important use cases!
P.S.: to make it easier to run the web app, I have included a server.py file to easily serve the files via https://localhost:8000. You can run it with the command: python server.py.
Also, don’t forget to whitelist this localhost domain when generating a new Web Integration ID:
Thanks for reading!
Let's see how it is possible to control sheet and object-level access in Qlik Cloud, specifically when organizations want to show/hide specific assets in an application based on the group membership of the current user that is accessing the application.