In a previous blog post, I shared a nifty tool that helps you introduce version control into your development workflow. I invite you to read about it and follow Gitoqlok and its latest updates.
In this post, we are going to talk about a simpler, less cumbersome way to implement a basic version control system using Qlik Application Automation and the GitHub connector blocks. So, if you are someone who ends up with dozens of apps named v1, v2, v3, etc., then we're about to end that.
The setup is pretty simple: a Qlik Application Automation, a GitHub account, and a connection between the two!
Let's start by creating the Qlik Application Automation on your Qlik Cloud tenant. Head over to Add New, then create a new Blank automation.
Once created, you'll be able to configure the Start block. Here, we are offered different ways to kick off the automation and in our case, we will use the WebHook run mode in order to hook into the Qlik Cloud Services.
We would like to run the automation once an App is published in our managed space. Take a look at the image below and change it to suit your needs.
Our next block is the Get Space block; you will find it under the Qlik Cloud Services section. It allows us to find our "Git Demo Published" space and grab its ID, which is easy to do using the Do Lookup function.
Since GitHub doesn't allow space characters in repository names, we will mitigate that by creating a variable that formats the Space name into "git-demo-published". Notice in the image below that we use the Replace and Lowercase text functions.
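To make the transformation concrete, here is a minimal JavaScript sketch of what the Replace and Lowercase text functions do to the Space name (the function name is illustrative, not part of the automation):

```javascript
// Hypothetical helper mirroring the Replace and Lowercase
// text functions used in the automation:
function formatSpaceName(spaceName) {
  // replace every space with a hyphen, then lowercase the result
  return spaceName.replace(/ /g, '-').toLowerCase();
}

console.log(formatSpaceName('Git Demo Published')); // prints "git-demo-published"
```

The result is a name GitHub will accept as a repository name.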
Great, we now have our space information. Next, let's use the Get App Information block to grab the published app's id as shown below.
Our next step is to actually export the App (think of this as right-clicking on the app in the hub and exporting it). Alright, we now have a nicely formatted space name that will be used as our repository name, and we have our "published" app exported and ready to go.
Our final step is to connect to GitHub. Qlik Application Automation makes this easy: just grab the Get Repository block under the GitHub section and drop it in. You will first need to authorize Qlik, so do this under the Connection tab. In the Inputs tab, enter your username and drop in the variable we previously created as the name of the repo. Under Settings, make sure to change the On Error dropdown to "Ignore - Continue Automation and ignore errors". This lets us bypass the error GitHub returns when the repository does not yet exist.
Next, using the convenient Condition block, we will check whether the value returned from the Get Repository block is empty (i.e., there is no repository under that name). If YES, we create it using the Add Repository block.
After the condition is met (or if the repo already exists), we can push our App to GitHub. We will use the Create or Update File Content block to push the App's QVF into the main branch. Take a look at the screenshot below to review the settings. Notice that we used the App name from the previous Get App Information block and appended the .qvf extension.
We're all done with the Automation. You can save it now and head over to your App to publish it and test.
Publishing the app triggers the automation to run, and if we now head over to GitHub, we can see that a new repository named "git-demo-published" (our formatted Space name) has been created, and a new commit was added with the first version of the App as a QVF.
After publishing a second time, we notice a new commit with the second version of the App.
By using this workflow, we can guarantee that our app versions are automatically stored for us to go back to if needed, or to collaborate with other team members more efficiently.
I hope you found this blog post helpful, let me know in the comments below if you have any questions!
Ouadie
There have been new Qlik Sense features the past few months that I was excited to see. One of my favorites, which I am sure you are aware of, is the layout container. If you have not heard about it, check out Michael’s blog post here. Another favorite of mine is the copy and paste style feature which allows developers to copy and paste styles from one visualization to another. With all the styling options available in visualizations, being able to copy and paste styles saves time and eliminates the need to go into the styling properties of each visualization and set all the styling options you would like to use.
Some of the latest charts that have been upgraded to the new styling property panel are the Grid, Funnel, and Sankey, giving users the ability to customize the title, subtitle, footnote, and background of a chart, as well as to style the axes, labels, legends, and values in a visualization. Borders and shadows can be added to these visualizations as well.
The Filter Pane now has font styling for the title, subtitle, and footnote, as well as header and content font styling. Like the charts above, borders and shadows can be added. Developers can change the background color or use an image in the background, and control the color of the selected state. Here are a few examples.
With all these styling options, it is awesome that developers can now copy and paste the style from one visualization to another. If you remember the QlikView days, you know how valuable this feature is. Assume I have an app with a theme applied and have made some styling changes to the Margin KPI (see image below). If I want to apply the same styling changes to the Margin %, I have two options. One option is to open the styling properties of the Margin % KPI and make the same changes to the background color, border color and shadowing.
The second option and the easiest option is to right click on the Margin KPI while in Edit mode, click on the three dots (…) and select Copy style.
Then right click on the Margin % KPI, click on the three dots (…) and select Paste style.
That is a lot easier than styling the KPI manually, especially if there are multiple visualizations to update.
There are a few limitations to using copy and paste style: you cannot copy and paste the style of a map or a master visualization. New styling properties for visualizations are being added all the time, giving users many options to style their apps and make them stand out. The copy and paste functionality is a great addition to Qlik Sense and makes designing and styling apps easier.
Thanks,
Jennell
Watch how I am serving our #QlikNation Community members, by not only analyzing their video suggestions BUT responding to them immediately from within this Qlik Cloud Analytics App. #mikedrop
We all know and love using Insight Advisor right within the Qlik Sense hub or inside Analytics apps, helping us analyze data, create visualizations or build data models.
In this post, we will tap into the Insight Advisor API to leverage its power within a separate web application.
We will create a simple web app that allows us to ask natural language questions against our Qlik Sense app and get a recommended visualization as a response, which we will then render using nebula.js.
Prerequisites: You will need to grab the following before starting:
Qlik Cloud tenant URL
Web Integration ID (you can get this from the Management console under Web, make sure to whitelist our localhost’s origin: http://localhost:1234)
App Id
Installation: Run npm install to install the contents of package.json.
Folder structure:
src
index.html (UI)
index.js (main file)
cloud.engine.js (enigma.js library for engine session handling)
The following sections discuss the main parts of building the web app and calling the API, they are not in any particular order. I will provide the complete code for the project at the end of the post so you can see where everything fits.
1. Connecting to Qlik Cloud
First things first, we need to handle the authentication to Qlik Cloud.
Interactive login process:
async function getQCSHeaders() {
await qlikLogin(); // enforce tenant login
const response = await fetch(`${tenantUrl}/api/v1/csrf-token`, {
mode: 'cors',
credentials: 'include',
headers: {
'qlik-web-integration-id': webIntegrationId,
},
});
const csrfToken = new Map(response.headers).get('qlik-csrf-token');
return {
'qlik-web-integration-id': webIntegrationId,
'qlik-csrf-token': csrfToken,
};
}
async function qlikLogin() {
const loggedIn = await fetch(`${tenantUrl}/api/v1/users/me`, {
mode: 'cors',
credentials: 'include',
headers: {
'qlik-web-integration-id': webIntegrationId,
},
});
if (loggedIn.status !== 200) {
if (sessionStorage.getItem('tryQlikAuth') === null) {
sessionStorage.setItem('tryQlikAuth', 1);
window.location = `${tenantUrl}/login?qlik-web-integration-id=${webIntegrationId}&returnto=${location.href}`;
return await new Promise((resolve) => setTimeout(resolve, 10000)); // prevents further code execution
} else {
sessionStorage.removeItem('tryQlikAuth');
const message = 'Third-party cookies are not enabled in your browser settings and/or browser mode.';
alert(message);
throw new Error(message);
}
}
sessionStorage.removeItem('tryQlikAuth');
console.log('Logged in!');
return true;
}
2. Communicating with the Qlik Cloud Engine
(content of the cloud.engine.js file)
We need to open a session using enigma.js to communicate with the Qlik QIX engine.
import enigma from "enigma.js";
const schema = require("enigma.js/schemas/12.1306.0.json");
export default class EngineService {
constructor(engineUri) {
this.engineUri = engineUri;
}
openEngineSession(headers) {
const params = Object.keys(headers)
.map((key) => `${key}=${headers[key]}`)
.join("&");
const session = enigma.create({
schema,
url: `${this.engineUri}?${params}`,
});
session.on("traffic:sent", (data) => console.log("sent:", data));
session.on("traffic:received", (data) => console.log("received:", data));
return session;
}
async closeEngineSession(session) {
if (session) {
await session.close();
console.log("session closed");
}
}
async getOpenDoc(appId, headers) {
let session = this.openEngineSession(headers);
let global = await session.open();
let doc = await global.openDoc(appId);
return doc;
}
}
3. Including the Nebula Charts needed and rendering the recommended viz from Insight Advisor
When we eventually get back a recommendation from Insight Advisor, we will use a nebula object to embed it in our web app.
For a full list of available Nebula objects, visit: https://qlik.dev/embed/foundational-knowledge/visualizations/
We need to install “stardust”, which contains the main embed function, along with all the nebula objects we need:
npm install @nebula.js/stardust
Then install all the chart objects needed:
npm install @nebula.js/sn-scatter-plot
npm install @nebula.js/sn-bar-chart
etc...
And import them:
import { embed } from '@nebula.js/stardust';
import scatterplot from '@nebula.js/sn-scatter-plot';
etc...
Inside the rendering function, we will use stardust’s embed method to render the recommended chart type we get from Insight Advisor.
async function fetchRecommendationAndRenderChart(requestPayload) {
// fetch recommendations for text or metadata
const recommendations = await getRecommendation(requestPayload);
const engineUrl = `${tenantUrl.replace('https', 'wss')}/app/${appId}`;
// fetch rec options which has hypercubeDef
const recommendation = recommendations.data.recAnalyses[0];
// get csrf token
const qcsHeaders = await getQCSHeaders();
const engineService = new EngineService(engineUrl);
// get openDoc handle
const app = await engineService.getOpenDoc(appId, qcsHeaders);
await renderHypercubeDef(app, recommendation);
}
async function renderHypercubeDef(app, recommendation) {
const type = recommendation.chartType;
const nebbie = embed(app, {
types: [
{
name: type,
load: async () => charts[type],
},
],
});
document.querySelector('.curr-selections').innerHTML = '';
(await nebbie.selections()).mount(document.querySelector('.curr-selections'));
await nebbie.render({
type: type,
element: document.getElementById('chart'),
properties: { ...recommendation.options }
});
}
4. Calling the Insight Advisor API for recommendations
You can either call the API with a natural language question or a set of fields and master items with an optional target analysis.
Insight Advisor API endpoints that can be called:
api/v1/apps/{appId}/insight-analyses Returns information about supported analyses for the app's data model. Lists available analysis types, along with minimum and maximum number of dimensions, measures, and fields.
api/v1/apps/{appId}/insight-analyses/model Returns information about model used to make analysis recommendations. Lists all fields and master items in the logical model, along with an indication of the validity of the logical model if the default is not used.
api/v1/apps/{appId}/insight-analyses/actions/recommend Returns analysis recommendations in response to a natural language question, a set of fields and master items, or a set of fields and master items with an optional target analysis.
// Getting the recommendation
async function getRecommendation(requestPayload) {
await qlikLogin(); // make sure you are logged in to your tenant
// build url to execute recommendation call
const endpointUrl = `${tenantUrl}/api/v1/apps/${appId}/insight-analyses/actions/recommend`;
let data = {};
// generate request payload
if (requestPayload.text) {
data = JSON.stringify({
text: requestPayload.text,
});
} else if (requestPayload.fields || requestPayload.libItems) {
data = JSON.stringify({
fields: requestPayload.fields,
libItems: requestPayload.libItems,
targetAnalysis: { id: requestPayload.id },
});
}
const response = await fetch(endpointUrl, {
credentials: "include",
mode: "cors",
method: 'POST',
// "headers" must include the web integration id and CSRF token
// from getQCSHeaders(), plus 'content-type': 'application/json'
headers,
body: data,
});
const recommendationResponse = await response.json();
return recommendationResponse;
}
Results:
For the complete example that includes calling the API with fields, master items, and a target analysis type, visit the qlik.dev post: https://qlik.dev/embed/gen-ai/build-insight-advisor-web-app/
The full code for this post can be found here: https://github.com/ouadie-limouni/insight-advisor-api. Make sure to change the variables in index.js.
I hope you find this post helpful, please let me know if you have any questions in the comment section below!
Ouadie
The purpose of this blog is to provide users with a few potential use cases for the ‘Alternative States’ feature within Qlik Sense. For a full introduction and explanation of the feature, please see Ouadie Limouni’s blog on the subject here.
Concatenate is a prefix that can be used when loading a table in the script. Using concatenate explicitly states that you want the data that is currently being loaded to be appended to the end of a specified table. According to Qlik Help, the syntax looks like this:
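The Qlik Help screenshot is not reproduced here; as a rough sketch, the prefix syntax is approximately:

```qlik
Concatenate [ (tablename) ] ( loadstatement | selectstatement )
```

The optional tablename names the table the new rows should be appended to; without it, the rows are appended to the most recently created table.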
Concatenate is often used when different sets of data, often from different data sources, need to be added to the same table, such as a fact table. I often use concatenate when adding new data to a link table in my data model. This is an example of explicitly using concatenate to append rows to an existing table. If the data sets do not have the same data structure, the concatenate prefix must be used; otherwise the data sets will be stored in separate tables (and a synthetic key may be created if they share more than one field).

In the script below, the People table is loaded with two fields, Name and Title. The second data set, starting on line 8, is loaded using the concatenate prefix because the field Department does not already exist in the People table. By concatenating the table, the Department field is added to the People table and the data in the second set is appended to the end of the People table. The third table, starting on line 15, is implicitly added to the People table and does not require the concatenate prefix because it has the same three fields (Name, Title, and Department) as the new People table.
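Since the script screenshot may not render here, this is a minimal sketch of the pattern described (the names and data are illustrative, so the line positions will not match the numbers mentioned above exactly):

```qlik
People:
Load * Inline [
Name, Title
Amy, Director
Bob, Analyst
];

// explicit concatenation: Department is a new field
Concatenate (People)
Load * Inline [
Name, Title, Department
Carl, Manager, Sales
];

// implicit concatenation: same three fields, no prefix required
Load * Inline [
Name, Title, Department
Dana, Engineer, IT
];
```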
Below is the resulting table. Notice that the first two rows have null in the Department field because this data is from the initial data set that did not include the Department field.
Let’s look at another example of implicit concatenation. If a table is loaded with the same fields as an existing table, the data will be appended to the end of the existing table even though the concatenate prefix is not used. For example, in the script below on the left, the data from the table starting on line 8 is appended to the end of the People table because the second table has the same fields as the People table. This can happen even if the two data sets are loaded in different parts of the script; they do not need to be loaded sequentially. The script below on the right, using the concatenate prefix, explicitly produces the same results. I prefer to explicitly concatenate to avoid any confusion.
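As an illustrative sketch of the implicit case (the sample data is hypothetical):

```qlik
// implicit concatenation: the second load has exactly the
// same fields as People, so its rows are appended automatically
People:
Load * Inline [
Name, Title
Amy, Director
];

Load * Inline [
Name, Title
Bob, Analyst
];

// adding "Concatenate (People)" before the second Load
// produces the same result, with the intent spelled out
```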
The resulting table will look like this:
In preparing this blog, I learned another way I could load multiple files with the same data structure taking advantage of implicit concatenation. The script below shows how I use the wildcard (*) to load several files with the same data structure.
What I learned is that I can also use a loop and implicit concatenation to do the same thing. After the script below runs, the TempData table will have all the data from the CSV files. The first time through the loop, the TempData table is created and subsequent times through the loop, the data is appended to the end of the now existing TempData table.
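A rough sketch of both approaches follows; the library connection, folder path, and file mask are hypothetical, and only one of the two variants would be run at a time:

```qlik
// wildcard load: all matching files land in one table
TempData:
Load * From [lib://DataFiles/Sales_*.csv]
(txt, utf8, embedded labels, delimiter is ',');

// equivalent loop: the first pass creates TempData, and each
// later pass implicitly concatenates because the fields match
For Each vFile in FileList('lib://DataFiles/Sales_*.csv')
  TempData:
  Load * From [$(vFile)] (txt, utf8, embedded labels, delimiter is ',');
Next vFile
```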
After 14 years with Qlik, I am still picking up new things. That is what makes my job so much fun!
Thanks,
Jennell
When we were considering how to best support our customers with managing their content in Qlik Cloud, we took a deep look at what we had and what it could become. We quickly realized that if we separated the needs customers have around security, organization, and process into distinct solutions, we could make the system more flexible and better suited to our customers.