In order to go beyond the default charts and custom object bundles that ship with Qlik Sense, you need to develop custom visualization extensions, which allow you to extend the capabilities of Qlik Sense using standard web technologies. In this post, we will cover how to leverage Qlik's open source solution Nebula.js, a collection of JavaScript libraries and APIs that makes this task easy! Once the visualization is developed using Nebula.js, you can bring it into Qlik Sense to be used in your apps.

### Pre-requisites and project creation

First things first, let's make sure you have all you need to get started:

- Access to a terminal (we will need this to run Nebula.js CLI commands)
- Node.js installed on your computer (v10 or higher)
- A text editor (VS Code or similar)
- A Web Integration ID (you can get this from the Management Console under "Web" on your Qlik Cloud tenant; make sure to put http://localhost:8000 in the Allowed Origins)

Once you have the pre-requisites covered, fire up your terminal and run the following command:

```sh
npx @nebula.js/cli create streamchart --picasso minimal
```

This command uses the Nebula.js CLI, a handy command line program that lets us bootstrap our project easily. Notice that we added the `--picasso` flag after the project name, with the option `minimal`. This will add the necessary files to our project for Picasso.js, Qlik's open source charting library that we will use to build the chart. Speaking of the chart, we are going to build a Stream Chart in this post to visualize yearly sales by month.

Once the CLI command has finished running and the project has been created successfully, change into the project directory with `cd streamchart`, run it with `yarn run start`, and then open the folder in your text editor.

Below is the folder structure of the project. All the work is done under `src`. Notice that the package.json file has the scripts we will be using to develop and build the project, as well as a script to generate the Qlik Sense ready files for the extension. It also lists all the libraries needed in our project, including picasso.js and @nebula.js/stardust.

/src
- index.js - Main entry point of the visualization
- object-properties.js - Object properties stored in the app
- pic-definition.js - Picasso components definition
- data.js - Data configuration

### The Development Server

After running the `yarn run start` command, the project will open in your browser at localhost:8000. This is the local development server that we will use to test the visualization extension as we develop it.

The first step is to connect to the Qlik Engine. To do this, you need to plug in the WebSocket URL in the following format:

```
wss://yourtenant.us.qlikcloud.com?qlik-web-integration-id=yourwebintegrationid
```

After we enter the WebSocket URL, we're prompted to pick the Qlik Sense app that contains our data.
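As a quick aside before we continue: src/object-properties.js holds the initial properties persisted with the object, including the qHyperCubeDef. Here is a minimal sketch of its typical shape; the exact file generated by the CLI template may differ:

```js
// src/object-properties.js: a minimal sketch; the generated template may differ.
const properties = {
  qHyperCubeDef: {
    qDimensions: [],   // filled in as you pick dimensions in the dev server
    qMeasures: [],     // filled in as you pick a measure
    qInitialDataFetch: [{ qLeft: 0, qTop: 0, qWidth: 3, qHeight: 1000 }],
  },
  showTitles: true,
};

export default properties;
```

For our stream chart we will end up with two dimensions and one measure, which is why an initial fetch width of three columns is a reasonable starting point.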
Once the app is picked, we move to the main edit screen of the local development server. The center section is where our visualization is rendered, and the right sidebar allows us to build up our qHyperCubeDef by selecting dimensions and measures.

Remember: edits to src/index.js that affect the output will trigger an auto refresh of the visualization, allowing you to see live changes in the center section.

### Configuring the Data Structure

In previous posts (PT 1 and PT 2), I covered the details of building HyperCubes and configuring the qHyperCubeDef; if you're new to the concept, don't hesitate to go through those posts first.

For the purposes of our Stream Chart visualization, we will rely on the automatic generation of the qHyperCubeDef once we choose our dimensions and measure in the right sidebar. If you want to make further changes to it, click the gear icon at the top left of the center section to open the qHyperCubeDef edit popup.

### Adding the Picasso.js components definition

So far, nothing is rendered on the dev server UI. So let's go ahead and configure the Picasso.js definition to create the necessary components. Under pic-definition.js, enter the following code:

```js
export default function ({
  layout, // eslint-disable-line no-unused-vars
  context, // eslint-disable-line no-unused-vars
}) {
  return {
    collections: [{
      key: 'stacked',
      data: {
        extract: {
          field: 'qDimensionInfo/0',
          props: {
            line: { field: 'qDimensionInfo/1' },
            end: { field: 'qMeasureInfo/0' },
          },
        },
        stack: {
          stackKey: (d) => d.value,
          value: (d) => d.end.value,
          offset: 'silhouette',
          order: 'insideout',
        },
      },
    }],
    scales: {
      y: {
        data: {
          collection: {
            key: 'stacked',
          },
        },
        invert: false,
        expand: 0.5,
      },
      t: {
        data: {
          extract: {
            field: 'qDimensionInfo/0',
          },
        },
        padding: 0.5,
      },
      l: {
        data: {
          extract: {
            field: 'qMeasureInfo/0',
          },
        },
      },
      color: {
        data: {
          extract: {
            field: 'qDimensionInfo/1',
          },
        },
        type: 'color',
      },
    },
    components: [
      {
        type: 'axis',
        dock: 'bottom',
        scale: 't',
      },
      {
        type: 'axis',
        dock: 'left',
        scale: 'l',
      },
      {
        key: 'lines',
        type: 'line',
        data: {
          collection: 'stacked',
        },
        settings: {
          coordinates: {
            major: { scale: 't' },
            minor0: { scale: 'y', ref: 'start' },
            minor: { scale: 'y', ref: 'end' },
            layerId: { ref: 'line' },
          },
          layers: {
            curve: 'monotone',
            line: {
              show: false,
            },
            area: {
              fill: { scale: 'color', ref: 'line' },
              opacity: 1,
            },
          },
        },
      },
      {
        type: 'legend-cat',
        scale: 'color',
        key: 'legend',
        dock: 'top',
        settings: {
          title: {
            show: false,
          },
          layout: {
            size: 2,
          },
        },
      },
    ],
  };
}
```

Notice that the definition contains:

- Collections: to stack our data
- Scales
- Components:
  - Bottom axis
  - Left axis
  - Lines component
  - Legend

For more information about how to build charts with Picasso.js, visit https://qlik.dev/libraries-and-tools/picassojs and check out some of the previous blog posts.

Go back to the dev server UI in the browser, and you should see the chart displayed.
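For context, here is roughly how src/index.js feeds the hypercube data from the layout into this Picasso definition. This is a simplified sketch of the standard picasso-plugin-q wiring, not the exact file the template generates (which also handles things like selections):

```js
import { useElement, useLayout, useEffect } from '@nebula.js/stardust';
import picasso from 'picasso.js';
import picassoQ from 'picasso-plugin-q';
import properties from './object-properties';
import data from './data';
import picDefinition from './pic-definition';

// Register the q plugin so Picasso understands hypercube data (the 'q' dataset type).
picasso.use(picassoQ);

export default function supernova() {
  return {
    qae: { properties, data },
    component() {
      const element = useElement();
      const layout = useLayout();
      // Re-render whenever the layout changes (e.g. new dimensions, selections).
      useEffect(() => {
        if (!layout.qHyperCube) return;
        picasso.chart({
          element,
          data: [{ type: 'q', key: 'qHyperCube', data: layout.qHyperCube }],
          settings: picDefinition({ layout }),
        });
      }, [layout]);
    },
  };
}
```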
### Package the Visualization Extension and upload to Qlik Sense

So far, we've been working in our local dev environment, so we need to generate the necessary files to build the project. In your terminal, run `yarn run build`. This will generate a `dist` folder containing our extension's bundled files, which you can use to distribute the extension as an npm package.

However, in order to use this visualization extension in Qlik Sense, we need additional files. Run `yarn run sense`, which will create a new folder called `streamchart-ext`. Make sure to zip this folder to get it ready to be uploaded to your Qlik Cloud tenant.

And there you go; you now have a visualization extension that you can use in your Qlik Sense apps!

The full code is on GitHub: https://github.com/ouadie-limouni/qlik-nebula-stream-chart-viz-extension
Derive value from analytical data at scale while the data landscape, use cases, and responses constantly change.
Data Mesh creates a foundation for deriving value from analytical data at scale while the data landscape, use cases, and responses are constantly changing. This is achieved by adhering to four underpinning principles.
The first is domain-oriented, decentralized data ownership and architecture, which allows the autonomous nodes on the mesh to grow.
Next, data-as-a-product is treated as a unit of architecture that is built, deployed, and maintained.
Third, a self-service data infrastructure enables domain teams to autonomously create and consume data products.
Last, federated governance and interoperability standards aggregate and correlate independent data products within the mesh.
These principles combine to form a decentralized and distributed data mesh where domain data product owners leverage common data infrastructure via self-service, to develop pipelines that share data in a governed and open manner.
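Purely as an illustration of these principles (every name and field below is hypothetical, not from any particular implementation), a domain team might describe its data product with a small self-describing descriptor that the self-service platform and federated governance layer can work with:

```js
// Hypothetical data product descriptor: all field names are illustrative.
const ordersDataProduct = {
  domain: 'sales',                        // domain-oriented ownership
  name: 'orders',
  owner: 'sales-data-team@example.com',   // accountable domain team
  outputPorts: [
    { format: 'parquet', location: 's3://sales/orders/' }, // self-service consumption
  ],
  schemaVersion: '1.2.0',                 // interoperability standard
  sla: { freshnessMinutes: 15 },          // governed quality of service
};

export default ordersDataProduct;
```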
Invest in less expensive hardware and eliminate the redundancy of the multi-layered Lambda architecture by replaying data, rather than maintaining two code bases (batch and speed layers), to process unique events continuously in real time while meeting a standard quality of service.
The Kappa architecture solves the redundant part of the Lambda architecture. It is designed around the idea of replaying data: Kappa avoids maintaining two different code bases for the batch and speed layers. The key idea is to handle both real-time data processing and continuous data reprocessing with a single stream processing engine, avoiding a multi-layered Lambda architecture while meeting the standard quality of service. The Kappa architecture can run on less expensive hardware to process unique events occurring continuously in real time.
The Lambda architecture is used to reliably update the data lake as well as efficiently train machine learning models to predict upcoming events accurately. The architecture comprises a Batch Layer, Speed Layer (also known as the Stream layer), and Serving Layer. The batch layer operates on the complete data and thus allows the system to produce the most accurate results. However, the results come at the cost of high latency due to high computation time. The speed layer generates results in a low latency, near real-time fashion. The speed layer is used to compute the real-time views to complement the batch views. The Serving layer enables various queries of the results sent from the batch and speed layers.
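To make Kappa's single-code-base idea concrete, here is an illustrative JavaScript sketch. It is not tied to any particular streaming engine; the event log and live stream below are hypothetical stand-ins:

```js
import { EventEmitter } from 'node:events';

// Hypothetical stand-ins: an immutable historical event log and a live feed.
const eventLog = [
  { key: 'clicks', value: 3 },
  { key: 'clicks', value: 2 },
];
const liveStream = new EventEmitter();

// Kappa's key idea: one processing function serves both reprocessing
// (replaying the log) and live events; there are no separate batch and
// speed code bases as in Lambda.
function processEvent(state, event) {
  const current = state.totals.get(event.key) ?? 0;
  state.totals.set(event.key, current + event.value);
  return state;
}

// Reprocessing: replay the immutable log through the same function.
let state = eventLog.reduce(processEvent, { totals: new Map() });

// Live processing: feed new events through the very same function.
liveStream.on('data', (event) => {
  state = processEvent(state, event);
});

liveStream.emit('data', { key: 'clicks', value: 1 });
console.log(state.totals.get('clicks')); // 6
```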
Unify both data lake and data warehouse automation in one user interface to plan and execute either with ease.
The separation of storage and compute allows for each to be scaled up or down independently, blurring the lines between traditional data warehouses and data lakes. The separation also enables companies to architect a multi-modal lakehouse platform, which provides a single source of truth for all analytic initiatives – AI, BI, machine learning, streaming analytics, data science, and more. Qlik Compose facilitates both data lake and data warehouse automation in one unified user interface, enabling you to plan and execute either project with ease.
Realize faster return on data lake investments while confidently meeting growing demands for analytics-ready data sets in real time.
Qlik Data Integration (QDI) for Data Lake Creation helps enterprises realize a faster return on their data lake investment by continuously providing accurate, timely, and trusted transactional data sets for business analytics. Unlike other solutions, QDI for Data Lakes automates the entire data pipeline from real-time data ingestion to the creation and provisioning of analytics-ready datasets, eliminating the need for manual scripting. Data engineers can now meet growing demands for analytics-ready data sets in real-time with confidence.
Meet or exceed the demands for analytics-ready data marts that enable data-driven insights at the speed of change.
Qlik Data Integration (QDI) delivers on the promise of agile data warehousing with automation that allows users to quickly design, build, deploy, manage and catalog purpose-built data warehouses (especially cloud-based) faster than traditional solutions. Consequently, data engineers can meet or exceed the demands for analytics-ready data marts that enable data-driven insights at the speed of change.
Today I am going to blog about inner and outer set expressions. If you have ever used set analysis in your measure expressions, then you will like this new capability. Set analysis is a very powerful feature, often used to define a scope that may differ from the scope defined by making selections in an app. For example, in a set expression such as `Sum({<[Product Line]={'Camping Equipment'}>} Sales)`, the sales are summed where the product line is camping equipment. This is considered an inner set expression and is probably familiar to those who use set analysis: the set expression sits inside the aggregation function, which is Sum in this case.

If this expression were written as an outer set expression, the set expression would sit outside the aggregation function: `{<[Product Line]={'Camping Equipment'}>} Sum(Sales)`. When using an outer set expression, it must come before the scope. In this example, both the inner and outer expressions return the same result.

Where the outer set expression becomes helpful is when you have more than one aggregation function in your expression. With inner set expressions, an expression containing three Sum functions has to repeat the same set analysis in each one to set the scope to camping equipment. Using an outer set expression, the set expression sits outside the expression, at the beginning of the scope, and `[Product Line]={'Camping Equipment'}` is applied to all the aggregation functions. This is a cleaner way to write the expression and ensures that the scope is applied to every aggregation function.

The outer set expression can also be used with a master measure. Assume I have master measures named Sales and Margin %: I can prefix either one with an outer set expression, and the scope applies to the whole master measure.

An outer set expression like the ones above is applied to the entire expression. If the set expression is placed inside brackets, then it applies only to the aggregation functions within those brackets. For example, a set expression written inside parentheses applies only to the aggregation functions within the parentheses, and not to an aggregation function that sits outside of them. Written this way, the resulting value will differ from the same expression without any brackets or parentheses. The examples below illustrate each of these forms.
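To make these forms concrete, here is an illustrative set of expressions. The field, measure, and master measure names are examples only, not reproductions of the originals:

```
// Inner: the set expression is repeated inside every aggregation
Sum({<[Product Line]={'Camping Equipment'}>} Sales)
  + Sum({<[Product Line]={'Camping Equipment'}>} Budget)
  + Sum({<[Product Line]={'Camping Equipment'}>} Cost)

// Outer: written once, before the scope, and applied to all three aggregations
{<[Product Line]={'Camping Equipment'}>} Sum(Sales) + Sum(Budget) + Sum(Cost)

// Outer with a master measure: the scope applies to the whole measure
{<[Product Line]={'Camping Equipment'}>} [Margin %]

// Parenthesized: applies only to Sum(Sales) and Sum(Budget), not to Sum(Cost)
({<[Product Line]={'Camping Equipment'}>} Sum(Sales) + Sum(Budget)) + Sum(Cost)
```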
A few things to remember about set expressions: inner set expressions have precedence over outer set expressions, and if the inner set expression has a set identifier, it replaces the context; otherwise, the inner set expression is merged with the outer set expression. Check out Qlik Help for more examples and rules around inner and outer set expressions, and try it for yourself in your next app.

Thanks,
Jennell
In Part 1 of this blog post, we went through Generic Objects, learned about the definitions of the ListObject and HyperCube structures, and explored some of the settings they offer for interacting with data when communicating with the Qlik Associative Engine through Enigma.js. In this second part, we will see actual implementations of ListObjects and HyperCubes and learn how they can be used as part of your next web application to create filters and charts.
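All the snippets below assume we already have an open Enigma.js session and an app handle, `enigmaApp`. Here is a rough sketch of that setup; the tenant URL, web integration ID, and app ID are placeholders, the schema version is just a common choice, and authentication specifics for your tenant are omitted:

```js
import enigma from 'enigma.js';
import schema from 'enigma.js/schemas/12.170.2.json';

// Placeholders: fill in your tenant, web integration ID, and app ID.
const appId = '<app-id>';
const url = `wss://yourtenant.us.qlikcloud.com/app/${appId}?qlik-web-integration-id=<web-integration-id>`;

const session = enigma.create({
  schema,
  url,
  createSocket: (wsUrl) => new WebSocket(wsUrl),
});

const global = await session.open();            // Global API
const enigmaApp = await global.openDoc(appId);  // Doc (app) API used below
```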
"qInfo": {
"qType": "filter"
},
"qListObjectDef": {
"qDef": {
"qFieldDefs": ["Region"]
},
"qInitialDataFetch": [
{
"qLeft": 0,
"qWidth": 1,
"qTop": 0,
"qHeight": 10
}
]
}
}
```

After connecting to Enigma and getting our app object, we create a session object and pass it the ListObject definition above. A session object is a generic object that is only active for the current session and is not persisted in the model.

```js
const regionObj = await enigmaApp.createSessionObject(regionListDef);
const regionLayout = await regionObj.getLayout();
renderFilter(regionListElem, regionLayout, regionObj);
```

After getting the ListObject layout, we call the function below, which retrieves the data we want to display in our filter via `layout.qListObject.qDataPages[0].qMatrix`. The qMatrix is an array of arrays, each corresponding to one row of data. The JSON object we get by looping through the qMatrix includes the following properties:

- qText: a text representation of the cell value
- qNum: a numeric representation of the cell value
- qElemNumber: a rank number of the cell value
- qState: the selection state of the field value

We use both qText and qState in our front end: first to display the value name, and then to add a CSS class that allows us to differentiate between states:

- S for selected
- X for excluded
- O for possible

We also listen to click events on the list and call `genericObject.selectListObjectValues("/qListObjectDef", [e[0].qElemNumber], true)`, which is a Generic Object method. It takes the path that describes where our ListObject is defined in our Generic Object as the first parameter, and the element number that we want to select as the second parameter. The third argument is the toggle mode (whether a selection is added to an already existing set of selections or overrides it).

```js
const renderFilter = (element, layout, genericObject) => {
  var titleDiv = element.querySelector(".filter-title");
  var ul = element.querySelector("ul");
  ul.innerHTML = "";
  // Get the data from the List Object
  var data = layout.qListObject.qDataPages[0].qMatrix;
  // Loop through the data and create the filter list
  data.forEach(function(e) {
    var li = document.createElement("li");
    li.innerHTML = e[0].qText;
    li.setAttribute("class", e[0].qState);
    // Click handler to select this field value
    li.addEventListener("click", function(evt) {
      genericObject.selectListObjectValues("/qListObjectDef", [e[0].qElemNumber], true);
    });
    ul.appendChild(li);
  });
};
```
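One thing the snippet above does not show is how the filter stays in sync after a selection. As a sketch of one way to wire this up (this is not part of the original code): Enigma.js generated APIs emit a changed event when an object's state is invalidated, so we can re-fetch the layout and re-render:

```js
// Re-render the filter whenever the engine reports that this object's
// state has changed (for example, after a selection is made).
regionObj.on('changed', async () => {
  const updatedLayout = await regionObj.getLayout();
  renderFilter(regionListElem, updatedLayout, regionObj);
});
```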
"qInfo": {
"qType": "chart"
},
"qHyperCubeDef": {
"qDimensions": [
{
"qDef": {
"qFieldDefs": ["Region"],
"qSortCriterias": [
{
"qSortByNumeric": 1
}
]
},
"qNullSuppression": true
}
],
"qMeasures": [
{
"qDef":{
"qDef": "=Sum([Sales Quantity]*[Sales Price])"
}
}
],
"qInitialDataFetch": [
{
"qLeft": 0,
"qWidth": 2,
"qTop": 0,
"qHeight": 1000
}
]
}
}
```

Similar to what we did for the ListObject, we create a Generic Object (session object) and then get its layout. Next we call the renderChart function to create the Pie chart visualization.

```js
const chartObj = await enigmaApp.createSessionObject(chartDef);
const chartLayout = await chartObj.getLayout();
renderChart(chartLayout);
```

Our function is simple: we start by accessing the qMatrix array, which contains all of our rows, each of which in turn contains a group of cells. We refine this array using the map function to grab only a pair of values per row: the Region (via the qText property of the first cell) and the Revenue (via the qNum property of the second cell). You can then render the chart using your visualization tool of choice. In this case, we use C3.js.

```js
const renderChart = (layout) => {
  var qMatrix = layout.qHyperCube.qDataPages[0].qMatrix;
  // Map through qMatrix to format it as an array of arrays: [[region1, revenue1], [region2, revenue2], ...]
  const columnsArray = qMatrix.map((arr) => [arr[0].qText, arr[1].qNum]);
  c3.generate({
    bindto: "#chart", // C3 expects the lowercase "bindto" option
    data: {
      columns: columnsArray,
      type: 'donut'
    },
    donut: {
      title: "Revenue by Region"
    }
  });
};
```

I hope this post helps you further understand the notion of Generic Objects in the form of ListObjects and HyperCubes. Let me know how you are leveraging these concepts to build your custom solutions!

The full code can be found on my GitHub repo.