"What you will see me show in this video is how Qlik was able to address this challenge and leave room for flexibility. It is important to her that the formatting for each row adapt to the changing whims of her colleagues and that’s what I have shown here."
In an enterprise environment, IT typically does not grant access to the folders where the logs are stored, so developers have to request the logs from IT, which can be terribly slow and frustrating.
In the past few years, Qlik Sense has introduced a solid range of advanced analytics capabilities that complement the Data Analytics platform. These include techniques such as Machine Learning and Natural Language Processing that help analysts and scientists explore data more effectively, get insights into hidden patterns, and take the necessary actions. Bringing these methods together with the Data Analytics platform is often termed 'Augmented Intelligence'. For a more detailed description of the whats and whys, please refer to this link.

Problem: Consider the following scenario. An analyst needs to explore geographical data for a variety of neighborhoods in Toronto to help the city's crime department set up hotspots to monitor the neighborhoods and analyze criminal activity effectively. Segregating the neighborhoods into five clusters based on some measure of similarity between them would be the first step.

Qlik Sense client approach: A solution from the Qlik Sense client perspective would be to use the KMeans2D chart function, which applies k-Means clustering internally and calculates the cluster_id for each of the neighborhoods. The results can then be visualized as a Scatter plot with latitude on the X-axis, longitude on the Y-axis, and bubbles representing each neighborhood's ID. Here's a snippet that shows the data configuration. To visually distinguish the five clusters, the chart can be 'colored by dimension' based on the cluster_ids. Please note that we use color by dimension (with a dimension expression), not color by expression. Here's where we define our dimension expression.

Expression:
=pick(aggr(KMeans2D(vDistClusters,only(Lat),only(Long)),FID)+1, 'Cluster 1', 'Cluster 2', 'Cluster 3', 'Cluster 4', 'Cluster 5')

Embedded Analytics approach: Now, consider that you are a full-stack Qlik developer and you leverage the open-sourced Nebula.js library to build your analytics portal.
What would be an easy way for you to apply k-Means clustering to this dataset without relying on any third-party libraries?

Solution: Technically, there are two ways to achieve this using Nebula.js:
1. Develop the visualization in the Qlik Sense client and embed it in your page.
2. Develop a visualization on the fly and use the KMeans2D chart function.

The first option is fairly simple and straightforward. You can create your Scatter plot in the Qlik Sense client and just call the <object_id> using Nebula's render() function like this:

n.render({
element,
id: '<ObjectID>',
});

In this blog post, we will keep our focus on option 2, i.e. we will develop a visualization on the fly and then apply the clustering algorithm using the chart function to achieve our goal.

Implementation: Before we start developing the visualization, let's recap one of the amazing things about Nebula.js. It is a collection of JavaScript libraries, visualizations, and CLIs that helps developers build and integrate visualizations on top of Qlik's Associative Engine. Essentially, it serves as a wrapper on top of the Engine, allowing us to use expressions just as we can inside the Qlik Sense client. Since Nebula sits on top of the Engine, we also have direct access to the data (dimensions and measures) from the Qlik Sense app that is connected to our Nebula app. This makes building an existing or custom visualization with Nebula easier and quicker from a data-structure perspective. It also goes without saying that Qlik Sense 'associations' work seamlessly when using Nebula.

Now, let's quickly build an on-the-fly scatter plot and use the KMeans2D chart function to color each point/bubble on our plot.

Step 1: Define the qAttributeDimensions for the 'dimension expression'.

qAttributeDimensions: [
{
qDef:
"=pick(aggr(KMeans2D(vDistClusters, only(Lat), only(Long)), FID)+1, 'Cluster 1', 'Cluster 2', 'Cluster 3', 'Cluster 4', 'Cluster 5')",
qAttribute: true,
id: "colorByAlternative",
label: "Cluster id"
}
]
}
],

Since we would like to use color by dimension as the coloring technique in our scatter plot, we have to explicitly specify it using id: "colorByAlternative" in the qDef, as shown in the code above. The alternative would be id: "colorByExpression" if we wanted to color by expression. Also, we use a dimension expression (not just a dimension field), so the expression has to be defined inside the qDef property of qAttributeDimensions (as shown in the code).

Step 2: Define the properties.

Next, we define the required properties in the root of the object. An important property here is color. We first tell Nebula not to color our chart automatically (code below) so we can override it with our expression's colors. We also have to set the right color mode inside color; in our case, it is "byDimension". The other crucial property within color is byDimDef, which configures the settings for coloring the chart by dimension. byDimDef consists of three properties:
- type - either 'expression' or 'libraryItem'
- key - the libraryId if using a 'libraryItem', or the dimension expression if using an 'expression'
- label - the label displayed for the coloring (in the legend); can be a string or an expression

The code for defining the properties is below.

properties: {
title: "k-Means clustering",
color: {
auto: false,
mode: "byDimension",
byDimDef: {
type: "expression",
key:
"=pick(aggr(KMeans2D(vDistClusters, only(Lat), only(Long)), FID)+1, 'Cluster 1', 'Cluster 2', 'Cluster 3', 'Cluster 4', 'Cluster 5')",
label: "Cluster id"
}
}
}

That is pretty much it! To elucidate the clustering method, we also embed a scatter plot with non-clustered data in our Nebula app. Let us take a look at the two plots. Using the KMeans2D function, the neighborhoods have now been segregated into five different clusters based on their similarity, and coloring by cluster helps us interpret the five groups distinctly.

Before we end this post, let's take a look at the clustering function once more and understand its parameters:

KMeans2D(num_clusters, coordinate_1, coordinate_2 [, norm])
- num_clusters - the number of clusters we would like to have. Typically this value is chosen using the elbow curve or silhouette analysis methods. Read more here.
- coordinate_1, coordinate_2 - the columns used by the clustering algorithm. These are both aggregations.
- norm - an optional normalization method applied to the data before k-Means clustering. Possible values are 0/'none' (no normalization), 1/'zscore' (z-score normalization), and 2/'minmax' (min-max normalization).

In our Nebula application, we play around with the num_clusters parameter to understand differences such as neighborhood overlaps. We embed three action buttons in our dashboard, and the final result can be seen below. The complete project can be found on either my GitHub repo or this Glitch. Let me know if you have any interesting ideas for applying clustering or other advanced analytical methods using Nebula.js.

~Dipankar
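The norm parameter matters most when the two coordinates live on very different scales, since k-Means uses Euclidean distance. As a rough plain-JavaScript sketch of what the two normalization modes do to a column of values (an illustration of the concept only; the actual normalization happens inside Qlik's engine):

```javascript
// Illustrative versions of the two normalization modes KMeans2D can apply
// before clustering. Not Qlik's implementation - just the underlying idea.
function zscore(values) {
  // Center on the mean and divide by the (population) standard deviation.
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const sd = Math.sqrt(
    values.reduce((a, v) => a + (v - mean) ** 2, 0) / values.length
  );
  return values.map((v) => (v - mean) / sd);
}

function minmax(values) {
  // Rescale the column into the [0, 1] range.
  const min = Math.min(...values);
  const max = Math.max(...values);
  return values.map((v) => (v - min) / (max - min));
}
```

With min-max, for example, both latitude and longitude end up in [0, 1], so neither column dominates the distance calculation.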
In a new chapter of our collaboration with Fortune Magazine, I'm proud to introduce the 2021 Global 500. In this year's Global 500 app we aim to shed some light on the impact of COVID-19 on the list. We analyzed how two of the list's main indicators evolved over the years: in 2021, revenue saw a 5% decline while profits plummeted some 20% to $1.65 trillion, the biggest decline since a 48% plunge in 2009. But not all regions performed the same way. To visualize the impact of the economic crisis that followed the COVID-19 pandemic, we plotted the contribution of different regions to the Global 500 list using two measures: number of companies and revenue.

The chart above shows how each region has performed since the first Global 500 list was published back in 1990. Looking at each evolution line, the meteoric rise of China over the years is very impressive, while Europe's once-dominant position on the list is in decline. In 2021, European countries were heavily impacted by the economic shutdown: Europe's representation in the list dropped in number of companies, and their reported revenues decreased as well. Other regions like China continued to thrive despite the world situation, and the US position remained almost unchanged compared with the previous year, 2020.

To focus even more on the 2020-2021 change, readers can check the next section, where a world map holds indicators that help us visualize the change. The image below shows the 2020-2021 change measured in number of companies in the Global 500 list. Again, Europe dominates the losses, while both the US and especially China increased their number of companies on the list.

We put an end to the app by analyzing how each sector behaved last year. The chart below shows the size of the entire economy in 2020 and in 2021 (profits). The entire Global 500 list's profits dropped by 20% year over year, but not every sector was impacted equally.
Technology, Telecommunications, and Retailing increased their profits, while the Energy sector took a massive hit, losing 1.7 trillion USD in profits and disappearing from the top sectors measured by profits. We really hope you enjoy the experience. Don't forget to check out the live app here: https://qlik.fortune.com/global500 Enjoy it! 😊
Picasso.js has been around for a while since its first release in 2018. It is an open-source charting library designed for building custom, interactive, component-based, powerful visualizations. Now, what separates Picasso from other available charting libraries? Apart from the fact that Picasso.js is open-sourced, here is my take on a few other factors.

Component-based visuals: A visualization usually comprises various building blocks or components that form the overall chart. For example, a Scatter plot consists of two axes with one variable on each axis. The data is displayed as points that show the position on the two axes (horizontal and vertical). A third variable can also be displayed on the points if it is encoded using color, shape, or size. What if, instead of an individual point, you wanted to draw a pie chart that presents some more information? Something like this: as we can see in the right-side image, a correlation between Sales and Profit is projected, but instead of each point we have individual pie charts that show the category-wise sales made in each city. This was developed using D3.js, a library widely used for raw visualizations with SVGs. Picasso.js provides a similar level of flexibility when it comes to building customized charts. Due to its component-based nature, you can practically build anything by combining various blocks of components.

Interactive visuals: Combining brushing and linking is key to interactivity between the various visual components used in a dashboard or web application. Typically this means that if the representation changes in one visualization, it will impact the others as well if they deal with the same data (analogous to Associations in the Qlik Sense world). This is crucial in modern-day visual analytics solutions and helps overcome the shortcomings of singular representations. Picasso.js provides these capabilities out of the box.
Here is an example of how you could brush and link two charts built using Picasso:

const scatter = picasso.chart(/* */);
const bars = picasso.chart(/* */);
scatter.brush('select').link(bars.brush('highlight'));

Extensibility: What if you wanted to create visualizations with a set of custom themes that align with your organization? What if you needed to bind events using a third-party plugin like Hammer.js? Most importantly for Qlik Sense users, how do you bring the power of associations to these custom charts? Picasso.js allows users to harness these capabilities easily.

D3-style programming: Picasso.js leverages D3.js for a lot of its features, which allows the D3 community to reuse and easily blend D3-based charts into the Picasso world. Having come from a D3.js background, I realized how comfortable it was for me to scale up when developing charts using Picasso, since the style of programming (specifically, building components) is very similar. If you would like to read more about the various concepts and components of Picasso, please follow the official documentation.

Now that we know a bit more about Picasso.js, let us try to build a custom chart and integrate it with Qlik Sense's ecosystem, i.e. use selections on a Qlik Sense chart and apply them to the Picasso chart as well.

Prerequisite: picasso-plugin-q. In order to interact with and use the data from Qlik's engine in a Picasso-based chart, you will need the q plugin. This plugin registers a q dataset type, making data extraction from a hypercube easier.

Step 1: Install and import the required libraries for Picasso and the q plugin, and register the plugin:

npm install picasso.js picasso-plugin-q

import picasso from 'picasso.js';
import picassoQ from 'picasso-plugin-q';
picasso.use(picassoQ); // register the q plugin

Step 2: Create the hypercube and access data from QIX:

const properties = {
qInfo: {
qType: "my-stacked-hypercube"
},
qHyperCubeDef: {
qDimensions: [
{
qDef: { qFieldDefs: ["Sport"] },
}
],
qMeasures: [
{ qDef: { qDef: "Avg(Height)" } },
{ qDef: { qDef: "Avg(Weight)" } }
],
qInitialDataFetch: [{ qTop: 0, qLeft: 0, qWidth: 100, qHeight: 100 }]
}
};

Our idea is to build a scatter plot to understand the height-weight correlation of athletes from an Olympics dataset. We will use the dimension 'Sport' to color the points. Therefore, we retrieve the dimension and two measures (Height, Weight) from the hypercube.

Step 3: Get the layout and handle updates. Once we create the hypercube, we can use the getLayout() method to extract the properties and use them to build and update our chart. For this purpose, we create two functions and pass the layout accordingly, like below:

const variableListModel = await app
.createSessionObject(properties);
variableListModel.getLayout().then(layout => {
createChart(layout);
});
variableListModel.on('changed', async () => {
variableListModel.getLayout().then(newlayout => {
updateChart(newlayout);
});
});

First, we pass the layout to the createChart() method, which is where we build our Scatter plot. If there are any changes to the data, we call the updateChart() method and pass the new layout so our chart can reflect the changes.

Step 4: Build the visualization using Picasso.js. We need to let Picasso know that the data type we will be using is from QIX, i.e. 'q', and then pass the layout like below:

function createChart(layout) {
chart = picasso.chart({
element: document.querySelector('.object_new'),
data: [{
type: 'q',
key: 'qHyperCube',
data: layout.qHyperCube,
}],
});

Similar to D3, we will now define the two scales and bind the data (dimension and measure) extracted from Qlik Sense like this:

scales: {
s: {
data: { field: 'qMeasureInfo/0' },
expand: 0.2,
invert: true,
},
m: {
data: { field: 'qMeasureInfo/1' },
expand: 0.2,
},
col: {
data: { extract: { field: 'qDimensionInfo/0' } },
type: 'color',
},
},

Here, the scale s represents the y-axis and m represents the x-axis. In our case, we have height on the y-axis and weight on the x-axis. The dimension 'Sport' will be used for coloring, as mentioned before. Now, since we are developing a scatter plot, we define a point component inside the components section to render the points:

key: 'point',
type: 'point',
data: {
extract: {
field: 'qDimensionInfo/0',
props: {
y: { field: 'qMeasureInfo/0' },
x: { field: 'qMeasureInfo/1' },
},
},
},

We also pass the settings of the chart inside the point component, like this:

settings: {
x: { scale: 'm' },
y: { scale: 's' },
shape: 'rect',
size: 0.2,
strokeWidth: 2,
stroke: '#fff',
opacity: 0.8,
fill: { scale: 'col' },
},

Please note that I have used the shape 'rect' instead of 'circle' in this visualization, as I would like to represent each point as a rectangle. This is just an example of the simple customization you can achieve using Picasso. Finally, we define the updateChart() method to take care of the updated layout from Qlik. To do so, we use the update() function provided by Picasso:

function updateChart(newlayout) {
chart.update({
data: [{
type: 'q',
key: 'qHyperCube',
data: newlayout.qHyperCube,
}],
});
}

The result is seen below.

Step 5: Interaction with Qlik objects. Our last step is to see if the interactions work as we would expect with a native Qlik Sense object. To clearly depict this scenario, I use Nebula.js (a library to embed Qlik objects) to call and render a predefined bar chart from my Qlik Sense environment. If you would like to read more on how to do that, please refer to this. Here's a sample code snippet:

n.render({
element: document.querySelector(".object"),
id: "GMjDu"
})

And the output is seen below: a bar chart that shows the country-wise total medals won at the Olympics. So, now in our application we have a predefined Qlik Sense bar chart and a customized scatter plot made using Picasso.js. Let's see their interactivity in action. The complete code for this project can be found on my GitHub. This brings us to the end of this tutorial. If you would like to play around, here are a few collections of Glitches for Picasso. You can also refer to these sets of awesome examples on Observable.
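As an aside, when debugging a chart like this it helps to know the raw shape the q plugin consumes: the layout's hypercube delivers its rows as qDataPages[0].qMatrix, where each cell carries qText and qNum. A minimal sketch of flattening that into plain point objects (the field order Sport, Avg(Height), Avg(Weight) is assumed to match the hypercube defined earlier; matrixToPoints is my own helper name):

```javascript
// Turn a qMatrix (rows of cells with qText/qNum) into plain point objects.
// Row shape assumed: [dimension, measure, measure], matching the hypercube
// above - adjust the indices if your cube differs.
function matrixToPoints(qMatrix) {
  return qMatrix.map((row) => ({
    sport: row[0].qText,   // dimension value as text
    height: row[1].qNum,   // first measure as a number
    weight: row[2].qNum,   // second measure as a number
  }));
}
```

You rarely need this when using picasso-plugin-q, but logging the flattened rows is a quick sanity check that the cube returns what you expect.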
In this article, we will explore creating a slope chart extension using Nebula.js and the Picasso.js charting library. We will walk through the process, explain the different Picasso components that make up the chart, and introduce a few concepts along the way, including tooltips and brushing. Documentation for both libraries can be found here:
https://qlik.dev/libraries-and-tools/nebulajs
https://qlik.dev/libraries-and-tools/picassojs

The slope chart we're about to create was featured in the 2021 Fortune 500 app. It visually explains how sectors have been impacted by the COVID-19 pandemic by ranking the sectors of the Fortune 500 list and showing their increase or decrease between 2020 and 2021.

Connecting to the Qlik Sense app: First things first, let's connect to our QS app using Enigma.js. We create a QIX session using enigma.create, then use the session's open function to establish the websocket connection and get access to the Global instance. We use the openDoc method within the global context to make the app ready for interaction.

// qlikApp.js
const enigma = require('enigma.js');
const schema = require('enigma.js/schemas/12.170.2.json');
const SenseUtilities = require('enigma.js/sense-utilities');
const config = {
host: '<HOST URL>',
appId: '<APP ID>',
};
const url = SenseUtilities.buildUrl(config);
const session = enigma.create({ schema, url });
session.on('closed', () => {
console.error('Qlik Sense Session ended!');
const timeoutMessage = 'Due to inactivity, the story has been paused. Refresh to continue.';
alert(timeoutMessage);
});
export default session.open().then((global) => global.openDoc(config.appId));

Configuring Nebula: Now that we have successfully connected to the QS app, let's move on to configuring Nebula.js. In this step, we use the embed method to initiate a new Embed instance using the enigma app. We then register the chart extension named "slope" (the actual creation of this extension is covered further down).

// nebula.js
import { embed } from '@nebula.js/stardust';
import qlikAppPromise from 'config/qlikApp';
import slope from './fortune-slope-sn';
export default new Promise((resolve) => {
(async () => {
const qlikApp = await qlikAppPromise;
const nebula = embed(qlikApp, {
types: [{
name: 'slope',
load: () => Promise.resolve(slope),
}],
});
resolve(nebula);
})();
});

Rendering the chart: In our Slope React component, we proceed to render the visualization into the DOM on the fly. We use the render method and pass configuration options that include a reference to the HTML element, the type (we named it "slope" in the previous step), and the array of fields. In this case, we use two dimensions (Year, Sector) and two measures (set analysis expressions that return the ranking by sector profits as well as the actual profit numbers for the two years we're interested in).

import React, { useRef, useEffect } from 'react';
import useNebula from 'hooks/useNebula';
const Slope = () => {
const elementRef = useRef();
const chartRef = useRef();
const nebula = useNebula();
useEffect(async () => {
if (!nebula) return;
chartRef.current = await nebula.render({
element: elementRef.current,
type: 'slope',
fields: [
'[Issue Published Year]',
'[Sector2]',
'=Rank(Sum({$<[Issue Published Year]={2020, 2021}>} [Inflation Adjusted Sector Profit]))',
'=Sum({$<[Issue Published Year]={2020, 2021}>} [Inflation Adjusted Sector Profit])',
],
});
}, [nebula]);
return (
<div>
<div id="slopeViz" ref={elementRef} style={{ height: 600, width: 800 }} />
</div>
);
};
export default Slope;

The slope chart extension: This is where the magic happens! Let's explore the different sections of the file and go through them (the full project can be found at the end of the article). In the following code snippet, we make use of the q plugin, which makes it easier to extract data from a QIX hypercube (or, alternatively, a list object). Notice the values of the initial fetch and the min and max properties of the dimensions and measures; these should match the number of fields we previously set in our Slope React component.

export default function supernova() {
const picasso = picassojs();
picasso.use(picassoQ);
return {
qae: {
properties: {
qHyperCubeDef: {
qDimensions: [],
qMeasures: [],
qInitialDataFetch: [{ qWidth: 4, qHeight: 2500 }],
qSuppressZero: false,
qSuppressMissing: true,
},
showTitles: true,
title: '',
subtitle: '',
footnote: '',
},
data: {
targets: [
{
path: '/qHyperCubeDef',
dimensions: {
min: 1,
max: 2,
},
measures: {
min: 1,
max: 2,
},
},
],
},
},
...

Scales: Our x scale is related to the year field, the color scale represents our second dimension (sectors), and lastly the y and yend scales use custom "ticks" values because we would like to show labels in the format "rank # - sector". Both the yaxisVals and yaxisendVals arrays are constructed by manipulating the data extracted from the layout object (see lines 76 to 92 of the slope-sn.js file). You can learn more about scales and the different types of scales that the Picasso library offers here.

scales: {
x: {
data: {
extract: {
field: 'qDimensionInfo/0',
},
},
paddingInner: 0.8,
paddingOuter: 0,
},
color: {
data: {
extract: {
field: 'qDimensionInfo/1',
},
},
range: ['#5D627E'],
type: 'color',
},
y: {
data: {
field: 'qMeasureInfo/0',
},
invert: false,
expand: 0.03,
type: 'linear',
ticks: { values: yaxisVals },
},
yend: {
data: {
field: 'qMeasureInfo/0',
},
invert: false,
expand: 0.03,
type: 'linear',
ticks: { values: yaxisendVals },
},
},
...

Components: The components that make up the chart are:
- Type 'axis' - notice that we have two y-axes that use the two different scales covered above.
- Type 'line' - notice that we're using the series prop, which represents sectors.
- Type 'point' - this represents the circles at the edges of the slope lines; notice that we're extracting some additional data here, since we're going to use it in the tooltip component.
- Type 'tooltip' - there are three aspects to rendering tooltips:
  1. Interaction, to bind events to the chart. We use 'mousemove' and 'mouseleave' to show or hide the tooltip.
  2. Extracting the relevant data from the hovered node; in this case we filter for nodes with key 'point', then manipulate this data to return an object containing the values we will display.
  3. Generating content, using the 'content' setting to format the information from the object we previously constructed and to generate virtual nodes using the HyperScript API.

components: [
{
type: 'axis',
key: 'x-axis',
scale: 'x',
dock: 'bottom',
settings: {
labels: {
show: true,
fontSize: '10px',
mode: 'horizontal',
},
},
},
{
type: 'axis',
key: 'y-axis',
scale: 'y',
settings: {
labels: {
show: true,
mode: 'layered',
fontSize: '10px',
filterOverlapping: false,
},
},
layout: {
show: true,
dock: 'left',
minimumLayoutMode: 'S',
},
},
{
type: 'axis',
key: 'y-axis-end',
scale: 'yend',
settings: {
labels: {
show: true,
mode: 'layered',
fontSize: '10px',
filterOverlapping: false,
},
},
layout: {
show: true,
dock: 'right',
},
},
{
type: 'line',
key: 'lines',
data: {
extract: {
field: 'qDimensionInfo/0',
props: {
y: {
field: 'qMeasureInfo/0',
},
series: {
field: 'qDimensionInfo/1',
},
},
},
},
settings: {
coordinates: {
major: {
scale: 'x',
},
minor: {
scale: 'y',
ref: 'y',
},
minor0: {
scale: 'y',
},
layerId: {
ref: 'series',
},
},
orientation: 'horizontal',
layers: {
sort: (a, b) => a.id - b.id,
curve: 'monotone',
line: {
stroke: {
scale: 'color',
ref: 'series',
},
strokeWidth: 2,
opacity: 0.8,
},
},
},
brush: {
consume: [{
context: 'increase',
style: {
active: {
stroke: '#53A4B1',
opacity: 1,
},
inactive: {
stroke: '#BEBEBE',
opacity: 0.45,
},
},
},
{
context: 'decrease',
style: {
active: {
stroke: '#A7374E',
opacity: 1,
},
inactive: {
stroke: '#BEBEBE',
opacity: 0.45,
},
},
}],
},
},
{
type: 'point',
key: 'point',
displayOrder: 1,
data: {
extract: {
field: 'qDimensionInfo/0',
props: {
x: {
field: 'qDimensionInfo/0',
},
y: {
field: 'qMeasureInfo/0',
},
ind: {
field: 'qDimensionInfo/1',
},
rank: {
field: 'qMeasureInfo/0',
},
rev: {
field: 'qMeasureInfo/1',
},
},
},
},
settings: {
x: { scale: 'x' },
y: { scale: 'y' },
shape: 'circle',
size: 0.2,
strokeWidth: 2,
stroke: '#5D627E',
fill: '#5D627E',
opacity: 0.8,
},
brush: {
consume: [{
context: 'increase',
style: {
active: {
fill: '#53A4B1',
stroke: '#53A4B1',
opacity: 1,
},
inactive: {
fill: '#BEBEBE',
stroke: '#BEBEBE',
opacity: 0.45,
},
},
},
{
context: 'decrease',
style: {
active: {
fill: '#A7374E',
stroke: '#A7374E',
opacity: 1,
},
inactive: {
fill: '#BEBEBE',
stroke: '#BEBEBE',
opacity: 0.45,
},
},
}],
},
},
{
key: 'tooltip',
type: 'tooltip',
displayOrder: 10,
settings: {
// Target point marker
filter: (nodes) => nodes.filter((node) => node.key === 'point' && node.type === 'circle'),
// Extract data
extract: ({ node, resources }) => {
const obj = {};
obj.year = node.data.x.label;
obj.industry = node.data.ind.label;
obj.rank = node.data.rank.value;
obj.rankchange = rankChange[obj.industry];
obj.profitsChange = profitsChange[obj.industry];
obj.profits = resources.formatter({ type: 'd3-number', format: '.3s' })(node.data.rev.value);
return obj;
},
// Generate tooltip content
content: ({ h, data }) => {
const els = [];
let elarrow = null;
let rankCh = '';
data.forEach((node) => {
// Title
const elh = h('td', {
colspan: '3',
style: { fontWeight: 'bold', 'text-align': 'left', padding: '0 5px' },
}, `${node.year} ${node.industry}`);
const el1 = h('td', { style: { padding: '0 5px' } }, 'Rank');
const el2 = h('td', { style: { padding: '0 5px' } }, `#${node.rank}`);
// Rank Change
if (node.rankchange > 0 && node.year !== '2020') {
rankCh = `+${node.rankchange}`;
elarrow = h('div', {
style: {
width: '0px', height: '0px', 'border-left': '5px solid transparent', 'border-right': '5px solid transparent', 'border-bottom': '5px solid #008000',
},
}, '');
} else if (node.rankchange < 0 && node.year !== '2020') {
rankCh = node.rankchange;
elarrow = h('div', {
style: {
width: '0px', height: '0px', 'border-left': '5px solid transparent', 'border-right': '5px solid transparent', 'border-top': '5px solid #FF0000',
},
}, '');
} else {
rankCh = '';
elarrow = '';
}
// Rest of Info
const el3 = h('td', {
style: {
display: 'flex',
alignItems: 'center',
},
}, [rankCh, elarrow]);
const elr1 = h('tr', {}, [el1, el2, el3]);
const elr2 = h('tr', {}, [h('td', { style: { padding: '0 5px' } }, 'Profits:'), h('td', { style: { padding: '0 5px' } }, node.profits.replace(/G/, 'B')), h('td', {}, (node.year !== '2020') ? `${numeral(node.profitsChange).format('+0a').toUpperCase()}` : '')]);
els.push(h('tr', {}, [elh]), elr1, elr2);
});
return h('table', {}, els);
},
placement: {
type: 'pointer',
area: 'target',
dock: 'auto',
},
},
},
],
interactions: [
{
type: 'native',
events: {
mousemove(e) {
this.chart.component('tooltip').emit('show', e);
},
mouseleave() {
this.chart.component('tooltip').emit('hide');
},
},
},
],

Brushing: In the code above, you will notice 'brush' settings on both the 'line' and 'point' components. We observe changes on a particular brush context (in this case we have two contexts: one named 'increase' to highlight rising lines and one named 'decrease' to highlight lines representing sectors that have fallen in the ranks). The active and inactive properties contain the styles applied to the component when it is brushed.

In our scenario, we want to programmatically control these brushes from our Slope React component through a toggle button. Let's modify the Slope.jsx file to reflect that. Notice that we access the 'increase' and 'decrease' brushes through the global window object containing the Picasso chart instance (we assign this on line 456 of slope-sn.js). We then use a combination of the start, clear, end, and addValues methods to react to our toggleBrush state changes when one of the buttons is clicked.

import React, { useRef, useEffect, useState } from 'react';
import useNebula from 'hooks/useNebula';
import Button from '@material-ui/core/Button';
const Slope = () => {
const elementRef = useRef();
const chartRef = useRef();
const nebula = useNebula();
const [toggleBrush, setToggleBrush] = useState(false);
const increaseValues = [11, 16, 17, 5, 8, 12];
const decreaseValues = [4, 6, 19];
useEffect(async () => {
if (!nebula) return;
chartRef.current = await nebula.render({
element: elementRef.current,
type: 'slope',
fields: [
'[Issue Published Year]',
'[Sector2]',
'=Rank(Sum({$<[Issue Published Year]={2020, 2021}>} [Inflation Adjusted Sector Profit]))',
'=Sum({$<[Issue Published Year]={2020, 2021}>} [Inflation Adjusted Sector Profit])',
],
});
}, [nebula]);
useEffect(() => {
if (!nebula || !window.slopeInstance) return;
const highlighterIncrease = window.slopeInstance.brush('increase');
const highlighterDecrease = window.slopeInstance.brush('decrease');
highlighterIncrease.start();
highlighterIncrease.clear();
highlighterDecrease.start();
highlighterDecrease.clear();
if (toggleBrush) {
highlighterIncrease.addValues(increaseValues.map((val) => ({ key: 'qHyperCube/qDimensionInfo/1', value: val })));
} else {
highlighterDecrease.addValues(decreaseValues.map((val) => ({ key: 'qHyperCube/qDimensionInfo/1', value: val })));
}
}, [toggleBrush]);
const handleClearBrushes = () => {
if (!nebula || !window.slopeInstance) return;
const highlighterIncrease = window.slopeInstance.brush('increase');
const highlighterDecrease = window.slopeInstance.brush('decrease');
highlighterIncrease.clear();
highlighterIncrease.end();
highlighterDecrease.clear();
highlighterDecrease.end();
};
return (
<div>
<div id="slopeViz" ref={elementRef} style={{ height: 600, width: 800 }} />
<Button onClick={() => setToggleBrush(!toggleBrush)}>{toggleBrush ? 'Highlight Decrease' : 'Highlight Increase'}</Button>
<Button onClick={() => handleClearBrushes()}>Clear Brushes</Button>
</div>
);
};
export default Slope;

You can check out the full project code on GitHub. Don't forget to take a look at this year's Fortune 500 and Global 500 apps, which feature this chart as well as other custom ones, all made possible with Nebula.js and Picasso.js!
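The addValues calls above pass one { key, value } object per dimension value to highlight, where key is the field path inside the hypercube and value is the element number. Since the payload shape is easy to get wrong, here it is as a tiny helper in isolation (toBrushValues is my own illustrative name, not part of Picasso; the default field path is the one this project happens to use):

```javascript
// Build the payload a Picasso brush's addValues() expects: one
// { key, value } object per dimension value, where `key` is the field
// path inside the hypercube and `value` is the element number to brush.
function toBrushValues(elemNumbers, fieldPath = 'qHyperCube/qDimensionInfo/1') {
  return elemNumbers.map((value) => ({ key: fieldPath, value }));
}
```

In the component above, highlighterIncrease.addValues(toBrushValues(increaseValues)) would be equivalent to the inline map.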
When developing a Qlik Sense app, there will come a time when you will want to define your own colors for certain values in a visualization. Did you know that Qlik Sense offers many ways to achieve your desired outcome?
Welcome to the 2nd part of developing a Visual Text Analytics app using Qlik's open-sourced solutions and a word embedding technique (Word2Vec). In our previous tutorial, we designed a simple architecture (seen below) for the application that we will learn to develop today. Now, let us try to understand the need for each of these components and their role in our app.

Front-end: This is the UI of the app that helps the user interact and derive insights.
Back-end: Consists of two sub-components:
- Client-side - this is where we have Qlik's visualization libraries, Nebula.js and Picasso.js.
- Server-side - where we develop the APIs.

CLIENT-SIDE: So, why do we use two charting libraries from Qlik? Let's break it down. When I develop a full-stack solution, one of the things I look into is how to build things quickly and efficiently. Since I have a lot of other components to develop or work with, I want to make sure I don't end up devoting a significant amount of time to building things from scratch. Nebula.js helps me in this case. It allows me to quickly embed a chart that has already been developed in a Qlik Sense app and use it my way. All I have to do is render it in my Visual Analytics app with something like this:

nuked.render({
element: document.querySelector(".object"),
id: "XHRqzeG"
})

The second visualization library that I leverage here is Picasso.js. Picasso enables me to build custom, interactive, component-based visualizations. One of the things I was looking for with this specific solution was to process textual data; specifically, to do the word embeddings and return the result of the embedding to a chart that helps me present the data visually (note that we are developing a Visual Analytics app). This is where Picasso.js fits in. It works in a similar way to D3.js and allows me to work with a 2D matrix or an array of objects. I can also use the data as I want in the various components of the chart, which makes it very flexible. Here's a snippet of how I used my transformed data in a Bar chart:

.then(response => response.json())
.then(data => {
var js_data = [data];
picasso.chart({
element: document.querySelector(".container"),
data: [{
type: "matrix",
data: data
}]
})
});

Great! So, the gist is:
Nebula.js - embed already developed Qlik Sense charts (quick and easy); it also allows for selections and other Qlik-specific features.
Picasso.js - develop a customized chart (use the data as we would like to build the various chart components).

SERVER-SIDE:
The major chunk of our backend is the server-side component, where we develop our APIs. We use the Express.js framework here, which helps us manage routes, requests, etc.

What specific APIs do we have in this app?
/wordembed : This API performs word embedding using Word2Vec. In this case, we take advantage of the NPM package (https://www.npmjs.com/package/word2vec), which provides a Node.js interface to Google's Word2Vec implementation. We send the results of the embedding to a Picasso Bar chart.
/data : Reads the data processed by Python's implementation of Principal Component Analysis (PCA) and sends it back to a Picasso Scatter plot to visualize the principal components.

Alright! So, we have everything that we need component-wise. Now, let's quickly understand two things and their role in this solution:
Word embedding - Word2Vec
Principal Component Analysis (PCA)

This is where the Machine Learning part comes into play, and it is key to developing a Visual Text Analytics app like this one. Since this tutorial is not focused on the implementation of word embeddings/Word2Vec but rather touches upon it from an application perspective, we will not delve into the details. Simply put, word embedding captures the essence of a word, i.e. its meaning, context, and semantic relationships, and converts it into a numerical representation (a vector). For example, the word 'sativa' can be represented by something like this: sativa -0.441052 -0.247968 0.463302 0.086262. Please note that the vectors are generally very high-dimensional (in our case, 300 dimensions).

So, how do we get these vectors? To derive them, we use the word2vec function like below:

const w2v = require("word2vec");
w2v.word2vec("cleared_word_embedding.txt", "vectors.txt",
{ size: 300 }, () => {
console.log("generated");
}
);

These vector representations can then be applied to some interesting use cases. One of the key tasks we perform with the vectors in this project is calculating similarities between words (commonly computed using cosine similarity). So, in the front-end, we allow users to input any word of their choice, and they are visually presented with a chart representing the most similar words, something like this:

This is extremely beneficial for performing text-based analysis. For example, if a user searches for the word 'citrus', our Visual Analytics app will present something like this:

Here we can see that the context of the word is maintained by the word embedding model, and the user gets back the top 5 most similar words (in descending order), which are flavors again. If the user then wants to continue their analysis with any other flavor, they can start with the relevant and similar ones. Our API looks like below:

app.post("/wordembed", (req, res)=>{
var val = req.body.hi
const w2v = require("word2vec");
w2v.loadModel("vectors.txt", (error, model) => {
var sim = model.mostSimilar(val, 5)
res.send(sim);
});
})

The second part is the Principal Component Analysis (PCA). PCA is a technique used to reduce the dimensionality of a high-dimensional dataset (such as text or images). As high-dimensional data is very difficult to analyze and visualize, an ideal choice is to reduce the dimensions while preserving as much information as possible.

Right, but why do we use it in this project? I wanted to allow users to visualize the words in our vocabulary in 2 dimensions so they can explore the similarities between them effectively. The best way was to present this information in a Scatter plot. For this specific project, I used sklearn's Python implementation of PCA and imported the coordinates in my /data API like below:

app.get("/data",(req, res)=>{
const fs = require('fs');
const path = require('path');
const csv = require('fast-csv');
const data = []
fs.createReadStream(path.resolve(__dirname, '../pca_words.csv'))
.pipe(csv.parse({ headers: true }))
.on('error', (error) => console.error(error))
.on('data', (row) =>
data.push(row)
)
.on('end', () => {
res.send(data);
})
})

Here is the visualization for the PCA projection. As you can see, words such as 'depression', 'appetite', and 'relief' are in close proximity since they are similar. That makes sense logically as well, since these are a few of the things that can be treated using the strains.

Here is the application in action:

This brings us to the end of this tutorial on developing a Visual Text Analytics app using Qlik's open-sourced solutions and Machine Learning techniques such as word embeddings. Want to get started building such an app? Here is a Glitch for developers to remix.

PS: You will not be able to see the visualizations when you open the Glitch, for authentication reasons. This code is expected to serve as a boilerplate for developing a visual text analytics app using Qlik OSS.
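As an aside, the cosine similarity that underpins mostSimilar boils down to a few lines of arithmetic: the dot product of two vectors divided by the product of their magnitudes. Here is a plain-JavaScript sketch of the measure itself (an illustration of the math, not the word2vec package's internal code):

```javascript
// Cosine similarity between two equal-length vectors:
// dot(a, b) / (|a| * |b|). Returns a value in [-1, 1];
// 1 means the vectors point in the same direction.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

cosineSimilarity([1, 0], [2, 0]); // 1 (same direction)
cosineSimilarity([1, 0], [0, 3]); // 0 (orthogonal)
```

Ranking every word in the vocabulary by this score against the query word, and keeping the top 5, is conceptually what the /wordembed endpoint returns.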
Qlik full-stack? APIs? Machine Learning? If these topics sound interesting to you, this series (Parts 1 & 2) might be your starting point. Today, we are going to talk about one particular area within Visual Analytics, i.e., Visual Text Analytics. This tutorial will focus on the nitty-gritty of this area of research, and in my next post, I will do a step-by-step tutorial on how you can actually develop the application.

VISUAL TEXT ANALYTICS:
With the surge in the generation of digital text on the web in the form of product reviews, descriptions, feedback, etc., there has been a demand for leveraging text mining techniques to understand and analyze these unstructured data. Typically, organizations would like to be able to identify patterns, impactful keywords, similarities, etc. through text mining. However, uncovering hidden patterns in large, noisy text corpora can be hugely challenging and at times daunting for analysts. To mitigate this challenge, this research area aims to bring text mining, text visualization, and human-computer interaction together to make sense of the data.

SOLUTION:
In the past, I have built a couple of Visual Text Analytics applications using a technology stack of D3.js, Plotly/Dash, Python Flask (for the APIs), etc., and thought it might be interesting to try developing an app using Qlik Sense's open-sourced solutions. Primarily, for this blog, we will be looking at two of Qlik's frameworks - Nebula.js and Picasso.js. If you are not aware of them, here is a quick gist:

So, what will we be building? My idea is to build an exploratory visual analytics app to discover insights from a cannabis dataset. This will be a full-stack application to analyze components such as 'Effects', 'Flavors', 'Type of cannabis strains', and 'Description'. In this particular dataset, the 'Description' field is textual and contains a particular strain's summary, so this field will be our focus for the textual analytics part.
Below is an example of the 'Description' field:

Strain (A-10): A-10 has an earthy, hashy taste that provides a very heavy body stone. Frequently used to treat insomnia and chronic pain.

To start developing the application, I have designed a high-level architecture to portray the various components involved in building the app. Hopefully, this will give a better picture of our next steps. We will look at each of these components in detail in our next tutorial and see them in action as we finish developing the app.
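To give a taste of the preprocessing such a 'Description' field typically needs before any text mining, here is a minimal cleanup and tokenization step in plain JavaScript (a simplified sketch with a made-up stop-word list; the actual pipeline feeds cleaned text to Word2Vec, as covered in the next part):

```javascript
// Lowercase a description, strip everything except letters and whitespace,
// split it into tokens, and drop a few common stop words.
const stopWords = new Set(['a', 'an', 'the', 'that', 'and', 'to', 'has', 'very']);

function tokenize(description) {
  return description
    .toLowerCase()
    .replace(/[^a-z\s]/g, ' ') // digits and punctuation become spaces
    .split(/\s+/)
    .filter(token => token && !stopWords.has(token));
}

tokenize('A-10 has an earthy, hashy taste that provides a very heavy body stone.');
// ['earthy', 'hashy', 'taste', 'provides', 'heavy', 'body', 'stone']
```

Content words like 'earthy' and 'insomnia' are exactly what a word embedding model needs to learn useful flavor and effect relationships, which is why this kind of cleanup matters.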