Invest in less expensive hardware and eliminate the redundancy of a multi-layered Lambda architecture by replaying data instead of maintaining two code bases (batch and speed layers), while continuously processing unique events in real time and meeting standard quality of service.
The Kappa architecture addresses the redundant part of the Lambda architecture. It is designed around the idea of replaying data, which avoids maintaining two different code bases for the batch and speed layers. The key idea is to handle real-time data processing and continuous data reprocessing with a single stream processing engine, avoiding a multi-layered Lambda architecture while still meeting the standard quality of service. The Kappa architecture can run on less expensive hardware to process unique events occurring continuously in real time.
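The replay idea can be illustrated with a minimal sketch in plain JavaScript (illustrative only, not tied to any particular streaming engine): a single processing function serves both the live path and reprocessing, so there is no second batch code base to maintain.

```javascript
// One processing function for both paths: fold an event into a keyed view.
const processEvent = (state, event) => ({
  ...state,
  [event.key]: (state[event.key] || 0) + event.value,
});

// Live path: apply each event to the materialized view as it arrives.
const applyLive = (state, event) => processEvent(state, event);

// Reprocessing path: replay the retained event log through the *same* function,
// instead of maintaining a separate batch code base.
const replay = (log) => log.reduce(processEvent, {});
```

Because both paths share `processEvent`, fixing a bug or changing the logic means redeploying one function and replaying the log, which is the core of the Kappa approach.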
The Lambda architecture is used to reliably update the data lake as well as to efficiently train machine learning models to predict upcoming events accurately. The architecture comprises a Batch Layer, a Speed Layer (also known as the Stream Layer), and a Serving Layer. The batch layer operates on the complete data set and thus allows the system to produce the most accurate results; however, those results come at the cost of high latency due to long computation time. The speed layer generates results with low latency, in near real time, computing real-time views that complement the batch views. The serving layer answers queries against the results sent from the batch and speed layers.
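The three layers can be sketched in a few lines of JavaScript (an illustrative sketch, not any vendor's API): the batch view is recomputed over the full history, the speed view folds in only recent events, and the serving layer merges the two at query time.

```javascript
// Batch layer: recompute an accurate view over the complete historical data.
const batchView = (history) =>
  history.reduce((view, e) => ({ ...view, [e.key]: (view[e.key] || 0) + e.value }), {});

// Speed layer: maintain a low-latency view over only the recent events
// (the same fold, but over a much smaller input).
const speedView = (recentEvents) => batchView(recentEvents);

// Serving layer: answer a query by merging the batch and real-time views.
const serveQuery = (batch, speed, key) => (batch[key] || 0) + (speed[key] || 0);
```

This also shows the redundancy the Kappa architecture removes: the batch and speed layers here duplicate the same processing logic over two data paths.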
Unify both data lake and data warehouse automation in one user interface to plan and execute either with ease.
The separation of storage and compute allows for each to be scaled up or down independently, blurring the lines between traditional data warehouses and data lakes. The separation also enables companies to architect a multi-modal lakehouse platform, which provides a single source of truth for all analytic initiatives – AI, BI, machine learning, streaming analytics, data science, and more. Qlik Compose facilitates both data lake and data warehouse automation in one unified user interface, enabling you to plan and execute either project with ease.
Realize faster return on data lake investments while confidently meeting growing demands for analytics-ready data sets in real time.
Qlik Data Integration (QDI) for Data Lake Creation helps enterprises realize a faster return on their data lake investment by continuously providing accurate, timely, and trusted transactional data sets for business analytics. Unlike other solutions, QDI for Data Lakes automates the entire data pipeline, from real-time data ingestion to the creation and provisioning of analytics-ready data sets, eliminating the need for manual scripting. Data engineers can now meet growing demands for analytics-ready data sets in real time with confidence.
Meet or exceed the demands for analytics-ready data marts that enable data-driven insights at the speed of change.
Qlik Data Integration (QDI) delivers on the promise of agile data warehousing with automation that allows users to quickly design, build, deploy, manage and catalog purpose-built data warehouses (especially cloud-based) faster than traditional solutions. Consequently, data engineers can meet or exceed the demands for analytics-ready data marts that enable data-driven insights at the speed of change.
Today I am going to blog about inner and outer set expressions. If you have ever used set analysis in your measure expressions, then you will like this new capability. Set analysis is a very powerful feature, often used to define a scope that may differ from the scope defined by making selections in an app.

For example, consider a set expression that sums sales where the product line is Camping Equipment. This is an inner set expression, and probably familiar to those who use set analysis: the set expression sits inside the aggregation function, which is Sum in this case. Written as an outer set expression, the set expression moves outside of the aggregation function; when using an outer set expression, it must come before the scope it applies to. Both the inner and outer forms return the same result.

Where the outer set expression is helpful is when you have more than one aggregation function in your expression. Suppose an expression contains three Sum aggregations, and in each one set analysis is used to set the scope to Camping Equipment. Using an outer set expression, the set expression can be written once, outside of the aggregations and at the beginning of the scope. Written this way, [Product Line]={'Camping Equipment'} is applied to all the aggregation functions. This is a cleaner way to write the expression and ensures that the set is applied to every aggregation.

The outer set expression can also be used with a master measure. Assume I have master measures named Sales and Margin %; an outer set expression can be placed in front of either one. A set expression written this way applies to the entire expression. If the set expression is in brackets, then it applies only to the aggregation functions within the brackets.
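The original post showed its expressions as screenshots; the following is a reconstruction of the inner and outer forms using the Sales measure and Product Line field mentioned above (the Cost field is hypothetical, added only to show multiple aggregations):

```qlik
// Inner set expression: the set sits inside the aggregation
Sum({<[Product Line]={'Camping Equipment'}>} Sales)

// Outer set expression: the same set placed before the aggregation
{<[Product Line]={'Camping Equipment'}>} Sum(Sales)

// With several aggregations, the outer form applies the set to all of them
// (Cost is an illustrative field name)
{<[Product Line]={'Camping Equipment'}>} Sum(Sales) - Sum(Cost)
```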
For example, if the set expression is in parentheses, it applies only to the aggregation functions within the parentheses and not to an aggregation function that sits outside of them. Written this way, the resulting value will differ from the same expression without any brackets or parentheses.

A few things to remember about set expressions: inner set expressions have precedence over outer set expressions, and if the inner set expression has a set identifier, it replaces the outer context; otherwise, the inner set expression is merged with the outer set expression. Check out Qlik Help for more examples and rules around inner and outer set expressions, and try it for yourself in your next app.

Thanks,
Jennell
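As a sketch of the bracketing rule described above (the Cost field is an illustrative name):

```qlik
// The set applies only inside the parentheses: the first Sum(Sales)
// still uses the current selections.
Sum(Sales) + ({<[Product Line]={'Camping Equipment'}>} Sum(Sales) - Sum(Cost))
```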
In Part 1 of this blog post, we went through Generic Objects, learned about the definitions of the ListObject and Hypercube structures, and explored some of the settings they offer for interacting with data when communicating with the Qlik Associative Engine through Enigma.js. In this second part, we will see actual implementations of ListObjects and Hypercubes and learn how they can be used in your next web application to create filters and charts.

Creating Filters with ListObjects

First, let's create a filter that corresponds to a single field in our data model, which we can use to make selections and filter the data. The ListObject structure is best suited to this case since it contains one dimension: it lists all the values in a single field and provides metadata about the current state of each field value (selected, excluded, or possible).

To create a ListObject, we add a dynamic property for it in a generic object with the appropriate JSON definition under the "qListObjectDef" property. The engine knows how to parse this definition to produce a ListObject. In our case, we define a list object for our "Region" field by setting the dimension definition to the field name via the "qDef/qFieldDefs" property. All that is left is to fetch the data, which we do by defining the "qInitialDataFetch" property to grab the initial data set. In our case, we have one column and we know that fewer than 10 rows will be pulled, so we set "qWidth" to 1 and "qHeight" to 10.

```json
{
  "qInfo": {
    "qType": "filter"
  },
  "qListObjectDef": {
    "qDef": {
      "qFieldDefs": ["Region"]
    },
    "qInitialDataFetch": [
      {
        "qLeft": 0,
        "qWidth": 1,
        "qTop": 0,
        "qHeight": 10
      }
    ]
  }
}
```

After connecting to Enigma and getting our app object, we create a session object and pass it the ListObject definition above. A session object is a generic object that is only active for the current session and is not persisted in the model.

```javascript
const regionObj = await enigmaApp.createSessionObject(regionListDef);
const regionLayout = await regionObj.getLayout();
renderFilter(regionListElem, regionLayout, regionObj);
```

After getting the ListObject layout, we call the function below, which retrieves the data we want to display in our filter via "layout.qListObject.qDataPages[0].qMatrix". The matrix is an array of arrays, each corresponding to one row of data. Each cell object we get by looping through the qMatrix includes the following properties:

- qText: a text representation of the cell value
- qNum: a numeric representation of the cell value
- qElemNumber: the rank number of the cell value
- qState: the selection state of the field value

We use both qText and qState in our front end: the first to display the value name, the second to add a CSS class that allows us to differentiate between states:

- S for selected
- X for excluded
- O for possible

We also listen for click events on the list and call `genericObject.selectListObjectValues("/qListObjectDef", [e[0].qElemNumber], true)`, which is a Generic Object method. It takes the path to where our ListObject is defined in the Generic Object as the first parameter and the element number we want to select as the second parameter. The third argument is the toggle mode (whether a selection is added to an already existing set of selections or overrides it).

```javascript
const renderFilter = (element, layout, genericObject) => {
  var titleDiv = element.querySelector(".filter-title");
  var ul = element.querySelector("ul");
  ul.innerHTML = "";
  // Get the data from the List Object
  var data = layout.qListObject.qDataPages[0].qMatrix;
  // Loop through the data and create the filter list
  data.forEach(function(e) {
    var li = document.createElement("li");
    li.innerHTML = e[0].qText;
    li.setAttribute("class", e[0].qState);
    // Click function to select
    li.addEventListener("click", function(evt) {
      genericObject.selectListObjectValues("/qListObjectDef", [e[0].qElemNumber], true);
    });
    ul.appendChild(li);
  });
};
```

Creating Charts with HyperCubes

When creating visualizations, we make use of Hypercubes, which allow us to define a combination of dimensions and measures to get a calculated data set. Let's create a pie chart that shows the sum of revenue by region. The Generic Object definition for this includes one dimension and one measure, defined via the "qHyperCubeDef" property. We then define the initial data fetch; in this case we need two columns (one for the region and one for the calculated revenue), and we don't expect more than 1000 rows, so we set "qWidth" to 2 and "qHeight" to 1000.

```json
{
  "qInfo": {
    "qType": "chart"
  },
  "qHyperCubeDef": {
    "qDimensions": [
      {
        "qDef": {
          "qFieldDefs": ["Region"],
          "qSortCriterias": [
            {
              "qSortByNumeric": 1
            }
          ]
        },
        "qNullSuppression": true
      }
    ],
    "qMeasures": [
      {
        "qDef": {
          "qDef": "=Sum([Sales Quantity]*[Sales Price])"
        }
      }
    ],
    "qInitialDataFetch": [
      {
        "qLeft": 0,
        "qWidth": 2,
        "qTop": 0,
        "qHeight": 1000
      }
    ]
  }
}
```

Similar to what we did with the ListObject, we create a Generic Object (session object) and get its layout. Next we call the renderChart function to create the pie chart visualization.

```javascript
const chartObj = await enigmaApp.createSessionObject(chartDef);
const chartLayout = await chartObj.getLayout();
renderChart(chartLayout);
```

Our function is simple: we start by accessing the qMatrix array, which contains all of our rows, each of which is a group of cells. We refine this array using map to grab a pair of values per row: the region (via the qText property of the first cell) and the revenue (via the qNum property of the second cell). You can then render the chart using your visualization tool of choice; in this case, we use C3.js.

```javascript
const renderChart = (layout) => {
  var qMatrix = layout.qHyperCube.qDataPages[0].qMatrix;
  // Map through qMatrix to format it as an array of arrays:
  // [[region1, revenue1], [region2, revenue2], ...]
  const columnsArray = qMatrix.map((arr) => [arr[0].qText, arr[1].qNum]);
  c3.generate({
    bindto: "#chart", // note: C3's option is lowercase "bindto"
    data: {
      columns: columnsArray,
      type: "donut"
    },
    donut: {
      title: "Revenue by Region"
    }
  });
};
```

I hope this post helps you further understand the notion of Generic Objects in the form of ListObjects and HyperCubes. Let me know how you are leveraging these concepts to build your custom solutions! The full code can be found on my GitHub repo.
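The definitions above fetch only an initial page of data. When a hypercube can exceed that page, further pages can be pulled with the Engine's getHyperCubeData method. The helper below is a sketch of that loop: fetchPage stands in for a call like `chartObj.getHyperCubeData("/qHyperCubeDef", [page])` (getHyperCubeData is an Engine API method; the helper itself is illustrative).

```javascript
// Sketch: collect every row of a hypercube by paging through it.
// In a real enigma.js app, fetchPage could be:
//   const fetchPage = async (page) =>
//     (await chartObj.getHyperCubeData("/qHyperCubeDef", [page]))[0].qMatrix;
const fetchAllRows = async (fetchPage, totalRows, width, pageHeight = 1000) => {
  const rows = [];
  for (let top = 0; top < totalRows; top += pageHeight) {
    const page = {
      qLeft: 0,
      qWidth: width,
      qTop: top,
      qHeight: Math.min(pageHeight, totalRows - top),
    };
    rows.push(...(await fetchPage(page)));
  }
  return rows;
};
```

In a real app, totalRows would come from the layout (layout.qHyperCube.qSize.qcy).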
Set analysis is one of the more powerful tools you can use in Qlik Sense and QlikView. Its syntax is sometimes perceived as complicated, but once you learn it, you can achieve fantastic things. There is now an additional way of writing set expressions that may simplify your code.