As a follow-up to my previous blog post titled Finance Report with Waterfall Chart, I wanted to share an awesome demo that showcases financial reporting visualizations, including a profit & loss statement with a waterfall chart. Qlik's Dennis Jaskowiak and Ekaterina Kovalenko, and partner Dawid Marciniak from HighCoordination, created the Financial Analysis demo based on Jedox data, incorporating many enhancements to the straight table and pivot table. In addition to the waterfall chart, they use inline SVG to create lollipop charts and bar charts in financial statements.
Here is a look at some of the sheets:
The Dashboard provides a high-level overview of profit and loss, cash flow and liquidity. On this sheet, you can see the use of inline SVG bar charts and lollipop charts.
The P&L sheet provides a more detailed look at profit and loss.
The Cost Center sheet uses the pivot table to show sales costs.
Find a detailed look at cash flow on the Cash Flow sheet.
These are just some of the sheets you will find in the Financial Analysis demo. If you are looking for appealing ways to visualize your financial data while keeping it concise and clean, download the Financial Analysis demo here.
Thanks,
Jennell
In this blog post, I will review some data flow processors that can be used to prepare your data in a data flow. Let’s start by quickly reviewing what a data flow is. In Qlik Cloud Analytics, a data flow is a no-code experience that lets you visually prepare your data with drag-and-drop capabilities. It is intuitive, easy to use, and does not require the user to have scripting experience. Data flow processors, along with sources and targets, are used to build a data flow. Each processor handles a specific data transformation task. Here you will find a full list of the data flow processors available.
This blog will touch on a few processors to familiarize you with how they work and how easy they are to use. To begin, a data flow must first be created. There is more than one way to do this. From the Qlik Cloud Analytics catalog, click on the + Create new button and select Data flow, or navigate to Prepare data from the menu and click on the add Data flow button at the top of the page.
Once you name the new data flow, navigate to the Editor.
On the left, there are sources, processors and targets. The source is the data input, the processors are the data transformation types, and the targets are data outputs. Before we can look at the processors, we need to select our input data from the data catalog or a connection. Once that is in place, we can begin to explore the processor options. There are several data flow processors – too many to review in this blog – but I will review three of them: the Filter processor, the Join processor and the Unpivot processor.
Filter Processor
The filter processor filters data based on a condition. A processor can be added to the data flow canvas by dragging and dropping the processor onto the canvas or by clicking on the menu in the data source and selecting Add processor.
If you drag and drop the processor onto the canvas, you will need to connect the dots between the input and processor. If you add it from your data source menu, the dots will automatically be connected for you.
Each processor has a properties panel where the processor can be configured. In this example, let’s use the filter to select employees who live in the United States. To do this, first select the field to process – Country. There is an option to apply a function, but one is not needed in this example. The operator will be equal, and the Value will be United States. Once the properties are entered, click the Apply button to save.
At the bottom of the page, I can preview the script (matching and not matching records) for the filter processor I just applied and see a preview of the data.
From the filter processor menu, there are a few options for my next step as seen below.
Add matching target will add a target to the data flow for the records that match the Country = United States filter. Add non-matching target will add a target to the data flow for the records that do not match the Country = United States filter. Matching and non-matching processors can also be added. For this example, I will add a matching target and in the properties panel, I will select the space, the extension (.qvd, .parquet, .txt or .csv) and the name of the target file. Like the sources, the target can be a data file or connection. Once I click Apply in the properties panel, I will see a message at the top right indicating that my flow is valid and ready to be run. Running the data flow will grab my Employees dataset, filter the data by country and store the results in a QVD named US Employees.
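For readers who do know scripting, the script preview mentioned earlier shows that the generated logic is roughly equivalent to the following load-script sketch. The file paths and table names here are illustrative assumptions, not the demo's actual ones:

```
// Illustrative sketch only - paths and names are assumptions
USEmployees:
LOAD *
FROM [lib://DataFiles/Employees.qvd] (qvd)
WHERE Country = 'United States';

STORE USEmployees INTO [lib://DataFiles/US Employees.qvd] (qvd);
```

The data flow builds and runs this kind of logic for you, which is exactly why no scripting experience is required.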
I now have a data file that has been transformed and prepared for use.
Join Processor
Now, let’s look at how we can join two data inputs into one data output. To do this, two data inputs are required. In the example below, ARSummary and ARSummary-1 are the two data inputs.
In the properties panel of the join processor, the join type is selected and the fields that should be used to link the two tables are selected. You can learn more about joins here. Once the target is added, the data flow can be run, and the result will be a single table with the records from the ARSummary table and the associated records from the ARSummary-1 table.
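In scripting terms, what the Join processor produces is comparable to this sketch (the join type and the LOAD statements are assumptions for illustration):

```
// Illustrative sketch - join type and fields are assumed
Result:
LOAD * FROM [lib://DataFiles/ARSummary.qvd] (qvd);

LEFT JOIN (Result)
LOAD * FROM [lib://DataFiles/ARSummary-1.qvd] (qvd);
```

In script, Qlik joins on the field names the two tables share, which is why the properties panel asks you to select the fields that link the tables.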
Unpivot Processor
If you are familiar with scripting, the unpivot processor is like a crosstable load. It allows you to rearrange a table so that column data becomes row data. It can transform a table like this:
To this:
Here is an example data flow with the unpivot processor:
In the properties panel of the unpivot processor, there are only a few settings to update. The first is the unpivot fields. Here is where the fields that we want to unpivot are selected. In this example, we want the years to be stored as row-level data, so we select them all.
The Attribute field name is the name we want to give to the unpivoted fields – in this case Year. The Value field name is the name of the data that is associated with the fields we are unpivoting – in this example, Sales.
After applying the changes and running the data flow, we will have a table transformed based on our specifications without any code.
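Since the unpivot processor is like a crosstable load, here is what the equivalent script would look like (a sketch with assumed field names and source, since the example table is shown as an image):

```
// CROSSTABLE(attribute field name, value field name, number of qualifier columns)
Sales:
CROSSTABLE (Year, Sales, 1)
LOAD Product, [2021], [2022], [2023]
FROM [lib://DataFiles/SalesWide.xlsx] (ooxml, embedded labels);
```

The processor's Attribute field name and Value field name settings map directly to the first two CROSSTABLE arguments.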
In this blog post, we touched upon three of the many processors that can be used in a data flow. Note that a data flow can have many sources, processors and targets – it all depends on your needs. The visual interface of a data flow makes it easy to prepare your data without any code in an appealing design that is easy to follow. Try it out!
Thanks,
Jennell
When building dashboards that involve any kind of leaderboard or competitive comparison, it’s always good to show how rankings shift over time. Not just "who's #1 right now," but the full story: who climbed, who dropped, when the shift happened.
Qlik Sense doesn't have a native bump chart. The usual workaround is a line chart with pre-calculated ranks, but I wanted to push this further and add some more interactivity and options.
The idea for creating this extension came after doing some updates on the Formula1 web app which uses a native D3.js chart, so I wanted to transfer that code over to a Qlik Sense extension that can be reused in other apps.
The extension makes it easy for you. You just add two dimensions and one measure, and it takes care of everything else: ranking, time sorting, labels, hover highlighting, and field selections. The object properties will let you customize it to your needs.
In this post I'll walk through what it does, how it's structured, and how you can use it for various use cases. GitHub repo: https://github.com/olim-dev/qlik-bump-chart
What is a Bump Chart?
A bump chart shows how entities change position relative to each other over time. Each entity gets a colored line. Rank #1 sits at the top. As rankings shift, lines cross, and you immediately see who's moving up, who's falling, and when an overtake happens.
What can you do with this extension?
Track rankings over any time dimension (quarters, months, weeks, laps)
Highlight top performers, bottom performers, or biggest movers automatically
Toggle between rank mode (auto-calculated) and position mode (raw measure as Y-axis)
Replay the line-drawing animation
Click labels or lines to make Qlik selections
Customize everything: line style, dot size, colors, labels, grid, tooltips
Use Cases
The pattern is always the same: entities, time periods, and a measure.
Sales: rep leaderboard over months
Marketing: brand awareness tracking, campaign effectiveness over quarters
Finance: fund performance rankings
HR: employee performance trends, team productivity rankings
Operations: supplier quality rankings, efficiency metric evolution
Sports: league standings, player rankings across a season, F1 race position over laps
How to set it up
Add two dimensions and one measure:
Dimension 1: the entity to rank (Product, Sales Rep, Team, Driver)
Dimension 2: the time period (Quarter, Month, Week, Lap)
Measure: the value that determines rank (Sum(Sales), Avg(Score), Max(Position))
I have attached a Text file with some sample datasets that you can inline-load in a new app to test it.
Quick tour of the object properties
The property panel has clear labels for which dimension goes on which axis. Here are the sections that matter most:
Chart Settings: Maximum Entities (2-30, sweet spot is 8-15), Rank Direction (highest or lowest value = #1), Line Style (smooth, straight, step).
Position Mode: Instead of auto-calculating ranks, use the raw measure value as the Y position. Great for race data where position is already known.
Labels: Position them on the right, left, or both sides. Rank change indicators (triangle arrows) show how many spots an entity moved. Clicking a label triggers a Qlik selection.
Highlight: Automatically highlight top N, bottom N, or biggest movers with a configurable glow color.
Colors: Six built-in palettes (Vibrant, Qlik Classic, Category 10, Extended 20, Cool, Warm) plus a custom background color.
Animation: Toggle transitions on/off, adjust duration (100-1500ms), and a "Replay Animation" button to re-run the line-drawing effect.
Selections: Toggle selections on/off
The extension folder structure
qlik-bump-chart/
  qlik-bump-chart.qext             Extension definitions
  qlik-bump-chart.js               Main entry (Qlik lifecycle, paint, selections)
  qlik-bump-chart-properties.js    Property panel definition
  qlik-bump-chart-renderer.js      D3 rendering engine
  qlik-bump-chart-data.js          Data processing, ranking, time sorting
  qlik-bump-chart-constants.js     Defaults, palettes, timing values
  qlik-bump-chart-style.css        All CSS
  lib/
    d3.v7.min.js                   D3.js v7
The main JS handles the Qlik lifecycle.
The data module processes the hypercube, sorts time periods, and calculates ranks.
The renderer draws everything with D3.
The properties file defines the right-panel UI.
The constants file holds all defaults, color palettes, and timing values in one place.
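To give a feel for what the data module does, per-period ranking can be sketched like this. This is an illustrative sketch, not the extension's actual code, and the row shape { entity, period, value } is an assumption:

```javascript
// Group rows by time period, sort each group by measure value
// (descending), and assign rank 1..n within the period.
// Input rows are assumed to look like { entity, period, value }.
function computeRanks(rows) {
  const byPeriod = new Map();
  for (const row of rows) {
    if (!byPeriod.has(row.period)) byPeriod.set(row.period, []);
    byPeriod.get(row.period).push(row);
  }
  const ranked = [];
  for (const group of byPeriod.values()) {
    // Highest value gets rank #1; flip the comparator for
    // the "lowest value = #1" Rank Direction setting.
    group.sort((a, b) => b.value - a.value);
    group.forEach((row, i) => ranked.push({ ...row, rank: i + 1 }));
  }
  return ranked;
}
```

Position mode skips this step entirely and plots the raw measure on the Y-axis instead.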
Tips
Limit entities to 8-15 for readability. More than that and lines become hard to follow.
Use Highlight Mode to draw attention to the story (top performers, biggest movers).
Position Mode is great for pre-calculated position data like F1 race results.
Pair with a table or KPI object nearby for exact values at a glance.
Let me know in the comments if you have any questions or feedback!
Thanks for reading.
Ouadie
The write table was introduced to Qlik Cloud Analytics last month, so in this blog post, I will review how it works and how it can be added to an app. The write table looks like the straight table, but editable columns can be added to it to update or add data. The updated/added data is visible to other users of the app, provided they have the correct permissions. Read more on write table permissions here. Also note that if you are using a touch-screen device, you will have to disable touch-screen mode for the write table to work. Looking at the write table for the first time, I found it intuitive and easy to use. Let’s create a write table with some editable columns to see how easy it is.
The write table object can be added to a sheet like any other visualization. Once it is added, columns can be added the same way dimensions and measures are added to a straight table. Below is a small write table with course information including the course ID, course name, instructor and location.
To add an editable column from the properties panel, click on the plus sign (+) and select Editable column.
The new editable column will be added. In the properties for the column, the title for the column can be modified and from the show content drop down, manual user input or single selection can be selected. Manual user input will create a free form column that the user can type into. The single selection option will allow me to create a drop-down list of options that the user can choose from.
I will change the title to Course Level and for show content I will select single selection and add three list items by typing the list item and then clicking on the plus sign to add it to the list. The list items will be displayed in the drop-down in the order they are added but can be rearranged by hovering over the list-item and dragging it to the desired position. List-items can also be deleted by hovering over it and clicking the delete icon that appears to the left.
When you come out of edit mode, the message below will appear for the editable column prompting you to define a set of primary keys.
Once you click Define, you will see the pop-up below where you can select the column(s) that will be used for the unique primary key. This is necessary to save and map the data entered in the editable column to the data model. I will select the CourseID column as the primary key.
Once this is done, I will see the Course Level column with the drop-down of list-items I added.
Let’s add one more editable column that takes manual user input and name it Notes.
As I add data or update the editable columns, the cells will be flagged orange to indicate that my edits have not been saved. Once I save the table, they will be flagged green and any new values entered are visible to other users. A cell will be blue if another user is currently making changes to the row, thus locking it. Changes are saved for 90 days in a change store (temporary storage location) provided by Qlik. After 90 days, the data will be deleted. It is also important to note that if an editable column is deleted, the data will be lost. This is also the case if the primary key used for the editable column is removed.
It is possible to retrieve the changes from a change store via the change-stores API or an automation. Using the REST connection and the change-store API, the changes made in a write table can be retrieved and stored in a QVD (if needed for more than 90 days) or added to the data model for use in other analytics. Qlik Automate can also be used to retrieve data from the change-store using the List Current Changes From Change Store block or the List Change Store History block. From there the data can be stored permanently in an external system for later use or used in the automation for another process. Qlik Help offers steps for retrieving data from a change-store.
The write table can make it easy for users to add updates, feedback and important information that may not be available in the data model. Not only can this be done quickly, but it can be immediately visible to other colleagues. Learn more about the write table in the Product Innovation blog along with links to videos and write table FAQs.
Thanks,
Jennell
If you have been building custom web applications or mashups with Qlik Cloud, you have likely hit the "10K cells ceiling" when using Hypercubes to fetch data from Qlik. (Read my previous posts about Hypercubes here and here.)
You build a data-driven component, it works perfectly with low-volume test data, and then you connect it to production – suddenly, your list of 50,000+ customers cuts off halfway, or your export results look incomplete.
This happens because the Qlik Engine imposes a strict limit on data retrieval: a maximum of 10,000 cells per request. If you fetch 4 columns, you only get 2,500 rows (4 (columns) x 2500 = 10,000 (max cells)).
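The cell math is simple enough to put in a helper; this hypothetical function is not part of @qlik/api, just a sketch of the arithmetic above:

```javascript
// The engine caps each data request at 10,000 cells
// (qWidth * qHeight), so the maximum page height follows
// directly from the number of columns you fetch.
function maxRowsPerRequest(columns, cellLimit = 10000) {
  return Math.floor(cellLimit / columns);
}

console.log(maxRowsPerRequest(4));  // 2500
console.log(maxRowsPerRequest(20)); // 500
```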
In this post, I’ll show you how to master high-volume data retrieval using two strategies, Bulk Ingest and On-Demand Paging, with the @qlik/api library.
What is the 10k Limit and Why Does It Matter?
The Qlik Associative Engine is built for speed and can handle billions of rows in memory. However, transferring that much data to a web browser in one go would be inefficient. To protect both the server and the client-side experience, Qlik forces you to retrieve data in chunks.
Understanding how to manage these chunks is the difference between an app that lags and one that delivers a good user experience.
Step 1: Defining the Data Volume
To see these strategies in action, we need a "heavy" dataset. Copy this script into your Qlik Sense Data Load Editor to generate 250,000 rows of transactions (or download the QVF attached to this post):
// ============================================================
// DATASET GENERATOR: 250,000 rows (~1,000,000 cells)
// ============================================================
Transactions:
Load
RecNo() as TransactionID,
'Customer ' & Ceil(Rand() * 20000) as Customer,
Pick(Ceil(Rand() * 5),
'Corporate',
'Consumer',
'Small Business',
'Home Office',
'Enterprise'
) as Segment,
Money(Rand() * 1000, '$#,##0.00') as Sales,
Date(Today() - Rand() * 365) as [Transaction Date]
AutoGenerate 250000;
Step 2: Choosing Your Strategy
There are two primary ways to handle this volume in a web app. The choice depends entirely on your specific use case.
1- Bulk Ingest (The High-Performance Pattern)
In this pattern, you fetch the entire dataset into the application's local memory in iterative chunks upon loading.
The Goal: Provide a "zero-latency" experience once the data is loaded.
Best For: Use cases where users need to perform instant client-side searches, complex local sorting, or full-dataset CSV exports without waiting for the Engine.
2- On-Demand (The "Virtual" Pattern)
In this pattern, you only fetch the specific slice of data the user is currently looking at.
The Goal: Provide a near-instant initial load time, regardless of whether the dataset has 10,000 or 10,000,000 rows as you only load a specific chunk of those rows at a time.
Best For: Massive datasets where the "cost" of loading everything into memory is too high, or when users only need to browse a few pages at a time.
Step 3: Implementing the Logic
While I'm using React and custom react hooks for the example I'm providing, these core Qlik concepts translate to any JavaScript framework (Vue, Angular, or Vanilla JS). The secret lies in how you interact with the HyperCube.
The Iterative Logic (Bulk Ingest):
The key is to use a loop that updates your local data buffer as chunks arrive.
To prevent the browser from freezing during this heavy network activity, we use setTimeout to allow the UI to paint the progress bar.
qModel = await app.createSessionObject({ qInfo: { qType: 'bulk' }, ...properties });
const layout = await qModel.getLayout();
const totalRows = layout.qHyperCube.qSize.qcy;
const pageSize = properties.qHyperCubeDef.qInitialDataFetch[0].qHeight;
const width = properties.qHyperCubeDef.qInitialDataFetch[0].qWidth;
const totalPages = Math.ceil(totalRows / pageSize);
let accumulator = [];
for (let i = 0; i < totalPages; i++) {
if (!mountedRef.current || stopRequestedRef.current) break;
const pages = await qModel.getHyperCubeData('/qHyperCubeDef', [{
qTop: i * pageSize,
qLeft: 0,
qWidth: width,
qHeight: pageSize
}]);
accumulator = accumulator.concat(pages[0].qMatrix);
// Update state incrementally
setData([...accumulator]);
setProgress(Math.round(((i + 1) / totalPages) * 100));
// Yield thread to prevent UI locking
await new Promise(r => setTimeout(r, 1));
}
The Slicing Logic (On-Demand)
In this mode, the application logic simply calculates the qTop coordinate based on the user's current page index and makes a single request for that specific window of data (rowsPerPage).
const width = properties.qHyperCubeDef.qInitialDataFetch[0].qWidth;
const qTop = (page - 1) * rowsPerPage;
const pages = await qModelRef.current.getHyperCubeData('/qHyperCubeDef', [{
qTop,
qLeft: 0,
qWidth: width,
qHeight: rowsPerPage
}]);
if (mountedRef.current) {
setData(pages[0].qMatrix);
}
I placed these two methods in custom hooks (useQlikBulkIngest & useQlikOnDemand) so they can be easily re-used in different components as well as other apps.
Best Practices
Regardless of which pattern you choose, always follow these three Qlik Engine best practices:
Engine Hygiene (Cleanup): Always call app.destroySessionObject(qModel.id) when your component or view unmounts.
Cell Math: Always make sure your qWidth x qHeight is strictly < 10,000. For instance, if you have a wide table (20 columns), your max height is only 500 rows per chunk.
UI Performance: Even if you use the "Bulk" method and have 250,000 rows in JavaScript memory, do not render them all to the DOM at once. Use UI-level pagination or virtual scrolling to keep the browser responsive.
Choosing between Bulk and On-Demand is a trade-off between Initial Load Time and Interactive Speed. By mastering iterative fetching with the @qlik/api library, you can ensure your web apps remain robust, no matter how much data is coming in from Qlik.
Attached is the QVF and here is the GitHub repository containing the full example in React so you can try it out locally - instructions are provided in the README file.
(P.S.: Make sure you create the OAuth client in your tenant and fill in the qlik-config.js file in the project with your tenant-specific config.)
Thank you for reading!
As we enter the last month of the year, let’s review some recent enhancements in Qlik Cloud Analytics visualizations and apps. On a continuous cycle, features are being added to improve usability, development and appearance. Let’s look at a few of them.
Straight Table
Let’s begin with the straight table. Now, when you create a straight table in an app, you will have access to column header actions, enabled by default. Users can quickly sort any field by clicking on the column header. The sort order (ascending or descending) will be indicated by the arrow. Users can also perform a search in a column by clicking the magnifying glass.
When the magnifying glass icon is clicked, the search menu is displayed as seen below.
If a cyclic dimension is being used in the straight table, users can cycle through the dimensions using the cyclic icon that is now visible in the column heading (see below).
When you have an existing straight table in an app, these new features will not be visible by default but can easily be enabled in the properties panel by going to Presentation > Accessibility and unchecking Increase accessibility.
Bar Chart
The bar chart now has a new feature that allows the developer to set a custom bar width when in continuous mode. Just last week, I put a bar chart in continuous mode and the bars became very thin, as seen below.
But now, there is this period drop down that allows developers to indicate the unit of the data values.
If I select Auto to automatically detect the period, the chart looks so much better.
Combo Chart
In a combo chart, a line can now be styled using area versus just a line, as displayed below.
Sheet Thumbnails
One of the coolest enhancements is the ability to auto-generate sheet thumbnails. What a time saver. From the sheet properties, simply click on the Generate thumbnail icon and the thumbnail will be automatically created. No more creating the sheet thumbnails manually by taking screenshots and uploading them and assigning them to the appropriate sheet. If you would like to use another image, that option is still available in the sheet properties.
From this To this in one click
Try out these new enhancements to make development and analysis faster and more efficient.
Thanks,
Jennell
Regex has been one of the most requested features in Qlik Sense for years, and now it’s finally here.
With this year's May 2025 release, Qlik added native support for regular expressions in both load scripts and chart expressions. That means you can validate formats, extract values, clean up messy text, and more, all without complex string logic or external preprocessing.
In this post, we’ll look at what’s new, how it compares to the old workarounds, and a practical example you can plug into your own app.
The New Regex Functions
Regex (short for Regular Expression) is a compact way to define text patterns. If you’ve used it in Python, JavaScript, or other programming languages, the concept will feel familiar.
Qlik now includes native support for regular expressions with functions that work in load scripts and chart expressions:
MatchRegEx() – check if a value matches a pattern
ExtractRegEx() – extract the first substring that matches
ReplaceRegEx() – search and replace based on a pattern
SubFieldRegEx() – split text using regex as the delimiter
There are also group-based versions (ExtractRegExGroup, etc.), case-insensitive variants (MatchRegExI), and helpers like CountRegEx() and IsRegEx().
🔗 Help Article
Replacing Old Patterns
Here's where regex saves time:
Format validation:Replace nested Len(), Left(), and Mid() with a single pattern.
Substring extraction:Skip the manual slicing; let the pattern do the work.
Pattern-based replacements:Clean or reformat values without chaining multiple functions.
With regex:
If(MatchRegEx(Code, '^[A-Z]{2}-\d{5}$'), 'Valid', 'Invalid') // check format
ExtractRegEx(Text, '\d{5}') // get first 5-digit number
ReplaceRegEx(Field, '\D', '') // strip non-digits
Cleaner logic. Fewer steps. Easier to maintain.
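Since the pattern syntax is standard, you can sanity-check a pattern outside Qlik before using it. For instance, the same three patterns behave the same way in JavaScript's regex engine (this exercises the patterns themselves, not Qlik's functions):

```javascript
// Format check: two uppercase letters, a dash, five digits
const codePattern = /^[A-Z]{2}-\d{5}$/;
console.log(codePattern.test('AB-12345')); // true
console.log(codePattern.test('AB-123'));   // false

// Extract the first 5-digit run from a string
console.log('Order 98765 shipped'.match(/\d{5}/)[0]); // "98765"

// Strip everything that is not a digit
console.log('(312) 678-4412'.replace(/\D/g, '')); // "3126784412"
```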
Use Cases That Just Got Easier
If any of the following sound familiar, regex will help:
Format checks: postal codes, product SKUs, ID numbers.
Data extraction: get domain from email, number from notes, etc.
PII masking: hide parts of a SSN or credit card.
String cleanup: strip unwanted characters, normalize spacing.
Splitting tricky fields: CSV lines with quoted commas, mixed delimiters.
Keep in mind that these functions can be used directly in chart expressions, so you can build visuals or filters based on pattern logic, not just static values.
Example: Clean and Validate Phone Numbers
Let’s say you’ve got a bunch of phone numbers like this:
(312) 678-4412
312-678-4412
3126784412
123-678-4412 // invalid: area code starts with 1
312-045-4412 // invalid: exchange starts with 0
312-678-441 // invalid: too short
You want to:
Validate that it’s a proper 10-digit North American number
Standardize the format to (###) ###-####
Here’s how to do it with regex in your load script:
LOAD
RawPhone,
// 1. Strip out anything that's not a digit
ReplaceRegEx(RawPhone, '\D', '') as DigitsOnly,
// 2. Validate: 10 digits exactly, starting with 2–9
If(MatchRegEx(RawPhone, '^\(?[2-9]\d{2}\)?[-.\s]?\d{3}[-.\s]?\d{4}$'),
'Valid', 'Invalid') as Status,
// 3. Standardize format to (###) ###-####
ReplaceRegEx(
ReplaceRegEx(RawPhone, '\D', ''),
'(\d{3})(\d{3})(\d{4})',
'(\1) \2-\3'
) as FormattedPhone
INLINE [
RawPhone
3025557890
(404) 222-8800
678.333.1010
213 888 9999
1035559999
678-00-0000
55577
];
Result:
One pattern replaces multiple conditions and formatting is consistent. This is much easier to maintain and easy to expand if the rules change.
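If you want to test the three steps before wiring them into a load script, the same logic translates directly to JavaScript (an illustration of the patterns, not Qlik code; note that JavaScript uses $1 in replacements where the script above uses \1):

```javascript
// Strip, validate, and reformat a North American phone number
// using the same patterns as the load script above.
function cleanPhone(raw) {
  // 1. Strip out anything that's not a digit
  const digits = raw.replace(/\D/g, '');
  // 2. Validate: 10 digits, area code and exchange starting with 2-9
  const valid = /^\(?[2-9]\d{2}\)?[-.\s]?[2-9]\d{2}[-.\s]?\d{4}$/.test(raw);
  // 3. Standardize format to (###) ###-####
  const formatted = digits.replace(/^(\d{3})(\d{3})(\d{4})$/, '($1) $2-$3');
  return { digits, valid, formatted };
}

console.log(cleanPhone('(404) 222-8800'));
// → { digits: '4042228800', valid: true, formatted: '(404) 222-8800' }
```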
Final Thoughts
Use regex where it adds value.For simple cases like Left() or Trim(), stick with built-in string functions.
When you're working with inconsistent inputs, embedded formats, or anything that doesn’t follow clean rules, regex can save a lot of time.
If you're applying regex across large datasets, especially in charts, it’s better to handle it in the load script where possible.
Not sure how to write the pattern?
Tools like regex101.com or regexr.com are great for testing and adjusting before you build in Qlik.
With native regex in Qlik Sense, you can now clean, validate, extract, and transform text with precision without convoluted scripts or third-party tools. It’s a quiet but powerful upgrade that unlocks a ton of flexibility for real-world data.