Hello Qlik Users,
As announced previously (see Qlik Automate execution token changes), execution tokens will become header parameters on February 1st, 2026.
When triggering a triggered automation through the trigger URL (see the endpoint below), the execution token must be sent as a header parameter. Currently, it is possible to send the execution token as a query parameter. Starting February 1st, 2026, sending execution tokens as header parameters will be enforced.
api/v1/automations/{id}/actions/execute
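A minimal sketch of the change, assuming the `X-Execution-Token` header name and placeholder tenant values (verify the exact contract against the linked announcement):

```python
import urllib.request

# Placeholders: replace with your tenant URL, automation ID, and token.
TENANT = "https://your-tenant.us.qlikcloud.com"
AUTOMATION_ID = "your-automation-id"
EXECUTION_TOKEN = "your-execution-token"

# After February 1st, 2026 the execution token must travel as a header,
# not appended to the trigger URL as a query parameter.
req = urllib.request.Request(
    f"{TENANT}/api/v1/automations/{AUTOMATION_ID}/actions/execute",
    method="POST",
    headers={"X-Execution-Token": EXECUTION_TOKEN},
)
# urllib.request.urlopen(req)  # uncomment to actually trigger the automation
```

The only change most integrations need is moving the token from the query string into the request headers; the endpoint path and method stay the same.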
Don't hesitate to reach out with any questions, or address our experts directly in the Qlik Automate forum.
Thank you for choosing Qlik,
Qlik Support
Updated March 4th, 2026: added a link to the article How to get Talend Management Console task schedules and pause and resume during a maintenance window using the API
Updated April 24th, 2026: added impact on APIs (all down) and additional clarification on why tasks must be stopped and the impact on remote engines
Updated May 7th, 2026: added additional information on how to address Remote Engine impact
Updated May 12th, 2026: the anticipated impact for the remaining maintenance window has increased from 30 minutes to 90 minutes
Talend Cloud and Talend Management Console will undergo scheduled maintenance in March, April, and May. This infrastructure modernization is a key step in unifying the Talend ecosystem with Qlik.
The alignment paves the way for a more seamless experience across both platforms. Over the coming months, you will gain access to integrated features that bridge data integration and analytics, enabling unified governance and a streamlined management experience across your entire data lifecycle.
The maintenance windows will occur per region, during off-peak hours, and are expected to have a maximum of 90 minutes of effective downtime.
A full outage of Talend Cloud and Talend Management Console for a duration of up to 90 minutes within a preplanned 4-hour window.
The following applications will not be accessible:
All APIs for Talend Cloud will not be available during the outage. APIs impacted:
In detail:
Looking for information on how to identify, pause, and resume your tasks? See How to get Talend Management Console task schedules and pause and resume during a maintenance window using the API.
In some instances, Remote Engines might require a restart if marked as unavailable in the Talend Management Console or if tasks cannot be executed as expected.
If restarting the Remote Engine does not resolve the complication, follow the pairing instructions in Pairing Remote Engines using a dedicated web service to reset the key and re-pair the Remote Engine.
If your Remote Engine Gen2 is unavailable or cannot execute tasks, then:
Each region will undergo maintenance for 4 hours during off-peak hours, with a maximum of 90 minutes of effective downtime.
| Region | Maintenance Start | Maintenance End |
| --- | --- | --- |
| Talend Cloud - AWS - Asia Pacific (Sydney) au.cloud.talend.com | UTC: 25/03/26 - 11:00 | UTC: 25/03/26 - 15:00 |
| Talend Cloud - AWS - Asia Pacific (Tokyo) ap.cloud.talend.com | UTC: 20/04/26 - 13:00 | UTC: 20/04/26 - 17:00 |
| Talend Cloud - AWS - US East (N. Virginia) us.cloud.talend.com | UTC: 27/04/26 - 06:00 | UTC: 27/04/26 - 10:00 |
| Talend Cloud - AWS - Europe (Frankfurt) eu.cloud.talend.com | UTC: 26/05/26 - 19:00 | UTC: 26/05/26 - 23:00 |
To identify which region your tenant belongs to, see Accessing Talend Cloud applications.
To track further updates during the scheduled Qlik Cloud Maintenance, please visit our Qlik Cloud Status page. This blog post will be updated with additional information where necessary.
Thank you for choosing Qlik,
Qlik Support
On Wednesday, June 10, we will hold AI Reality Tour Tokyo 2026.
The event will feature keynotes from Qlik experts, advanced case studies from Qlik users, the latest product information from Qlik's technical teams, and the latest solutions and exhibition booths from Qlik partner companies. It will introduce cutting-edge solutions for realizing, accelerating, and adapting AI, closing the gap between the value AI promises and reality.
Registration closes at 17:00 on Tuesday, June 2. Please register early.
[Event Overview]
Date and time: Wednesday, June 10, 2026, 13:00 - 18:30 (reception opens at 12:00)
Networking party: 18:30 - 19:30
Venue: Ariake Central Tower Hall & Conference
3F/4F, Ariake Central Tower, 3-7-18 Ariake, Koto-ku, Tokyo
Admission: Free
Inquiries: please contact Marketingjp@qlik.com.

Well-known from Power BI, decomposition trees aren't available in Qlik natively. This extension fills that gap — letting users drill down across multiple dimensions in any order, with AI Splits automatically surfacing the highest and lowest impact factors on any measure.

A hands-on feature walkthrough: AI Splits, flex dimension ordering, multiple measures, conditional coloring, negative value handling, three bar scaling modes, zooming, and paging. Everything configured and ready to inspect in edit mode.

Qlik developers and BI consultants looking to add root cause analysis and ad hoc exploration to their Sense apps.

Built with AnyChart's Decomposition Tree extension for Qlik Sense / Qlik Cloud, based on fictional business data.
In April this year, we inaugurated two new Centers of Excellence (CoE) under the Qlik Academic Program in Bangalore, the Silicon Valley of India. The new CoEs mark a new beginning for training and skilling students in Qlik technologies, alongside other activities such as datathons.
Reva University is one of the leading universities in the state of Karnataka and is ranked among the top universities in the region. Its School of Computer Science and Engineering took the lead in this initiative and established the CoE. ICT Academy, a strategic partner of the Qlik Academic Program, established the connection with Reva and ensured that arrangements were made as per the requirements of the CoE.
The second CoE was established at Sai Vidya Institute of Technology (SVIT), a well-known engineering institution from which many students have earned their degrees. The Department of Computer Science Engineering coordinated the establishment of the CoE, and SVIT has planned many initiatives to take this engagement forward.
The earlier CoEs at VJIT Hyderabad, Anurag University Hyderabad, and Kristu Jayanti University Bangalore continue to function successfully. Many students have been trained and qualified through them, and they have hosted various successful events, including datathons.
We hope to establish more CoEs this year and create a physical space for students to get trained under the Qlik Academic Program.
To learn more about the academic program, please visit: qlik.com/academicprogram
It's May — and just like a certain galaxy far, far away, things are heating up. The Qlik Replicate May 2026 Technical Preview has landed, and it's ready for you to put through its paces.
For those of you who live and breathe data replication, this is your moment to get ahead of the curve before general availability. The Technical Preview is available to download now:
[Select Product Category: Qlik Data Integration, Product: Qlik Replicate, Release Number: Technical Preview]
Now let's get into what's new.
What's in the May 2026 Technical Preview?
This release brings a couple of notable additions worth paying attention to:
As always with a Technical Preview, the clue is in the name — this is your opportunity to explore, test, and feed back before the full release. Think of it as the dress rehearsal, not opening night.
Before You Upgrade — A Quick But Important Note
Please take a few minutes to review the known issues before proceeding with any upgrade. No one enjoys a surprise mid-pipeline, even in test environments.
The full documentation and release notes for both Qlik Replicate and Qlik Enterprise Manager are available here:
The docs are your friend here — treat them as such.
Get Involved
Technical Previews are only as good as the people who test them. If you hit something unexpected or spot something worth improving, drop your feedback in the comments below or raise it through the Community. Your input directly shapes what ships.
May the data flow — and may your upgrades go smoothly. Happy testing.
The following two Qlik Talend Administration Center security issues have been identified and subsequently resolved. Patches are already available.
A broken access control issue has been identified in Qlik Talend Administration Center, which allows a user with View permission to modify the Qlik Talend Studio update URL.
Affected Software
See Security fix for Qlik Talend Administration Center URL access control vulnerability (CVE-2026-pending) for details.
A stored cross-site scripting security issue in the Qlik Talend Administration Center has been identified.
Affected Software
See Security fix for Qlik Talend Administration Center cross-site scripting vulnerability (CVE-2026-pending) for details.
Upgrade at your earliest convenience. The following table lists the patch versions addressing the vulnerabilities.
Always update to the latest version. Before you upgrade, check if a more recent release is available.
| Vulnerability | Patch | Release Date |
| --- | --- | --- |
| Qlik Talend Administration Center URL access control vulnerability | QTAC-1471 | November 21, 2025 |
| Qlik Talend Administration Center cross-site scripting vulnerability | QTAC-1883 | January 23, 2026 |
Thank you for choosing Qlik,
Qlik Support
Qlik introduced a change in how automation permissions are handled for the Analytics Admin role.
The change is already live as of the 11th of May, 2026.
Analytics Admins can now claim ownership of another user's automation. After claiming ownership, they can make necessary changes to it and enable the automation. However, they can no longer transfer ownership to another user.
As an Analytics admin, to claim ownership of an automation:
This behavior change only applies to Analytics Admins. Tenant admins can still transfer ownership to any user with the appropriate access rights in the tenant.
If you have any questions, we're happy to assist. Reply to this blog post or take your queries to our Support Chat.
Thank you for choosing Qlik,
Qlik Support

We have been able to corroborate the model's accuracy using real data from the first ULEZ expansion, so we are confident it will predict the second expansion effects effectively.

This app and the ML experiment behind it have served as an internal demonstration of Qlik's machine learning capabilities, making its adoption easier.

Our internal data science team. It serves as a proof of concept for Qlik Predict.

An effective use case of machine learning in its prediction mode.
Most enterprise AI projects don’t fail because the model is wrong. They fail because the data isn’t ready. Data engineering leaders are now being asked to support a new wave of generative and agentic workloads that demand fresher data, broader source coverage, tighter governance, and richer context than traditional BI ever required — and to deliver it without growing the team.
Qlik Talend Cloud Data Integration was built to close that gap. It provides a single, governed pipeline from operational sources to an open lakehouse — and on to the vector indexes, feature stores, and APIs that your AI systems actually consume. Combined with Qlik Open Lakehouse on Apache Iceberg, it turns your AI inputs into reusable AI data products: named, versioned, governed assets that any RAG application or agent can consume off the shelf.
This post walks through the reference architecture, the pipeline that produces those data products, and a worked example that takes raw CRM and product data all the way to a working RAG copilot and an agentic workflow — both running off the same Iceberg foundation.
Why data is the bottleneck for enterprise AI
GenAI and agentic systems are not fundamentally different consumers of data, but they are far more demanding ones. A model is only as accurate, current, and trustworthy as the context it retrieves at inference time. For data engineering leaders, that translates into six hard requirements:
Meeting all six at once with one-off pipelines is what kills enterprise AI velocity. The path forward is consolidation: one governed integration platform feeding one open lakehouse, with the Gold zone publishing reusable AI data products that any model, agent, or analyst can consume. Build once, govern once, serve many.
Qlik Talend Cloud + Iceberg: a reference architecture
The architecture has four layers: sources, integration, an open Iceberg lakehouse with medallion zones, and an AI serving layer. Qlik Talend Cloud handles change data capture, transformation, quality, and catalog metadata across the entire flow. The Gold zone is where curated outputs are published as named AI data products.
Two design choices make this architecture work for AI specifically. First, the integration layer is real-time by default — log-based CDC keeps Bronze and Silver tables current without batch windows. Second, Gold is treated as a publishing surface, not a staging area. Each Gold data product is named, versioned, governed, and discoverable in the catalog. RAG and agents become two interfaces over the same products: built once, governed once, consumed many times.
Figure 1. Reference architecture: Qlik Talend Cloud + open Iceberg lakehouse, serving RAG, agentic, and analytics workloads from the same governed Gold layer.
The pipeline: from raw data to AI use
The pipeline that operates on the architecture above runs in six stages — automated end-to-end, with quality and lineage enforced at every step. Each stage produces a more refined and trusted asset. Bronze preserves raw, append-only CDC for replay and audit. Silver applies data quality rules, deduplication, masking, and Type-2 history. Gold publishes AI data products: a document product (chunk-friendly text + metadata) for RAG, and a state product (curated entity, feature, and policy data) for agents. Both are versioned and registered, so consumers — vector indexers, semantic APIs, BI engines — read the same governed truth.
Figure 2. The six-stage pipeline. Because every stage writes to Iceberg, downstream consumers — vector indexers, semantic APIs, BI engines — read the same governed truth.
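To make the Silver-zone Type-2 history step concrete, here is an illustrative plain-Python sketch (not Qlik code; field names are assumptions) of what Type-2 versioning does: when a tracked attribute changes, the current version is closed out and a new current version is appended, preserving full history.

```python
# Sketch of a Type-2 slowly-changing-dimension update. Each history row
# carries validity bounds and a current-version flag.
def upsert_type2(history, record_id, value, ts):
    for row in history:
        if row["id"] == record_id and row["is_current"]:
            if row["value"] == value:
                return history  # no change; keep the current version open
            row["valid_to"] = ts      # close out the old version
            row["is_current"] = False
    history.append({"id": record_id, "value": value,
                    "valid_from": ts, "valid_to": None, "is_current": True})
    return history

h = []
upsert_type2(h, "cust-1", "Bronze tier", "2026-01-01")
upsert_type2(h, "cust-1", "Gold tier", "2026-03-01")
# h now holds the closed Bronze row and the current Gold row
```

Downstream consumers that need "as of" context (an agent checking what a customer's tier was when a ticket was opened, for example) query the closed versions; everything else reads only the current rows.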
Worked example: from CRM tickets to a customer-support agent
Picture a data engineering team chartered with delivering an AI-powered customer-support assistant. The use case has both a RAG side (deflecting common questions with vetted answers) and an agentic side (the assistant can look up customer status, open tickets, and trigger actions). The raw inputs are typical:
The pipeline at work
Powering RAG
When a customer asks “Why was my last bill higher than usual?”, the copilot retrieves the top-k chunks from the rag_documents data product, filtered by the customer’s product entitlement — with a structured lookup against agent_state for the customer’s current invoice context. Because the underlying data products are continuously refreshed by Qlik Talend Cloud, the copilot cites guidance that reflects the current pricing schedule, not last month’s. Every retrieved chunk carries its lineage, so answers can be traced back to a specific source row in Salesforce or a specific KB article version.
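The retrieval pattern can be sketched in a few lines of plain Python (illustrative only, not a Qlik API; the document and field names follow the `rag_documents` example above): filter by entitlement first, then rank by similarity.

```python
# Entitlement-filtered top-k retrieval over a rag_documents-style product,
# using plain cosine similarity in place of a real vector index.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def retrieve(query_vec, docs, entitlement, k=2):
    # Filter first so customers only ever see guidance for products they own,
    # then rank the remaining chunks by similarity to the query.
    eligible = [d for d in docs if entitlement in d["entitlements"]]
    return sorted(eligible, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)[:k]

docs = [
    {"text": "Billing FAQ", "vec": [1.0, 0.0], "entitlements": {"basic", "pro"}},
    {"text": "Pro pricing schedule", "vec": [0.9, 0.1], "entitlements": {"pro"}},
    {"text": "Onboarding guide", "vec": [0.0, 1.0], "entitlements": {"basic"}},
]
top = retrieve([1.0, 0.0], docs, entitlement="basic", k=2)
```

In production the similarity search runs in the vector index, but the governance point is the same: the entitlement filter is metadata carried by the data product, not logic bolted onto the copilot.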
Powering agentic workflows
For agentic flows, the assistant plans and executes multi-step tasks against the same agent_state product: confirm identity, check entitlement, open a case in Salesforce via a write-back tool, and escalate to a human agent if confidence drops below a threshold defined in policy_rules. Every step is recorded in the audit_log table for explainability. The agent’s tools are backed by exactly the same data products the RAG side uses — which means a behavior change in the data, like a new product or pricing tier, propagates to both surfaces immediately, with no parallel pipelines and no copy-paste schemas. RAG and agents really are two interfaces over one set of products.
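A minimal sketch of that escalation logic (illustrative only; the `policy_rules` shape and step names are assumptions based on the example above):

```python
# An agent step that consults a policy_rules-style confidence threshold
# and records every action, executed or escalated, in an audit log.
POLICY_RULES = {"min_confidence": 0.8}  # assumed shape of the policy product
audit_log = []

def run_step(step_name, confidence):
    if confidence < POLICY_RULES["min_confidence"]:
        audit_log.append({"step": step_name, "action": "escalate_to_human"})
        return "escalated"
    audit_log.append({"step": step_name, "action": "executed"})
    return "done"

run_step("confirm_identity", confidence=0.95)    # above threshold: executed
result = run_step("open_case", confidence=0.55)  # below threshold: escalated
```

Because the threshold lives in a governed data product rather than in the agent's code, tightening or loosening the escalation policy is a data change, not a redeploy.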
From pipeline to production: your next move
The fastest enterprise AI programs aren’t the ones with the cleverest prompts or the largest models. They’re the ones treating AI data products as the unit of delivery. Qlik Talend Cloud and Qlik Open Lakehouse give your team three things at once: real-time movement of broad source data, governed transformation into named and versioned data products, and an open Iceberg foundation that any model, framework, or agent can plug into. Build once, govern once, serve both RAG and agents from the same products.
A 10–15 day starting sprint for data engineering leaders:
Talk to your Qlik team. Ask about the AI-ready data solution templates — pre-built pipeline patterns for the most common GenAI and agentic use cases, including the customer-service pattern walked through above.
Native Qlik Open Lakehouse interoperability for Talend Studio
With the March release, Talend Studio introduces native support for querying Qlik Open Lakehouse datasets through Amazon Athena — available in both Standard Data Integration jobs and Spark-based Big Data workflows.
This means developers can now connect to Qlik Open Lakehouse data, execute SQL queries, and integrate the results downstream in the Talend job without manual JDBC configuration or custom setup.
Connecting Talend Studio to Qlik Open Lakehouse
Talend Studio now connects natively to Qlik Open Lakehouse through Amazon Athena — a SQL query engine that runs directly on top of cloud storage, enabling access to Iceberg-managed data without data movement or duplication. Developers can:
Reliable by Design
Connecting to Qlik Open Lakehouse from Talend Studio is straightforward by design. The integration ships with dedicated Athena configuration and input components, eliminating manual setup. Runtime validation, improved error handling, and secure credential management ensure the connection remains stable and trustworthy in production environments.
How Data is Organized in Qlik Open Lakehouse
In Qlik Open Lakehouse, data is ingested incrementally and accumulated in Apache Iceberg tables. A logical abstraction layer — implemented as Trino views — resolves those changes into a consolidated latest-state representation, which different engines can query without handling change consolidation logic directly.
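The consolidation those views perform can be sketched in plain Python (illustrative only; the change-row shape is an assumption): given append-only change rows, resolve the newest state per primary key.

```python
# Latest-state resolution over append-only change rows: replay changes in
# timestamp order, keeping the newest row per key and dropping deletions.
def latest_state(changes):
    state = {}
    for row in sorted(changes, key=lambda r: r["ts"]):
        if row["op"] == "DELETE":
            state.pop(row["id"], None)
        else:  # INSERT or UPDATE
            state[row["id"]] = {"id": row["id"], "value": row["value"]}
    return state

changes = [
    {"id": 1, "op": "INSERT", "value": "draft", "ts": 1},
    {"id": 1, "op": "UPDATE", "value": "final", "ts": 2},
    {"id": 2, "op": "INSERT", "value": "open",  "ts": 3},
    {"id": 2, "op": "DELETE", "value": None,    "ts": 4},
]
current = latest_state(changes)
```

Because the views encapsulate this logic, every engine that queries them, whether Athena, Trino, or a Talend job, sees the same consolidated latest state without reimplementing the replay.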
This model supports two complementary data patterns:
Both patterns are available across Standard Data Integration and Big Data jobs in Talend Studio, enabling teams to work with Qlik Open Lakehouse data in the way that best suits their use case.
Looking Ahead
This integration enables Talend Studio users to access Qlik Open Lakehouse data without changing their existing workflows — while aligning with modern, open-format architectures that support multiple query engines.
Athena is the first fully supported access path in this model, with a roadmap to extend support to additional engines over time. For organizations moving away from traditional data warehouses or adopting multi-engine strategies, this represents a concrete step toward a more flexible data architecture.
Don't miss our latest Q&A with Qlik! Pull up a chair and chat with our panel of experts to help you get the most out of your Qlik experience.
The write table was introduced to Qlik Cloud Analytics last month, so in this blog post I will review how it works and how it can be added to an app. The write table looks like the straight table, but editable columns can be added to it to update or add data. The updated or added data is visible to other users of the app, provided they have the correct permissions. Read more on write table permissions here. Note that if you are using a touch-screen device, you will have to disable touch-screen mode for the write table to work. Looking at the write table for the first time, I found it intuitive and easy to use. Let's create a write table with some editable columns to see how easy it is.
The write table object can be added to a sheet like any other visualization. Once it is added, columns can be added the same way dimensions and measures are added to a straight table. Below is a small write table with course information including the course ID, course name, instructor and location.
To add an editable column from the properties panel, click on the plus sign (+) and select Editable column.
The new editable column will be added. In the properties for the column, you can modify the title and, from the Show content drop-down, choose either Manual user input or Single selection. Manual user input creates a free-form column that the user can type into. Single selection lets me create a drop-down list of options that the user can choose from.
I will change the title to Course Level, and for Show content I will select Single selection and add three list items by typing each item and then clicking the plus sign to add it to the list. The list items are displayed in the drop-down in the order they are added, but they can be rearranged by hovering over a list item and dragging it to the desired position. A list item can also be deleted by hovering over it and clicking the delete icon that appears to the left.
When you come out of edit mode, the message below will appear for the editable column prompting you to define a set of primary keys.
Once you click Define, you will see the pop-up below where you can select the column(s) that will be used for the unique primary key. This is necessary to save and map the data entered in the editable column to the data model. I will select the CourseID column as the primary key.
Once this is done, I will see the Course Level column with the drop-down of list-items I added.
Let’s add one more editable column that takes manual user input and name it Notes.
As I add data or update the editable columns, the cells will be flagged orange to indicate that my edits have not been saved. Once I save the table, they will be flagged green and any new values entered are visible to other users. A cell will be blue if another user is currently making changes to the row, thus locking it. Changes are saved for 90 days in a change store (temporary storage location) provided by Qlik. After 90 days, the data will be deleted. It is also important to note that if an editable column is deleted, the data will be lost. This is also the case if the primary key used for the editable column is removed.
It is possible to retrieve the changes from a change store via the change-stores API or an automation. Using the REST connection and the change-store API, the changes made in a write table can be retrieved and stored in a QVD (if needed for more than 90 days) or added to the data model for use in other analytics. Qlik Automate can also be used to retrieve data from the change-store using the List Current Changes From Change Store block or the List Change Store History block. From there the data can be stored permanently in an external system for later use or used in the automation for another process. Qlik Help offers steps for retrieving data from a change-store.
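A minimal sketch of the REST call (the endpoint path, tenant URL, and key below are assumptions for illustration; see the Qlik Help steps linked above for the documented change-stores API contract):

```python
import urllib.request

# Placeholders: replace with your tenant URL and API key.
TENANT = "https://your-tenant.us.qlikcloud.com"
API_KEY = "your-api-key"

# Assumed path; check Qlik Help for the exact change-stores endpoint.
req = urllib.request.Request(
    f"{TENANT}/api/v1/change-stores",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
# urllib.request.urlopen(req)  # executes against a live tenant
```

The same authenticated GET pattern works from the Qlik REST connector in a load script, which is how the changes end up stored in a QVD for long-term retention.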
The write table can make it easy for users to add updates, feedback and important information that may not be available in the data model. Not only can this be done quickly, but it can be immediately visible to other colleagues. Learn more about the write table in the Product Innovation blog along with links to videos and write table FAQs.
Thanks,
Jennell
Today I want to introduce you to a gem that you may be missing out on: the Do More with Qlik community forum, led by @Michael_Tarallo . This forum is made up of concise videos covering everything from Qlik capabilities to innovative ways to solve business challenges. It suits users of all levels, from beginners to seasoned Qlik users, across a wide range of topics. Check out this introductory video to learn more and bookmark the forum. You do not want to miss out on this!
Thanks,
Jennell
As artificial intelligence continues to transform industries, universities are increasingly exploring how to prepare students for this shift. Modern analytics is no longer only about looking at what happened in the past. It is about identifying patterns, predicting outcomes, automating processes, and helping people make faster and more informed decisions.
One major change is the growing use of conversational analytics. Instead of manually navigating dashboards and filtering reports, users can increasingly ask questions in natural language and receive contextual insights based on trusted data. This makes analytics more accessible to a wider range of users and helps students engage with data in a more intuitive and interactive way.
Another important development is predictive analytics. Rather than only analyzing historical information, students can now learn how to forecast trends, identify anomalies, and anticipate future outcomes using AI-supported tools and techniques. These skills are becoming increasingly valuable across industries such as finance, healthcare, marketing, operations, manufacturing, and supply chain management.
At the same time, the rise of AI is also highlighting the importance of trusted and governed data. AI systems are only as effective as the quality and context of the data behind them. As Qlik highlights in its recent Agentic AI presentation, successful AI depends not only on AI capability itself, but also on trusted data, analytical context, and governance.
This shift creates a valuable opportunity for educators to modernize analytics education and expose students to the technologies and workflows increasingly used in industry. Instead of treating analytics as a static reporting exercise, universities can introduce students to conversational analytics, predictive thinking, AI-assisted insights, and intelligent decision-making.
Through the Qlik Academic Program, accredited university educators and students receive free access to Qlik Sense Cloud, including the full capabilities of a Qlik Sense tenant. This allows students to gain hands-on experience with interactive dashboards, data exploration, AI-powered analytics features, automation, and predictive analytics tools in a real-world environment.
The program also provides free access to Qlik Learning, where educators and students can follow structured learning pathways and complete product qualifications to strengthen their analytics and data literacy skills. In addition, educators receive ready-to-use teaching resources including lesson plans, presentations, exercises, and classroom materials that can help integrate analytics into existing courses more easily.
Importantly, these resources are designed not only for technical programs, but also for business, marketing, operations, finance, and other non-technical disciplines where data literacy is becoming increasingly essential.
As AI continues to reshape the workplace, helping students understand how to work with data, analytics, and AI together will become more important than ever. Universities now have an opportunity to move beyond teaching dashboard creation alone and instead prepare students to become confident, curious, and data-driven decision makers in an increasingly AI-powered world.
If you are interested in learning more about the Qlik Academic Program, feel free to contact us at academicprogram@qlik.com. More information about the program, including how to apply, can be found at qlik.com/academicprogram.
Update, 6th of May 2026: This is now deployed in all regions.
Previously, the /api/v1/apps endpoint could be used to list all apps on a tenant. This method has always been unsupported and undocumented, and will be removed in the first week of May 2026.
If you are currently using /api/v1/apps, switch to GET with /api/v1/items instead.
This can be further filtered by choosing a resource type (such as /items?noActions=true&resourceType=app, /items?noActions=true&resourceType=script, or similar).
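As a quick sketch, the filtered request URL can be assembled like this (tenant URL is a placeholder; add your usual Authorization header when sending):

```python
# Build the /api/v1/items query that replaces the removed /api/v1/apps call.
from urllib.parse import urlencode

TENANT = "https://your-tenant.us.qlikcloud.com"  # placeholder
params = {"resourceType": "app", "noActions": "true", "limit": 100}
url = f"{TENANT}/api/v1/items?{urlencode(params)}"
# GET this URL with a Bearer token, then follow the paging links to list all apps
```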
For more information, see:
If you have any questions, we're happy to assist. Reply to this blog post or take your queries to our Support Chat.
Thank you for choosing Qlik,
Qlik Support
Several years ago, I wrote a blog post on how to create a profit and loss statement in QlikView. @Patric_Nordstrom has built upon this method and built a financial statement in Qlik Analytics with a straight table and waterfall chart using inline SVG. In this blog post, I will review how he did it.
Here is an example of the financial statement structure.
There are plain rows, such as gross sales and sales returns, where the amount is the sum of the transactions made against the accounts. There are subtotals, such as net sales and gross margin, which sum the preceding plain rows. And there are also partial sums, such as total cost of sales, which sum only a subset of the preceding plain rows.
Patric identified two functions, RangeSum() and Above(), that are suitable for calculating the subtotals and partial sums in a table. The RangeSum function sums a range of values, and the Above function evaluates an expression at a row above the current row within a column segment in a table. The Above function can take two optional parameters, offset and count, to further identify the rows to be used in the expression.
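The combination can be sketched in plain Python (illustrative only; the real RangeSum(Above(...)) expressions run in Qlik's engine): it sums `count` row values, ending `offset` rows above the current row.

```python
# What RangeSum(Above(Sum(Amount), offset, count)) evaluates to on row
# `current`: the sum of `count` rows ending `offset` rows above it.
def range_sum_above(values, current, offset, count):
    rows = range(current - offset - count + 1, current - offset + 1)
    return sum(values[r] for r in rows if 0 <= r < len(values))

# Hypothetical amounts: gross sales, sales returns, then a subtotal row.
amounts = [100, -20, 60, 40]
# On row 2, sum the 2 rows ending 1 row above (rows 0 and 1): a net-sales subtotal.
subtotal = range_sum_above(amounts, current=2, offset=1, count=2)
```

This is exactly why the script precomputes offset and count fields per row: each subtotal or partial-sum row just tells Above() how far back to reach and how many rows to include.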
The layout table below is used as a template for the financial statement.
The AC column is included here in the layout file for demo purposes but could be calculated from the accounts and transactions in the data model as well.
In the script, the layout table was loaded, and additional fields were created to support the waterfall chart, specifically offset and count fields to be used with the above function.
Here is a view of the layout table with the new fields that were created in the script.
After the layout table is loaded and the new fields are created, some master measures can be created to be used in the inline SVG expression. Here are the three master measures Patric created:
mBar is the bar length with an offset that is always 0.
mStart is the starting position of the bar in the waterfall chart and for subtotals, this is always 0.
mMax is the max bar length which is used to scale the bars in the waterfall chart.
Now the straight table can be created. The RowNr field is added for sorting purposes. The RowTitle and AC fields are added to show the account groupings in the financial statement along with their values. The inline SVG expression below, for the waterfall chart, is the last column added to the straight table. It is made up of three parts:
The result of the financial statement looks like this:
To add the text styling (bold and underline) from the layout table, the RowStyle field was added to the text style expression in the RowTitle and AC columns.
Indentation is added by using the repeat function in the RowTitle column. It will repeat a non-breaking space 6 times if there is a tab tag in the RowStyle field. Otherwise, no indentation is done.
If the RowStyle is not blank, a bar is displayed for the waterfall chart and the sum value for the actual amount (or mBar) in this case is displayed.
The chart column representation is set to Image in the properties of the straight table.
While this method looks complex, it is a simple and clean solution for adding a waterfall chart to a financial statement using straight table features and inline SVG. Using the layout table and inline SVG provides room for customization so that the financial report meets the needs and requirements of the user or customer.
Thanks,
Jennell

This shifts the focus to reducing the time spent finding the best colors and color codes, speeding up color selection; well-chosen color combinations can also improve the user experience.

The primary impact of this well-designed palette is reducing the mental effort of choosing colors. It lets users compare multiple colors side by side and check which color suits each dimension.

All dashboard developers

Color acts as a bridge between complex raw data and executive decision-making. The palette helps users pick the right matching colors, with their color codes, without having to check each color one by one to see how it looks.


Learning resource

All Qlik stakeholders

Learning resource