Hi everyone,
Want to stay a step ahead of important Qlik support issues? Then sign up for our monthly webinar series where you can get first-hand insights from Qlik experts.
The Techspert Talks session from March looked at Optimizing Qlik Cloud App Performance.
But wait, what is it exactly?
Techspert Talks is a free monthly webinar where you can hear directly from Qlik Techsperts on topics that are relevant to customers and partners today.
In this session we will cover:
The following connectors will be removed and are no longer recommended for use:
This change is driven by Slack’s updated app guidelines and requirements, which now classify exporting message data as unsuitable for external applications. The Qlik Automation and Qlik Talend Cloud Slack connectors are unaffected.
The Facebook Insights connector will be deprecated at the same time.
The removal timeline is as follows:
If you have any questions, do not hesitate to contact us through the Qlik Customer Portal.
Thank you for choosing Qlik,
Qlik Support
Qlik Cloud will undergo scheduled maintenance in March 2026. We’re upgrading and scaling our infrastructure to deliver a faster, more reliable, and seamless experience for you. These improvements strengthen performance, enhance stability, and ensure our platform continues to grow with your needs.
The maintenance windows will occur per region and are expected to last a maximum of 60 minutes.
Qlik Cloud will undergo scheduled maintenance that includes two separate impacts:
Qlik Cloud will experience a functional degradation of 30 minutes, during which its Identity Services are impacted:
Impacted Regions:
During this time:
No other impact is expected. Existing users can continue to log in and use their assigned Qlik Sense applications as normal. Automation and report features will continue to function without interruption.
If the existing user’s IdP (Identity Provider) information has changed, they may not be able to log in during the maintenance window. You may see the error BAD-GATEWAY, Invalid response from the upstream service.
Modifying roles or permissions during the maintenance window leads to a Failed to update role error with the error code IDENTITIES-10405.
A full outage of Qlik Answers during the 60-minute maintenance window. Knowledge base indexing and any queries will fail to run.
Impacted Regions:
Be informed about the upcoming maintenance and alert your user base if needed. No direct action is required on your end in preparation.
None.
The following tables include the maintenance start time for each affected region. To reiterate, the Qlik Cloud identity services are affected for 30 minutes, while the Qlik Answers maintenance is planned to last 60 minutes.
| Region | Maintenance Start |
| --- | --- |
| Asia-Pacific (Tokyo) (ap-ne-1) | |
| Asia-Pacific (Sydney) (ap-se-2) | |
| Europe (Frankfurt) (eu-c-1) | |
| Asia-Pacific (Mumbai) (ap-s-1) | |
| Europe (Ireland) (eu-w-1) | |
| Europe (London) (eu-w-2) | |
| Asia-Pacific (Singapore) (ap-se-1) | |
| North America (N. Virginia) (us-e-1) | |
To track further updates during the scheduled Qlik Cloud Maintenance, please visit our Qlik Cloud Status page. This blog post will be updated with additional information where necessary.
Thank you for choosing Qlik,
Qlik Support
Hello Qlik Sense Admins!
Are you looking for best practices on how to manage and maintain your Qlik Sense client-managed environment? Then look no further than our Qlik Sense Admin Playbook, which is now available directly on Qlik Help.
The playbook provides you with a repository of administrative best practices, organized by cadence and category for Qlik Sense Enterprise on Windows, and can be found here: Qlik Sense Admin Playbook
It can also be accessed from the help site's top navigation bar > Playbooks (A) > Qlik Sense Administrator Playbook (B):
Absolutely. To suggest improvements, please use the Leave your feedback here option on whichever page you want to comment on.
Some of you may already be familiar with the playbook's previous iteration, and you will find that the old URL now redirects to the help site accordingly. The old content will be retired later in the year.
Thank you for choosing Qlik,
Qlik Support
As business demand for AI grows, the data landscape is becoming increasingly complex. The role of data is shifting from "data for humans to use" to "data for machines (agents) to make decisions with." At the same time, AI itself is evolving from a tool operated by humans into agentic AI that reasons and acts autonomously. With the focus of AI adoption now moving from implementation to operations, organizations must be able to trust AI-driven decisions and actions, optimize costs, and retain the flexibility to prepare for the business of the future.
In February 2026, Qlik announced products that deliver a new agentic experience, and we will continue to release innovative products that keep pace with rapid change.
How can Qlik's data analytics and data integration solutions turn the power of AI into business outcomes? In this webinar, technical staff from QlikTech Japan will introduce the roadmap for Qlik's continuously evolving products. We look forward to your participation.
In this blog post, I will review some data flow processors that can be used to prepare your data in a data flow. Let’s start by quickly reviewing what a data flow is. In Qlik Cloud Analytics, a data flow is a no-code experience that lets you prepare your data visually with drag-and-drop capabilities. It is intuitive, easy to use, and does not require the user to have scripting experience. Data flow processors, along with sources and targets, are used to build a data flow; each processor handles a specific data transformation task. Here you will find a full list of the available data flow processors.
This blog will touch on a few processors to familiarize you with how they work and how easy they are to use. To begin, a data flow must first be created, and there is more than one way to do this: from the Qlik Cloud Analytics catalog, click the + Create new button and select Data flow, or navigate to Prepare data from the menu and click the add Data flow button at the top of the page.
Once you name the new data flow, navigate to the Editor.
On the left, there are sources, processors, and targets. The source is the data input, the processors are the data transformation types, and the targets are the data outputs. Before we can look at the processors, we need to select our input data from the data catalog or a connection. Once that is in place, we can begin to explore the processor options. There are several data flow processors – too many to review in this blog – but I will review three of them: the Filter processor, the Join processor, and the Unpivot processor.
Filter Processor
The filter processor filters data based on a condition. A processor can be added to the data flow canvas by dragging and dropping the processor onto the canvas or by clicking on the menu in the data source and selecting Add processor.
If you drag and drop the processor onto the canvas, you will need to connect the dots between the input and processor. If you add it from your data source menu, the dots will automatically be connected for you.
Each processor has a properties panel where the processor can be configured. In this example, let’s use the filter to select employees who live in the United States. To do this, first select the field to process – Country. There is an option to apply a function but one is not needed in this example. The operator will be equal, and the Value will be United States. Once the properties are entered, click the Apply button to save.
At the bottom of the page, I can preview the script (matching and not matching records) for the filter processor I just applied and see a preview of the data.
From the filter processor menu, there are a few options for my next step as seen below.
Add matching target will add a target to the data flow for the records that match the Country = United States filter. Add non-matching target will add a target for the records that do not match that filter. Matching and non-matching processors can also be added. For this example, I will add a matching target and, in the properties panel, select the space, the extension (.qvd, .parquet, .txt, or .csv), and the name of the target file. Like the sources, the target can be a data file or a connection. Once I click Apply in the properties panel, I will see a message at the top right indicating that my flow is valid and ready to run. Running the data flow will grab my Employees dataset, filter the data by country, and store the results in a QVD named US Employees.
I now have a data file that has been transformed and prepared for use.
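Although the Filter processor itself is no-code, the matching/non-matching split it performs is easy to picture in plain code. Here is a minimal Python sketch of the same condition (field Country, operator equal, value United States); the sample employee rows are invented for illustration:

```python
# Conceptual sketch of the Filter processor: split rows into
# matching and non-matching sets based on one condition.
employees = [
    {"Name": "Ava",  "Country": "United States"},
    {"Name": "Liam", "Country": "Canada"},
    {"Name": "Mia",  "Country": "United States"},
]

# Condition: Country equals "United States" (field, operator, value).
matching = [row for row in employees if row["Country"] == "United States"]
non_matching = [row for row in employees if row["Country"] != "United States"]

print(len(matching), len(non_matching))  # prints: 2 1
```

In the data flow, the matching target would receive the first set and the non-matching target the second; the generated script preview at the bottom of the page expresses the same split in Qlik script.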
Join Processor
Now, let’s look at how we can join two data inputs into one data output. To do this, two data inputs are required. In the example below, ARSummary and ARSummary-1 are the two data inputs.
In the properties panel of the join processor, the join type is selected and the fields that should be used to link the two tables are selected. You can learn more about joins here. Once the target is added, the data flow can be run, and the result will be a single table with the records from the ARSummary table and the associated records from the ARSummary-1 table.
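Conceptually, the Join processor indexes one input by the link fields and merges matching rows from the other. The following Python sketch shows an inner join on a single key; the two small tables and the CustomerID/Balance fields are invented stand-ins for ARSummary and ARSummary-1:

```python
# Conceptual sketch of the Join processor: link two inputs on a key
# field and emit one combined table (inner join shown here).
ar_summary = [
    {"CustomerID": 1, "Customer": "Acme"},
    {"CustomerID": 2, "Customer": "Globex"},
]
ar_summary_1 = [
    {"CustomerID": 1, "Balance": 1200.0},
    {"CustomerID": 2, "Balance": 450.0},
]

# Index the right-hand table by the key, then merge matching rows.
by_key = {row["CustomerID"]: row for row in ar_summary_1}
joined = [
    {**left, **by_key[left["CustomerID"]]}
    for left in ar_summary
    if left["CustomerID"] in by_key
]

print(joined[0])  # {'CustomerID': 1, 'Customer': 'Acme', 'Balance': 1200.0}
```

Choosing a left or outer join type in the properties panel changes only which unmatched rows are kept.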
Unpivot Processor
If you are familiar with scripting, the unpivot processor is like a crosstable load. It allows you to rearrange a table so that column data becomes row data. It can transform a table like this:
To this:
Here is an example data flow with the unpivot processor:
In the properties panel of the unpivot processor, there are only a few settings to update. The first is the unpivot fields, where the fields that we want to unpivot are selected. In this example, we want the years to be stored as row-level data, so we select all of the year fields.
The Attribute field name is the name we want to give to the unpivoted fields – in this case, Year. The Value field name is the name of the data associated with the fields we are unpivoting – in this example, Sales.
After applying the changes and running the data flow, we will have a table transformed based on our specifications without any code.
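The wide-to-tall reshaping that the Unpivot processor (and a CROSSTABLE load) performs can be sketched in a few lines of Python. The sample product/year table below is invented for illustration; the setting names mirror the properties panel:

```python
# Conceptual sketch of the Unpivot processor: year columns become
# rows, with an attribute field (Year) and a value field (Sales).
wide = [
    {"Product": "A", "2023": 100, "2024": 120},
    {"Product": "B", "2023": 80,  "2024": 95},
]

unpivot_fields = ["2023", "2024"]   # the columns to unpivot
attribute_name = "Year"             # Attribute field name setting
value_name = "Sales"                # Value field name setting

tall = [
    {"Product": row["Product"], attribute_name: field, value_name: row[field]}
    for row in wide
    for field in unpivot_fields
]

print(len(tall))  # prints: 4 (one row per product/year combination)
```

Each non-unpivoted column (Product here) is repeated on every output row, which is exactly what makes the result usable as row-level fact data.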
In this blog post, we touched upon three of the many processors that can be used in a data flow. Note that a data flow can have many sources, processors and targets – it all depends on your needs. The visual interface of a data flow makes it easy to prepare your data without any code in an appealing design that is easy to follow. Try it out!
Thanks,
Jennell
I am pleased to introduce the Qlik Academic Program Educator Ambassador for 2026, Chee-wai Ho from Republic Polytechnic, Singapore. This is his second term as Educator Ambassador, and we are pleased to have him back!
Chee-wai has been actively involved in upskilling adult learners in data literacy for more than five years through Republic Polytechnic’s Specialist Diploma in Business Analytics (SDBA) in Singapore. According to Chee-wai, “In practice, data literacy translates into identifying and correcting data issues, followed by data visualization to make informed business decisions. This is also the foundation for fruitful predictive and prescriptive analytics.”
Today, the Data Alerts feature in Qlik Cloud Analytics retains a history of data that meets the alert condition indefinitely. An upcoming change will introduce an automated purge.
For more information on Data Alerts, see Monitoring data with alerts.
Data Alert execution records (seen in the Data Alert history) will be purged after 90 days, or after 10 records if they are not within the 90-day window.
The Data Alert feature was not designed to support long-term retention of data sets outside the core Qlik Sense App, but to allow you to detect outliers and anomalies in your data using quick and timely alerts.
In addition, this will align the Data Alerts history with other retention periods on the platform and help eliminate ambiguity.
Unless data alert history is required beyond the set period, no action is needed from your end.
If your use case requires additional history, use the public APIs to export alert evaluations and their associated data. The Data Alerts REST API can be used to perform any required backup and retention.
For more information, see Data Alerts REST | qlik.dev.
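A backup along these lines is mostly a matter of paging through execution records before the purge window closes. The sketch below is illustrative only: the endpoint path and response shape (a `data` array plus a `links.next.href` cursor) are assumptions modeled on common Qlik Cloud REST conventions, so verify them against the Data Alerts reference on qlik.dev; the `fetch` callable stands in for an authenticated HTTP GET.

```python
# Illustrative sketch of backing up alert history via a REST API.
# The endpoint path and payload shape are ASSUMPTIONS; check the
# Data Alerts REST reference on qlik.dev for the actual contract.
import json

def backup_alert_history(fetch, alert_id):
    """Collect all pages of execution history for one alert.

    `fetch(path)` is any callable that performs an authenticated GET
    and returns the decoded JSON body (e.g. built on requests/urllib).
    """
    records, path = [], f"/api/v1/data-alerts/{alert_id}/executions"
    while path:
        page = fetch(path)
        records.extend(page.get("data", []))
        # Follow cursor-style pagination if the API returns a next link.
        path = page.get("links", {}).get("next", {}).get("href")
    return records

# Example with a stub fetcher standing in for real HTTP calls:
pages = {
    "/api/v1/data-alerts/abc/executions": {
        "data": [{"id": 1}], "links": {"next": {"href": "/p2"}}},
    "/p2": {"data": [{"id": 2}], "links": {}},
}
history = backup_alert_history(pages.__getitem__, "abc")
print(json.dumps(history))  # prints: [{"id": 1}, {"id": 2}]
```

Injecting the fetcher keeps authentication (API keys, OAuth) out of the paging logic and makes the routine easy to test offline.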
Key endpoints:
The change will be applied starting Monday, April 20th, 2026. During this week:
If you have any questions, we're happy to assist. Reply to this blog post or take your queries to our Support Chat.
Thank you for choosing Qlik,
Qlik Support
Hello Qlik Talend admins!
We introduced a new Content Delivery Network (CDN) URL to support the Talend Management Console UI. This change was previously communicated in the product and can be found as an item in our release notes from R2026-01.
Should you encounter the error We couldn't load the application when accessing your Talend Management Console, check with your network team to verify that the new CDN URL has been added to any firewall exceptions.
For all the required allowlist URLs, see Adding URLs to your proxy and firewall allowlist.
A relevant support article is available at Qlik Talend Management Console Error loading: We couldn't load the application.
If you have questions about this change, contact your CSM (Customer Success Manager) or the Support team.
Thank you for choosing Qlik,
Qlik Support
In a market racing toward AI-driven outcomes, organizations are discovering a simple truth: AI is only as reliable as the data behind it.
For years, governance has often been treated as a passive function — metadata in a catalog, rules defined in isolation, data issues addressed reactively. That approach doesn’t hold up in an AI-first world.
It’s time for governance to move from passive oversight to active stewardship — embedded, accountable, and operational at the speed of the business.
Analytics only deliver value when insight is trusted. And trust comes from understanding the data behind it: where it came from, how it’s defined, and whether it’s fit for use.
Data Products for Analytics are now available in Qlik Cloud Analytics Premium and Enterprise, as well as Qlik Sense Enterprise.
With this release, you can turn your existing QVDs and datasets into governed, discoverable, and reusable data products, adding data quality, context, and ownership directly within your analytics environment.
What this brings:
Read more in our Innovation blog: Data Products for Analytics Now Available
And once you're ready to get started, here's what you'll need:
Thank you for choosing Qlik,
Qlik Support