The write table was introduced to Qlik Cloud Analytics last month, so in this blog post I will review how it works and how it can be added to an app. The write table looks like the straight table, but editable columns can be added to it to update or add data. The updated or added data is visible to other users of the app, provided they have the correct permissions. Read more on write table permissions here. Also note that if you are using a touch screen device, you will have to disable touch screen mode for the write table to work. Looking at the write table for the first time, I found it intuitive and easy to use. Let’s create a write table with some editable columns to see how easy it is.
The write table object can be added to a sheet like any other visualization. Once it is added, columns can be added the same way dimensions and measures are added to a straight table. Below is a small write table with course information including the course ID, course name, instructor and location.
To add an editable column from the properties panel, click on the plus sign (+) and select Editable column.
The new editable column will be added. In the properties for the column, you can modify the title, and from the Show content drop-down you can choose either manual user input or single selection. Manual user input creates a free-form column that the user can type into. Single selection lets you create a drop-down list of options that the user can choose from.
I will change the title to Course Level, and for Show content I will select single selection and add three list items by typing each one and clicking the plus sign to add it to the list. The list items will be displayed in the drop-down in the order they are added, but they can be rearranged by hovering over a list item and dragging it to the desired position. A list item can also be deleted by hovering over it and clicking the delete icon that appears to the left.
When you come out of edit mode, the message below will appear for the editable column prompting you to define a set of primary keys.
Once you click Define, you will see the pop-up below where you can select the column(s) that will be used for the unique primary key. This is necessary to save and map the data entered in the editable column to the data model. I will select the CourseID column as the primary key.
Once this is done, I will see the Course Level column with the drop-down of list items I added.
Let’s add one more editable column that takes manual user input and name it Notes.
As I add data or update the editable columns, the cells will be flagged orange to indicate that my edits have not been saved. Once I save the table, they will be flagged green and any new values entered are visible to other users. A cell will be blue if another user is currently making changes to the row, thus locking it. Changes are saved for 90 days in a change store (temporary storage location) provided by Qlik. After 90 days, the data will be deleted. It is also important to note that if an editable column is deleted, the data will be lost. This is also the case if the primary key used for the editable column is removed.
It is possible to retrieve the changes from a change store via the change-stores API or an automation. Using the REST connection and the change-store API, the changes made in a write table can be retrieved and stored in a QVD (if needed for more than 90 days) or added to the data model for use in other analytics. Qlik Automate can also be used to retrieve data from the change-store using the List Current Changes From Change Store block or the List Change Store History block. From there the data can be stored permanently in an external system for later use or used in the automation for another process. Qlik Help offers steps for retrieving data from a change-store.
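To give a feel for the retrieval pattern, here is a minimal, framework-agnostic sketch of paging through a change store's records. The function and field names (fetchPage, data, next) are illustrative assumptions of mine, not the documented change-stores API; check Qlik Help for the actual endpoint and response shape.

```javascript
// Illustrative sketch only: cursor-based paging over change-store records.
// `fetchPage` is a stand-in for your actual REST call; the `data` and `next`
// fields are assumed names, not the documented API response shape.
async function fetchAllChanges(fetchPage) {
  const all = [];
  let cursor = null;
  do {
    // One request against the change-stores API (endpoint assumed)
    const page = await fetchPage(cursor);
    all.push(...page.data);
    cursor = page.next || null; // follow the pagination cursor until exhausted
  } while (cursor);
  return all;
}
```

Once retrieved, the combined records could then be written to a QVD through a reload or pushed to an external system, as described above.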
The write table can make it easy for users to add updates, feedback and important information that may not be available in the data model. Not only can this be done quickly, but it can be immediately visible to other colleagues. Learn more about the write table in the Product Innovation blog along with links to videos and write table FAQs.
Thanks,
Jennell
During recent testing, Qlik has identified an issue that can occur after upgrading Qlik Sense on-premise to specific releases. While the upgrade completes successfully, some environments may experience problems with ODBC-based connectors after the upgrade.
The issue is upgrade path dependent and relates to connector components that are included as part of the Qlik Sense client-managed installation.
Recommendation: After upgrading Qlik Sense on-premise, verify your connector functionality as part of your post-upgrade checks, especially when upgrading from earlier Qlik Sense Enterprise on Windows May 2025 releases.
The issue can typically be identified by missing files after the upgrade. In this example, the Athena connector is not working, and the following file is missing:
C:\Program Files\Common Files\Qlik\Custom Data\QvOdbcConnectorPackage\athena\lib\AthenaODBC_sb64.dll
In this example, all ODBC connectors stopped working:
C:\Program Files\Common Files\Qlik\Custom Data\QvOdbcConnectorPackage\QvxLibrary.dll
With the QvxLibrary.dll missing, both existing and newly created ODBC connections will fail.
A fix will be delivered in upcoming patches. Stay up to date with the most recent version by reviewing our Release Notes.
If your connectors have been impacted by this upgrade, roll back your ODBC connector package to the previously working version based on a pre-update backup. See How to manually upgrade or downgrade the Qlik Sense Enterprise on Windows ODBC Connector Packages for details.
The workaround is intended to be temporary. Apply the fixed Qlik Sense Enterprise on Windows patch for your respective version as soon as it becomes available.
If you have any questions, we're happy to assist. Reply to this blog post or take your queries to our Support Chat.
Thank you for choosing Qlik,
Qlik Support
Hello Qlik Cloud Admins!
As part of our ongoing commitment to provide the best possible experience for Qlik Cloud users, we are removing the Basic and Full User construct from tenants on capacity-based subscriptions, simplifying to just User.
User capabilities on capacity-based tenants are governed by access control. Thus, a difference in user type designation is no longer required to determine what a user can do in the tenant.
Current access control configuration for existing users remains unchanged. You may have to modify the User Default role, assign users to built-in roles, or create new custom roles to support access to tenant features and capabilities. See Roles and permissions for users and administrators for information on your tenant’s access control system.
This is targeted for February 2nd, 2026.
If you have any questions, we're happy to assist. Reply to this blog post or take your queries to our Support Chat.
Thank you for choosing Qlik,
Qlik Support
In December 2025, the Apache Project announced a vulnerability in Apache Tika (CVE-2025-66516) and provided patches to resolve the issue. Qlik has been reviewing our usage of the Apache Tika product suite and has identified a limited impact as follows.
Apache Tika is used in several Qlik products. However, the vulnerability is only relevant to the case of a Talend Studio route that uses Apache Tika to parse PDFs.
No other use case or product is impacted by the vulnerability. Qlik Cloud and Talend Cloud are not impacted by this vulnerability.
Nevertheless, we are patching all our products that contain Apache Tika out of an abundance of caution. Be on the lookout for a series of product patches for supported and affected versions.
The releases listed in the table below contain the updated version of Apache Tika, which addresses CVE-2025-66516.
Always update to the latest version. Before you upgrade, check if a more recent release is available.
| Product | Patch | Release Date |
| --- | --- | --- |
| Talend Studio | R2025-11v2 | December 16, 2025 |
| Talend Administration Center | QTAC-1472 | December 19, 2025 |
| Talend ESB Runtime | R2025-12-RT | December 19, 2025 |
| Talend Remote Engine Gen 2 | Connectors 1.58.8 | December 23, 2025 |
| Talend Data Stewardship | TPS-6013 | December 23, 2025 |
| Talend Data Preparation | TBD | TBD |
Thank you for choosing Qlik,
Qlik Support
Overview
This SaaS-only feature is an API-based solution that allows automated tenant creation and configuration, so you can onboard customers to the Qlik platform immediately as they purchase your host application or solution. For Enterprise users, you’ll be able to support more structured Dev, Test, Acceptance, Production (DTAP) use cases. Additionally, it provides tenant separation between internal corporate use cases and external (extranet) use cases.
Getting Started
To move forward with this functionality, contact your Account Manager. Once the license is enabled, you’ll be able to access it from My Qlik Portal.
Configuration Requirements
Limitations
Additional Resources
Thank you for choosing Qlik!
Qlik Global Support
Qlik's O365 Add-in offering for report developers has expanded with two new add-ins that enable analytic reports in Word documents and PowerPoint presentations.
Report developers can now:
Qlik add-ins for Microsoft Office are installed using a manifest file. If you are using an existing manifest, you will need to download and deploy an updated file to access the new add-ins. See the deployment guide Deploying and installing Qlik add-ins for Microsoft Office.
The manifest covers all three productivity tool Add-ins. They cannot be deployed individually.
Qlik’s integration testing of the Microsoft PowerPoint APIs shows that the O365 Add-in for PowerPoint can at times be unstable or slow. Our investigation with Microsoft confirms this is a known challenge: some APIs behave differently on the web than on the desktop.
If your report developers have difficulty with the online PowerPoint Add-in, contact Qlik Support to open a case with us.
While we investigate the integration with Microsoft to determine if a solution is possible, consider developing your reports with the desktop version of PowerPoint.
If you have any questions, we're happy to assist. Reply to this blog post or take your queries to our Support Chat.
Thank you for choosing Qlik,
Qlik Support
At Qlik, we deeply understand the importance of practicing data analytics—not just learning it theoretically. Today’s students need hands-on experience with modern analytics tools, exposure to real datasets, and an understanding of how data supports decision-making in real organizations. This is exactly where the Qlik Academic Program aims to make a difference.
Through the program, educators and students gain free access to Qlik’s end-to-end analytics platform, including tools for data visualization and analytics. Beyond software access, we also provide a wide range of teaching and learning resources, such as ready-to-use datasets, academic licenses, training materials, on-demand learning paths, tutorials, sample apps, and use-case-driven exercises that can be easily embedded into existing curricula.
We also support students in developing industry-recognized Qlik qualifications, helping them demonstrate practical analytics skills that are highly valued in today’s job market. For educators, we offer ongoing enablement, guest lectures, workshops, and direct support to ensure Qlik is effectively integrated into modules across disciplines from business analytics and data science to supply chain, finance, and beyond.
As we move forward in 2026, our focus remains on collaboration, accessibility, and real-world relevance. We’re excited to continue working closely with our academic partners across EMEA and to support the next generation of data-driven professionals—starting strong, and staying strong, together.
If you’d like to learn more about the Qlik Academic Program, you can visit our page at www.qlik.com/academicprogram.
Hi everyone,
For various and valid reasons, you might need to migrate your entire Qlik Sense environment, or part of it, somewhere else.
In this post, I’ll cover the most common scenario: a complete migration of a single or multi-node Qlik Sense system, with the bundled PostgreSQL database (Qlik Sense Repository Database service) in a new environment.
So, how do we do that?
If direct assistance is needed and you require hands-on help with a migration, engage Qlik Consulting. Qlik Support cannot provide walk-through assistance with server migrations outside of a post-installation and migration completion break/fix scenario.
Let’s start with a little bit of context: say that we are running a three-node Qlik Sense environment (Central node / Proxy-Engine node / Scheduler node).
On the central node, I also have the Qlik shared folder and the bundled Qlik Sense Repository Database installed.
If you have previously unbundled your PostgreSQL install, see How To migrate a Qlik Sense Enterprise on Windows environment to a different host after unbundling PostgreSQL for instructions on how to migrate.
This environment has been running well for years, but I now need to move it to brand-new hardware to ensure better performance. Reinstalling everything from scratch is not an option because the system has been heavily used and customized already; redoing all of that to replicate the environment would be too difficult and time-consuming.
I start off by going through a checklist to verify that the new system I’m migrating to is up to the task:
And then I move right over to…
The first step to migrate your environment in this scenario is to back it up.
To do that, I would recommend following the steps documented on help.qlik.com (make sure to select your Qlik Sense version at the top left of the screen).
Once the backup is done you should have:
Then we can go ahead and…
The next steps are to deploy and restore your central node. In this scenario, we will also assume that the new central node will have a different name than the original one (just to make things a bit more complicated 😊).
Let’s start by installing Qlik Sense on the central node. That’s as straightforward as any other fresh install.
You can follow our documentation. Before clicking on Install, simply uncheck the box “Start the Qlik Sense services when the setup is complete.”
The version of Qlik Sense you are going to install MUST be the same as the version the backup was taken on.
Now that Qlik Sense is deployed you can restore the backup you have taken earlier into your new Qlik Sense central node following Restoring a Qlik Sense site.
Since the central node server name has also changed, you need to run a bootstrap command to update Qlik Sense with the new server name. Instructions are provided in Restoring a Qlik Sense site to a machine with a different hostname.
The central node is now almost ready to start.
If you have changed the Qlik Share location, then the UNC path has also changed and needs to be updated.
To do that:
At this point, make sure you can access the Qlik Sense QMC and Hub on the central node. Also check that you can load applications (using the central node engine, of course). You can verify in QMC > Service Cluster that the changes you previously made have been correctly applied.
Troubleshooting tip: If, after starting the Qlik Sense services, you cannot access the QMC and/or Hub, please check the knowledge article How to troubleshoot issue to access QMC and HUB.
You’ve made it here? Then congratulations, you have passed the most difficult part.
If you had already configured rim nodes in your environment that now need to be migrated as well, you might not want to remove them from Qlik Sense and add new ones, since you would lose pretty much all the configuration you have done on those rim nodes.
In the following few steps, I will show you how to connect your “new” rim node(s) while keeping the configuration of the “old” one(s).
Let’s start by installing Qlik Sense on each rim node like it was a new one.
The process is pretty much the same as installing a central node, except that instead of choosing “Create Cluster”, you need to select “Join Cluster”.
Detailed instructions can be found on help.qlik.com: Installing Qlik Sense in a multi-node site
Once Qlik Sense is installed on your future rim node(s) and the services are started, we will need to connect to the “new” Qlik Sense Repository Database and change the hostname of the “old” rim node(s) to the “new” one so that the central node can communicate with them.
To do that, install pgAdmin 4 and connect to the Qlik Sense Repository Database. Detailed instructions are in the knowledge article Installing and Configuring PGAdmin 4 to access the PostgreSQL database used by Qlik Sense or NPrinting.
Once connected, navigate to Databases > QSR > Schemas > public > Tables.
You need to edit the LocalConfigs and ServerNodeConfigurations tables and change the Hostname of your rim node(s) from the old value to the new corresponding one. (Don’t forget to save the change.)
LocalConfigs table
ServerNodeConfigurations table
Once this is done, you will need to restart all the services on the central node.
When you have access back, log in to the QMC and go to Nodes. Your rim node(s) should display the following status: “The certificate has not been installed”.
From this point, you can simply select the node, click Redistribute, and follow the instructions to deploy the certificates on your rim node. After a moment the status should change, and you should see the services up and running.
Do the same thing on the remaining rim node(s).
Troubleshooting tip: If the rim node status does not show “The certificate has not been installed”, it means that either the central node cannot reach the rim node or the rim node is not ready to receive new certificates.
Check that port 4444 is open between the central node and the rim node, and make sure the rim node is listening on port 4444 (run netstat -aon in a command prompt).
Still no luck? You can completely uninstall Qlik Sense on the rim node and reinstall it.
At this point, your environment is completely migrated and most things should work.
There is one thing to consider in this scenario. Since the Qlik Sense certificates between the old environment and the new one are not the same, it is likely that data connections with passwords will fail. This is because passwords are saved in the repository database with encryption. That encryption is based on a hash from the certs. When the Qlik Sense self-signed cert is rebuilt, this hash is no longer valid, and so the saved data connection passwords will fail. You will need to re-enter the passwords in each data connection and save. This can be done in QMC > Data Connections.
See knowledge article: Repository System Log Shows Error "Not possible to decrypt encrypted string in database"
Do not forget to turn off your old Qlik Sense Environment once you are finished. While Qlik's Signed License key can be used across multiple environments, you will want to prevent accidental user assignments from the old environment.
Note: If you are still using a legacy key (tokens), the old environment must be shut down immediately, as you can only use a legacy license on one active Qlik Sense environment. Reach out to your account manager for more details.
Finally, don’t forget to apply best practices in your new environment:
Data is at the center of the AI revolution. But as Bernard Marr explains in his Forbes article The 8 Data Trends That Will Define 2026, the biggest changes are not just technical; they are changing how people work, learn, and build careers.
These 2026 data trends are already reshaping education and jobs.
AI agents and agent-ready data are changing how work gets done, making it essential to understand how data is structured, accessed, and secured.
Generative AI for data engineering is automating technical tasks, shifting skills toward design, logic, and critical thinking.
Data provenance and trust are becoming core requirements as data volumes grow, and decisions rely more on AI.
Compliance and regulation are expanding globally, making responsible data use a necessary skill across roles.
Generative data democracy allows more people to access insights, increasing the importance of data literacy for everyone.
Synthetic data is opening new opportunities while raising ethical and privacy considerations.
Data sovereignty is shaping how organizations manage data across borders and jurisdictions.
Together, these trends show why data literacy is becoming a universal skill for education and careers in 2026.
The Qlik Academic Program helps academic communities respond to these changes by putting data literacy at the center of learning. Students develop the ability to read, question, and explain data while working hands-on with real analytics tools to explore data, build insights, and understand how AI-driven decisions are made. Professors are supported with training and teaching resources that make it easier to embed data literacy and modern data topics across disciplines.
As the Forbes article makes clear, the future belongs to those who can work confidently with data, alongside AI, within regulations, and with trust.
By giving students, professors, and universities free access to analytics software, learning content, and certifications, the Qlik Academic Program helps education stay aligned with the data trends shaping 2026 and prepares learners for the jobs of tomorrow.
Join our global community for free: Qlik Academic Program: Creating a Data-Literate World
Don't miss our previous Q&A with Qlik! Hear from our panel of experts to help you get the most out of your Qlik experience.
Our Qlik experts offer solutions to common issues encountered when upgrading Qlik Sense, along with best practices and important configuration settings.
Hi Qlik Community,
We hope everyone had a wonderful holiday season and a great start to the new year! As we settle into 2026, we wanted to take a moment to reflect on what we’ve been working on over the past couple of months and share a few important updates, improvements, and upcoming opportunities with you.
Here’s a look at what’s new across the Qlik Community!
New Data Integration & Quality Forum: Qlik Open Lakehouse
We’re excited to share that we’ve launched a brand-new Qlik Open Lakehouse forum!
This space is dedicated to discussions around Open Lakehouse architectures, Apache Iceberg, and how Qlik supports modern, flexible data ecosystems. Whether you’re exploring open formats, optimizing performance, or thinking about governance at scale, this forum is designed to support those conversations.
We encourage you to subscribe and start asking questions and sharing insights!
Forum Updates
We’ve recently merged the Data Integration Component Development forum into the Talend Studio forum to better align related discussions across the Community. This change helps ensure questions, knowledge, and expertise live in a single, centralized space.
A dedicated label is available in the Talend Studio forum to clearly identify posts that originated from the Data Integration Component Development forum.
Homepage Carousel Improvements
You may have noticed a change to the Community homepage carousel. We’ve slowed down the rotation to give you more time to absorb each message, and we’ve added a pause button so you can control the experience yourself. This update reflects our continued commitment to making Community content easier to explore and more enjoyable to use.
Recent Fixes & Improvements
We’ve also resolved a few behind-the-scenes issues:
Thank you to everyone who flagged these gaps!
Scavenger Hunt Recap
Thank you to everyone who participated in our Qlik Community Scavenger Hunt!
We saw fantastic engagement across the Community, with over 100 submissions during the hunt. Five winners were selected and have received exclusive Qlik swag as a thank-you for completing the challenge.
If you missed it or want to see how it played out, you can check out the full recap post here!
Keep an eye out; we hope to bring back more interactive Community activities like this in the future.
Just in Case You Missed It -
Before we wrap up, here are a few things worth checking out:
Qlik Connect Session Catalog
Have you explored the Qlik Connect session catalog yet? Discover 100+ sessions, workshops, certifications, and networking opportunities. Check it out!
Trends 2026 Outlook – January 14
Join us for the highly anticipated Trends 2026 Outlook with Dan Sommer, Market Intelligence Lead at Qlik. He’ll reveal the trends underpinning a new framework for powering AI that ensures integrity, connects every system seamlessly, and fuels innovation at the edge, and map out how to get there. Registration is now open, and the session will be available on January 14.
Q&A with Qlik
Looking to better understand how to get started finding data insights with Qlik Cloud? Our Q&A with Qlik sessions give you the chance to connect directly with Qlik experts who can help guide you through the basics and beyond. Register here!
That’s all for now! Thank you, as always, for being an active and engaged part of the Qlik Community. We’re looking forward to an exciting year ahead and can’t wait to share more with you soon.
Your Qlik Community Managers,
Sue, Jamie, Caleb, and Brett
Hello Qlik Talend admins,
Qlik is updating the Qlik Talend Nexus repository. The changes are rolled out in a phased approach. Phase One was completed on July 16th, 2025.
Phase Two is scheduled for January 26th, 2026.
The impact is minimal.
Qlik Talend Studio:
Qlik Talend Administration Center
If you have any questions, we're happy to assist. Reply to this blog post or start a chat with us.
Thank you for choosing Qlik,
Qlik Support
Qlik and Qlik Cloud are always innovating, adding new features to make the user experience even better. Today I would like to tell you about Qlik’s newest feature: Templates, a new addition to Qlik Cloud that prompts the user with template options when creating a new sheet.
To use Templates, go into any Qlik Cloud app and click on ‘sheets’ then ‘Create new sheet’.
There you will be greeted with the new Templates feature. Please know that if you do not wish to see this screen when you are creating a new sheet, you can simply uncheck the box next to ‘Show when creating a sheet’.
The Templates feature is broken down into a few different categories to help you navigate it. The vast number of templates available can seem a bit overwhelming, but if there is a template you use often, you can click the star next to it to add it to your ‘Favorites’.
Additionally, if you would like the freedom to create your own sheet, without a template, you can simply select the ‘Empty sheet’ option.
Using a template is easy!
Let’s take a look at one of the templates from the Highlights section ‘Charts with filters on the side’. With just the click of a button, my new sheet with the various placeholders for my charts has been created.
From here I can begin creating my sheet. As we can see, charts have been added to the sheet, including a Straight table at the bottom, KPIs at the top, and Bar charts in the middle with our Filter Panes to the side. Of course, I have the freedom to add, delete and change these charts as I see fit.
Then we add a bit of color and spacing.
And we have a finished sheet! Of course there is still so much more we could do with this sheet to customize it to our needs, but it’s a start!
There are so many templates that you can use to help create your sheets. Take a look at the new Templates feature and drop a comment with which template you think will be most useful. Thank you for reading!
Not sure where to begin your Qlik journey? The Qlik Skills Assessment is a free, easy-to-use tool that helps you quickly evaluate where you are on your Qlik learning journey. Once you complete the assessment, you’ll receive training recommendations designed to strengthen and expand your skills.
We’ve expanded our assessments to include Qlik Data Analytics and Data Integration, with 11 Skills Assessments now available to you.
Why take a Skills Assessment?
How do I take a Skills Assessment?
Track your progress
Take your Qlik Skills Assessment today to understand where you are on your learning journey—and get the guidance you need to build and expand your Qlik expertise with confidence. 🚀
If you have been building custom web applications or mashups with Qlik Cloud, you have likely hit the "10K cells ceiling" when using Hypercubes to fetch data from Qlik.
(Read my previous posts about Hypercubes here and here)
You build a data-driven component, it works perfectly with low-volume test data, and then you connect it to production. Suddenly, your list of 50,000+ customers cuts off halfway, or your export results look incomplete.
This happens because the Qlik Engine imposes a strict limit on data retrieval: a maximum of 10,000 cells per request. If you fetch 4 columns, you only get 2,500 rows (4 columns × 2,500 rows = 10,000 cells).
In this post, I’ll show you how to master high-volume data retrieval using two strategies, Bulk Ingest and On-Demand Paging, with the @qlik/api library.
The Qlik Associative Engine is built for speed and can handle billions of rows in memory. However, transferring that much data to a web browser in one go would be inefficient. To protect both the server and the client-side experience, Qlik forces you to retrieve data in chunks.
Understanding how to manage these chunks is the difference between an app that lags and one that delivers a good user experience.
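To keep that chunk arithmetic handy, here is a one-line helper (the name maxPageHeight is mine, not part of @qlik/api) that returns the tallest page you can request for a given column count:

```javascript
// The Engine caps each getHyperCubeData request at 10,000 cells
const MAX_CELLS = 10000;

// Tallest qHeight you can safely request for a hypercube with `columnCount` columns
function maxPageHeight(columnCount) {
  return Math.floor(MAX_CELLS / columnCount);
}
// maxPageHeight(4) -> 2500 rows; maxPageHeight(5) -> 2000 rows
```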
To see these strategies in action, we need a "heavy" dataset. Copy this script into your Qlik Sense Data Load Editor to generate 250,000 rows of transactions (or download the QVF attached to this post):
// ============================================================
// DATASET GENERATOR: 250,000 rows (~1,000,000 cells)
// ============================================================
Transactions:
Load
RecNo() as TransactionID,
'Customer ' & Ceil(Rand() * 20000) as Customer,
Pick(Ceil(Rand() * 5),
'Corporate',
'Consumer',
'Small Business',
'Home Office',
'Enterprise'
) as Segment,
Money(Rand() * 1000, '$#,##0.00') as Sales,
Date(Today() - Rand() * 365) as [Transaction Date]
AutoGenerate 250000;
There are two primary ways to handle this volume in a web app. The choice depends entirely on your specific use case.
In this pattern, you fetch the entire dataset into the application's local memory in iterative chunks upon loading.
The Goal: Provide a "zero-latency" experience once the data is loaded.
Best For: Use cases where users need to perform instant client-side searches, complex local sorting, or full-dataset CSV exports without waiting for the Engine.
In this pattern, you only fetch the specific slice of data the user is currently looking at.
The Goal: Provide a near-instant initial load time, regardless of whether the dataset has 10,000 or 10,000,000 rows as you only load a specific chunk of those rows at a time.
Best For: Massive datasets where the "cost" of loading everything into memory is too high, or when users only need to browse a few pages at a time.
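Before wiring either pattern into a framework, the chunking math they both share can be sketched as a small pure helper. Note that planChunks is an illustrative name of my own, not an @qlik/api call; only the page-definition shape (qTop, qLeft, qWidth, qHeight) comes from the Engine API:

```javascript
// Plan the sequence of data pages needed to walk a hypercube in chunks.
// totalRows comes from qHyperCube.qSize.qcy; pageSize is rows per request.
// Returns an array of Qlik-style page definitions for getHyperCubeData.
function planChunks(totalRows, pageSize, width) {
  const totalPages = Math.ceil(totalRows / pageSize);
  const pages = [];
  for (let i = 0; i < totalPages; i++) {
    pages.push({
      qTop: i * pageSize,
      qLeft: 0,
      qWidth: width,
      // The last chunk may be shorter than pageSize.
      qHeight: Math.min(pageSize, totalRows - i * pageSize)
    });
  }
  return pages;
}

// 250,000 rows, 5 columns, 2,000 rows per request -> 125 requests
console.log(planChunks(250000, 2000, 5).length); // 125
```

Bulk Ingest walks this entire plan up front; On-Demand Paging requests just one entry from it at a time.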
While I'm using React and custom React hooks in the example I'm providing, these core Qlik concepts translate to any JavaScript framework (Vue, Angular, or vanilla JS). The secret lies in how you interact with the HyperCube.
The Iterative Logic (Bulk Ingest):
The key is to use a loop that updates your local data buffer as chunks arrive.
To prevent the browser from freezing during this heavy network activity, we use setTimeout to allow the UI to paint the progress bar.
const qModel = await app.createSessionObject({ qInfo: { qType: 'bulk' }, ...properties });
const layout = await qModel.getLayout();
const totalRows = layout.qHyperCube.qSize.qcy;
const pageSize = properties.qHyperCubeDef.qInitialDataFetch[0].qHeight;
const width = properties.qHyperCubeDef.qInitialDataFetch[0].qWidth;
const totalPages = Math.ceil(totalRows / pageSize);
let accumulator = [];

for (let i = 0; i < totalPages; i++) {
  if (!mountedRef.current || stopRequestedRef.current) break;

  const pages = await qModel.getHyperCubeData('/qHyperCubeDef', [{
    qTop: i * pageSize,
    qLeft: 0,
    qWidth: width,
    qHeight: pageSize
  }]);

  accumulator = accumulator.concat(pages[0].qMatrix);

  // Update state incrementally
  setData([...accumulator]);
  setProgress(Math.round(((i + 1) / totalPages) * 100));

  // Yield thread to prevent UI locking
  await new Promise(r => setTimeout(r, 1));
}
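The yield step at the end of each iteration is what keeps the UI responsive, and it works independently of Qlik. Here it is in isolation; processInChunks and sleep are illustrative helpers of my own, not @qlik/api calls:

```javascript
// Awaiting a short setTimeout between chunks returns control to the event
// loop, letting the browser paint progress updates and handle input.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function processInChunks(items, chunkSize, onChunk) {
  for (let i = 0; i < items.length; i += chunkSize) {
    onChunk(items.slice(i, i + chunkSize));
    await sleep(1); // give the UI a chance to update between chunks
  }
}

const seen = [];
processInChunks([1, 2, 3, 4, 5], 2, (chunk) => seen.push(chunk.length))
  .then(() => console.log(seen)); // [2, 2, 1]
```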
The Slicing Logic (On-Demand)
In this mode, the application logic simply calculates the qTop coordinate based on the user's current page index and makes a single request for that specific window of data (rowsPerPage).
const width = properties.qHyperCubeDef.qInitialDataFetch[0].qWidth;
const qTop = (page - 1) * rowsPerPage;

const pages = await qModelRef.current.getHyperCubeData('/qHyperCubeDef', [{
  qTop,
  qLeft: 0,
  qWidth: width,
  qHeight: rowsPerPage
}]);

if (mountedRef.current) {
  setData(pages[0].qMatrix);
}
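The paging math in the snippet above boils down to two pure functions. pageRequest and totalUiPages are illustrative names of my own, not part of @qlik/api; only the page-definition shape is the Engine's:

```javascript
// Build the single Qlik data-page request for a 1-based UI page index.
function pageRequest(page, rowsPerPage, width) {
  return [{
    qTop: (page - 1) * rowsPerPage, // rows to skip before this page
    qLeft: 0,
    qWidth: width,
    qHeight: rowsPerPage
  }];
}

// Total number of UI pages, for driving the pager control.
function totalUiPages(totalRows, rowsPerPage) {
  return Math.ceil(totalRows / rowsPerPage);
}

console.log(pageRequest(3, 100, 5)[0].qTop); // 200
console.log(totalUiPages(250000, 100));      // 2500
```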
I placed these two methods in custom hooks (useQlikBulkIngest & useQlikOnDemand) so they can be easily re-used in different components as well as other apps.
Regardless of which pattern you choose, always follow these three Qlik Engine best practices:
Engine Hygiene (Cleanup): Always call app.destroySessionObject(qModel.id) when your component or view unmounts.
Cell Math: Always make sure your qWidth x qHeight does not exceed 10,000 cells per request. For instance, if you have a wide table (20 columns), your max height is only 500 rows per chunk.
UI Performance: Even if you use the "Bulk" method and have 250,000 rows in JavaScript memory, do not render them all to the DOM at once. Use UI-level pagination or virtual scrolling to keep the browser responsive.
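The cell-math and UI-pagination rules above can be sketched as two small helpers. maxChunkHeight and visibleSlice are illustrative names of my own, not library calls; the 10,000-cell cap is the Engine's per-request limit:

```javascript
// The Engine caps each getHyperCubeData request at 10,000 cells
// (qWidth * qHeight), so the max safe chunk height follows from column count.
function maxChunkHeight(columnCount, cellLimit = 10000) {
  return Math.floor(cellLimit / columnCount);
}

// Even with the full dataset in memory, render only one UI page at a time.
function visibleSlice(rows, uiPage, rowsPerPage) {
  const start = (uiPage - 1) * rowsPerPage;
  return rows.slice(start, start + rowsPerPage);
}

console.log(maxChunkHeight(20)); // 500 rows per chunk for a 20-column table
console.log(maxChunkHeight(5));  // 2000 rows for the 5-column demo dataset

const allRows = Array.from({ length: 250000 }, (_, i) => i);
console.log(visibleSlice(allRows, 2, 50).length); // 50 rows rendered at once
```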
Choosing between Bulk and On-Demand is a trade-off between Initial Load Time and Interactive Speed. By mastering iterative fetching with the @qlik/api library, you can ensure your web apps remain robust, no matter how much data is coming in from Qlik.
💾 Attached is the QVF, and here is the GitHub repository containing the full example in React so you can try it locally. Instructions are provided in the README file.
(P.S.: Make sure you create the OAuth client in your tenant and fill in the qlik-config.js file in the project with your tenant-specific configuration.)
Thank you for reading!
Dear Qlik Replicate customers,
Salesforce announced (October 31st, 2025) that it is postponing the deprecation of the Use Any API Client user permission. See Deprecating "Use Any API Client" User Permission for details.
Qlik will keep the OAuth plans on the roadmap and deliver them in line with Salesforce's updated timeline.
Salesforce has announced the deprecation of the Use Any API Client user permission. For details, see Deprecating "Use Any API Client" User Permission | help.salesforce.com.
We understand that this is a security-related change, and Qlik is actively addressing it by developing Qlik Replicate support for OAuth Authentication. This work is a top priority for our team at present.
If you are affected by this change and have activated access policies relying on this permission, we recommend reaching out to Salesforce to request an extension. We are aware that some customers have successfully obtained an additional month of access.
By the end of this extension period, we expect to have an alternative solution in place using OAuth.
Customers using the Qlik Replicate tool to read data from the Salesforce source should be aware of this change.
Thank you for your understanding and cooperation as we work to ensure a smooth transition.
If you have any questions, we're happy to assist. Reply to this blog post or take your queries to our Support Chat.
Thank you for choosing Qlik,
Qlik Support
Key Highlights
The app-settings area has been refurbished with a more modern layout and tabbed navigation between categories of settings.
Why this matters:
Several usability upgrades have landed in the sheet-edit experience:
The new straight table object has graduated and is now included under the chart section as the default table visual. The older table object remains accessible in the asset panel for now and its eventual deprecation will be announced well in advance.
Why this matters:
Some powerful visual upgrades arrived in this release:
With this release, we are updating our announcement from our Qlik Sense May 2025 release regarding the roadmap for removing deprecated visualization objects. The following deprecated charts are now scheduled to be removed from the Qlik Analytics distribution in May 2027.
In Closing
The November 2025 release for Qlik Sense Enterprise on Windows delivers meaningful improvements across usability, visualizations, and stability. Whether you're an analytics author, a business user, or an administrator/architect, there are advantages to be gained with this release.
As always, we recommend reviewing the full “What’s New” documentation and aligning your upgrade and adoption strategy accordingly.
2026 Trends: Using AI to Increase Return on Investment
Many companies are investing in AI, yet only a few are seeing a higher return on that investment. What needs to improve?
For decades, companies have swung back and forth like a pendulum: loosening the reins when moving forward, tightening discipline when pulling back, over and over. The data strategy model that wins in 2026 is not an either/or choice. What matters is putting governance and innovation to work together to create new value.
In the webinar "Creating the Future of AI: New Value from Data, Agents, and Humans Working as a Team," to be held on Tuesday, January 27, Dan Sommer, Market Intelligence Lead at Qlik, and Charlie Farah, Chief Technology Officer for Analytics and AI at Qlik APAC, will walk through the key trends for 2026.
The webinar introduces three key points for business success. Apply them and you can ensure data integrity, connect all of your systems seamlessly, and drive innovation across the business. It also explores the trends underpinning this new model and explains how to level up your company's data strategy.
How do you break down silos and build a unified foundation without being whipsawed by swings in policy? Join the webinar to see why adopting this new model is essential to getting the most out of AI.
V1 milestone reinforces Dynamic Engine as the most versatile, cloud-agnostic execution runtime for enterprise data integration and API workloads.
Different industries have distinct cloud preferences, driven by existing vendor relationships, regulatory requirements, and regional data residency mandates:
With Dynamic Engine, you can now deploy a unified data integration runtime across all these environments, using the same orchestration, monitoring, and management tools via Talend Management Console (TMC).
Dynamic Engine on Google Kubernetes Engine (GKE) brings the same enterprise-grade capabilities customers already enjoy on AWS and Azure, now optimized for Google Cloud infrastructure:
| Kubernetes versions | Compatible Dynamic Engine versions |
| --- | --- |
| 1.30 | 0.23, 0.24, 1.0 |
| 1.31 | 0.23, 0.24, 1.0 |
| 1.32 | 0.23, 0.24, 1.0 |
| 1.33 | 0.24, 1.0 |
| 1.34 | 0.24, 1.0 |
For detailed setup instructions, see our official guide: Configuring Google Kubernetes Engine.
One of Dynamic Engine's standout features is its native Helm support, which dramatically simplifies deployment while offering deep customization for enterprise DevOps teams.
Helm transforms Dynamic Engine deployment into a repeatable, version-controlled, GitOps-ready process:
Helm charts for Dynamic Engine are publicly available and include:
For complete guidance, see: Recommended Helm Deployment.
For organizations operating in highly secure, air-gapped, or disconnected environments (common in defense, government, financial services, and healthcare), Dynamic Engine offers full support for:
This makes Dynamic Engine one of the few enterprise data integration platforms that can operate in zero-trust, fully isolated network environments.
Dynamic Engine goes far beyond basic deployment, offering a rich set of enterprise customization capabilities:
For advanced configurations, explore: Additional Customization with DevSecOps.
One of Dynamic Engine's most compelling features is its built-in, no-downtime upgrade mechanism.
Learn more: Upgrading Dynamic Engine Version.
With Google Kubernetes Engine support, Helm-based deployment, air-gap readiness, and zero-downtime upgrades, Dynamic Engine delivers:
Dynamic Engine isn't just a runtime - it's a future-proof platform for data integration at enterprise scale, across any cloud, any Kubernetes distribution, and any regulatory environment.
Ready to deploy Dynamic Engine on GKE, or take your existing AWS/Azure deployments to the next level with Helm customization? Reach out to your Qlik Talend account team or explore our documentation to get started today.