Update 4/20/2026: Demo video added
This article will walk you through how to create a declarative agent for Copilot that uses an MCP plugin to connect Microsoft 365 Copilot with Qlik's MCP Server. Please note that there are several prerequisites you must meet to successfully complete the steps in this guide.
Note: At the time of writing, using plugins to connect to an MCP server is in public preview.
Once you complete these steps, Agents Toolkit generates the required files for the agent and opens a new Visual Studio Code window with the agent project loaded.
http://127.0.0.1:33418, http://127.0.0.1:33418/, and https://vscode.dev/redirect are redirect URLs used by VS Code for development and testing. https://teams.microsoft.com/api/platform/v1.0/oAuthRedirect is the redirect URL once the plugin is provisioned and deployed.
Once you are ready to publish (share your agent with others), follow these steps (please see the reference link below for more detailed information from Microsoft).
Please reference Microsoft’s official guidance for more information on Publishing agents for Microsoft 365 Copilot.
A calendar is very useful when you want to link your data to different time periods, e.g. when you want to display your KPIs over different years or months. Often you have only one date field and just want to use a standard calendar. For this case, there are plenty of resources in this community. You can find a good overview in How to use - Master-Calendar and Date-Values.
If you have several date fields, you will most likely need several calendars defined in your script. See
Why You sometimes should Load a Master Table several times
If you want to use a non-standard calendar, like a fiscal calendar or a 4-4-5 calendar, the challenge becomes more difficult. See e.g. Fiscal Year or Recipe for a 4-4-5 Calendar.
The script posted here will help you create a more complicated calendar. It has parametrized examples for the following calendars:
There are several parameters that you can use to configure your calendar: the first month of the year, the first day of the week, and the first week of the week-based year.
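As a rough illustration of what such parameters can look like (these variable names are made up for this example, not taken from the attached script):

// Illustrative parameters only - the attached script uses its own names.
SET vCal_FirstMonthOfYear = 4;   // first month of the year, e.g. 4 = April for a fiscal year
SET vCal_FirstWeekDay     = 0;   // first day of the week, 0 = Monday ... 6 = Sunday
SET vCal_ReferenceDay     = 4;   // day of January that defines week 1 of the week-based year, e.g. 4 = ISO 8601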
The script is commented, so hopefully you can read it and understand it. You can paste it straight into an empty app and run it to evaluate how it works. If you want to use parts of it inside one of your own apps, you may need to rename some fields.
The script should work in both Qlik Sense and QlikView.
Good Luck!
HIC
This article assumes you understand:
* Modifying the Qlik NPrinting Repository will void the Support Agreement! Changes to the database should be performed by the Qlik NPrinting services only or through the use of supported API calls.
Overview
If, for some reason, you lost the Administrator role for your Qlik NPrinting Admin account and you are unable to change settings in the Web Console, you can follow this article to restore administrative access to the target account.
Symptoms
You are unable to edit/change any Administrative settings.
Resolution
You can force insert/update the user in the database with the administrator role id.
The first step is to connect to the database using pgAdmin.
The second step is to identify the administrator role id using pgAdmin.
In my test environment, the administrator role equals "6b800774-db4f-4b5e-a975-24e5b86b5ece".
The third step is to identify the user role ID using pgAdmin.
In my test environment, the user role ID equals "1628ad97-75af-4cb3-897f-da1cb07777e5".
The fourth step is to update the "role_recipient" table.
Use the following SQL statement, replacing the placeholders with the IDs you previously retrieved.
INSERT INTO public.role_recipient VALUES ('user role id', 'administrator role id');
In my scenario, it will be:
INSERT INTO public.role_recipient VALUES ('1628ad97-75af-4cb3-897f-da1cb07777e5', '6b800774-db4f-4b5e-a975-24e5b86b5ece');
Click "Execute script" or press F5.
Go back to Web Console and refresh the page. The target user should have Admin privileges.
Environment
The word Scrum comes from rugby. The tight formation where players pack together, heads down, working as one unit to move the ball forward. The image fits. In a Scrum team we also move as one unit, relying on communication, transparency, and trust in each other's capabilities to reach a shared goal.
Qlik delivery is a good home for that mindset. Migrations, new applications, reload optimization, data modeling. The work is complex by nature. Requirements shift the moment users see their data, source systems behave unpredictably, and the right solution is rarely the one sketched on day one. Waterfall's tidy sequence of discovery, design, development, and testing looks good on paper, but it breaks when the data does not match the design document, or when a client realizes mid-project that the dashboard they asked for is not the one they actually need.
Scrum holds up because it rests on three pillars: transparency, inspection, and adaptation. Transparency means the work is visible. Jira (or your preferred Project Management Tool) is the single source of truth, and if a deliverable is not there, it does not exist. Inspection happens in the events: daily scrums, sprint reviews, retrospectives. Adaptation is what makes inspection worth anything. When a gateway server goes down or a client reshapes their reporting needs, the backlog shifts, and the sprint keeps moving.
The team itself is small by design. Ten people or fewer, three roles, no hierarchies. The Product Owner maximizes the value of the product and owns the backlog, deciding which reports and data sources matter most. The Developers (Qlik developers, data engineers, and UAT support) create a usable increment each sprint and hold themselves accountable, as a team, for what ships. The Scrum Master establishes the framework and removes the blockers: access requests, gateway issues, client dependencies that stall the work.
Everything is time-boxed. Sprint planning commits the team to specific apps, QVDs, or migration tasks. The daily scrum is fifteen minutes; progress, blockers, plan for the day; a missing credential surfaces here, not a week later. The sprint review is where working increments get demoed to stakeholders, and the feedback reshapes what comes next. The retrospective, for the team only, is where the process itself improves. And an item is only done when it meets the Definition of Done; for Qlik, that means the script runs cleanly on schedule, UAT passes, documentation is updated, and the reload is in the operations reference. Anything less goes back to the backlog.
The framework only works if the team lives the five values: commitment, focus, openness, respect, and courage. The courage to say an item is not done, to ask the question that feels obvious, to push back when the plan no longer fits the reality.
There is a story in Scrum: The Art of Doing Twice the Work in Half the Time about Toyota's assembly line: any worker can stop the whole line when a problem appears, and the team swarms to fix it on the spot. One defect, solved once, forever. The alternative is hundreds of cars shipped with the same flaw. The principle applies to our work too. A defect caught inside the sprint costs far less than one caught in production, and doing it right the first time is almost always faster than fixing it later.
Partial Scrum is not Scrum. Skipping the retrospective, letting the Product Owner drift from the backlog, stretching the daily past fifteen minutes. These turn the framework into ceremony. Either keep all the parts, or call it something else. As Schwaber and Sutherland write in the 2020 Scrum Guide: "Scrum exists only in its entirety and functions well as a container for other techniques, methodologies, and practices."
For Qlik delivery, that entirety is what gives us a way to plan without overcommitting to the plan. We commit to the sprint, we inspect the results, we adapt. The product gets closer to what the client needs with every cycle and not further away.
Further reading: The Scrum Guide (Schwaber & Sutherland) · Scrum: The Art of Doing Twice the Work in Half the Time (Sutherland).
Slow dashboards frustrate users and erode trust in your data. Whether you are building on Qlik Sense or Qlik Cloud, performance issues rarely come from the platform itself; they almost always come from design decisions made during development. The good news is that most of them are preventable.
This article walks through five foundational best practices that will help you keep your Qlik applications fast, scalable, and responsive.
1. Limit the number of visualizations per sheet
Every chart, table, and KPI tile on a sheet consumes engine resources the moment the sheet is opened. Qlik calculates all visible objects simultaneously, which means a sheet with twenty charts will always be slower than one with five, even if the underlying data is the same.
A practical rule of thumb is to keep each sheet focused on a single analytical question. If you find yourself adding more and more charts to answer related questions, that is usually a sign that the content belongs on a separate sheet or in a drill-down structure. Master Visualizations and Container objects can also help you organize content without multiplying the engine workload.
Beyond raw performance, fewer visualizations per sheet also improve the user experience. A focused layout guides the reader's attention and reduces cognitive overload.
2. Avoid heavy calculations in the UI ‐ push logic to the script
Qlik offers two places where you can perform calculations: the load script (executed once at reload time) and the expression layer (executed every time a user makes a selection). Expressions that are complex, nested, or operate over large datasets can noticeably slow down chart rendering, especially in multi-user environments.
The general principle is simple: anything that does not need to react to user selections should be calculated in the script. Pre-aggregated fields, derived flags, date parts, and category labels are all good candidates for script-level computation. By the time the data reaches the engine, those values are already stored as simple fields, with no runtime cost.
Reserve the expression layer for calculations that genuinely depend on the current selection context, such as dynamic comparisons, percentage-of-total measures, or period-over-period variance. If an expression starts to look like a nested IF inside an Aggr() inside a Sum(), that is a strong signal to move some of that logic upstream.
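As a minimal sketch of this principle (the table and field names here are hypothetical), a category label that would otherwise be a nested IF in every chart can be computed once at reload time:

// Hypothetical example: compute the label once in the script
// instead of evaluating If(...) on every selection in every chart.
Sales:
LOAD
    OrderID,
    Amount,
    If(Amount >= 1000, 'Large', 'Standard') as OrderSizeCategory  // derived flag, stored as a plain field
FROM [lib://Data/Sales.qvd] (qvd);

Charts can then use OrderSizeCategory as an ordinary dimension with no runtime computation.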
3. Use Set Analysis instead of complex IF conditions
A common pattern among developers new to Qlik is writing IF conditions inside aggregation expressions to filter data - for example:

Sum(If(Region = 'North', Sales))

While this works, it forces the engine to evaluate the condition row by row across the entire dataset before aggregating. On large tables, this becomes expensive very quickly.
Set Analysis is the correct tool for this job. The equivalent expression

Sum({<Region={'North'}>} Sales)

tells the engine exactly which records to include before it begins aggregating, making it significantly faster. Set Analysis also integrates naturally with Qlik's associative selection model, which IF-based filters do not always respect correctly.
Beyond performance, Set Analysis expressions tend to be easier to read and maintain once you are familiar with the syntax. They also unlock more advanced scenarios, such as comparing the current selection against a fixed reference period, without resorting to complex conditional logic.
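For instance, a hedged sketch of a fixed-reference-period comparison (assuming the data model contains a Year field):

// Current selection vs. a fixed reference year (the Year field is an assumption):
Sum(Sales) - Sum({<Year = {2025}>} Sales)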
4. Avoid high-cardinality fields in charts
Cardinality refers to the number of distinct values a field contains. A field like CustomerID or TransactionTimestamp can have millions of unique values; these are high-cardinality fields. Using them as dimensions in charts causes the engine to generate one data point per distinct value, which is both slow to calculate and impossible to interpret visually.
The solution is to aggregate or bucket these fields before they reach the chart layer. Timestamps should be broken into meaningful time parts ‐ year, quarter, month, week, or day - depending on the granularity your analysis requires. This is best done in the load script using functions like Year(), Month(), or Floor() to create derived date fields. For numeric identifiers, consider whether the analysis actually needs individual-level detail or whether a grouped summary is more appropriate.
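A minimal sketch of this bucketing in the load script (table and field names are illustrative):

// Hypothetical example: derive reusable, low-cardinality date parts at load time.
Transactions:
LOAD
    TransactionID,
    Amount,
    Date(Floor(TransactionTimestamp)) as TransactionDate,   // strip the time portion
    Year(TransactionTimestamp)        as TransactionYear,
    Month(TransactionTimestamp)       as TransactionMonth
FROM [lib://Data/Transactions.qvd] (qvd);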
As a general guideline, chart dimensions should have a manageable number of distinct values ‐ typically no more than a few hundred at most. If a dimension naturally has more, consider adding a filter pane so users can narrow the scope before the chart renders.
5. Use Aggr() wisely ‐ only when truly needed
Aggr() is one of Qlik's most powerful functions. It creates a virtual table of aggregated values grouped by one or more dimensions, which then serves as the input for an outer aggregation. This enables calculations that would otherwise be impossible in the expression layer, such as "average order value per customer."
However, Aggr() is also one of the most resource-intensive functions available. Every time it is evaluated, the engine constructs an in-memory virtual table, which adds significant overhead ‐ especially when the inner dimension has high cardinality or when Aggr() is nested or used across multiple charts on the same sheet.
Before reaching for Aggr(), ask whether the same result can be achieved in the load script. If you need the average order value per customer, pre-calculating it as a script-level field is almost always faster than computing it at runtime with Aggr(). Reserve Aggr() for scenarios where the result genuinely needs to respond to user selections in a way that cannot be pre-computed, for example, ranked comparisons or dynamic top-N calculations.
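As a sketch of the trade-off (names are hypothetical), the runtime version and its script-level counterpart might look like this:

// Runtime version: Aggr() builds a virtual table on every evaluation.
Avg(Aggr(Avg(OrderAmount), CustomerID))

// Script-level counterpart: computed once at reload.
CustomerStats:
LOAD
    CustomerID,
    Avg(OrderAmount) as AvgOrderValuePerCustomer
RESIDENT Orders
GROUP BY CustomerID;

A chart can then simply average the pre-computed AvgOrderValuePerCustomer field at a fraction of the cost.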
When Aggr() is the right tool, keep the inner dimension as low-cardinality as possible and avoid nesting multiple Aggr() calls inside each other.
Summary
Performance optimization in Qlik is largely about knowing where computation belongs. The script is your best ally; it runs once and stores results efficiently. The expression layer is powerful but expensive, so use it selectively. Keeping sheets focused, choosing Set Analysis over row-level IF conditions, controlling dimension cardinality, and using Aggr() only when necessary will collectively make a measurable difference in how your applications feel to end users.
These practices are not just about speed; they are also about building applications that scale gracefully as data volumes grow and user counts increase.
Know more:
Aggr - chart function | Qlik Cloud Help
Sum Script function | Qlik Cloud Help
Set analysis | Qlik Cloud Help
if Script and chart function | Qlik Cloud Help
Floor Script and chart function | Qlik Cloud Help
The following release notes cover the versions of Qlik GeoAnalytics Data Package released in April 2026. For questions or comments, post in the Product Forums or contact Qlik Support.
A complete change log can be found in the readme file included in the release zip file.
See the system requirements for GeoAnalytics Server.
The Qlik GeoAnalytics Data Package 26.04 zip file can be found on the Product Downloads page under the category "Qlik Data Analytics".
The installation guide is available as an attachment for this document.
About Qlik
Qlik converts complex data landscapes into actionable insights, driving strategic business outcomes. Serving over 40,000 global customers, our portfolio provides advanced, enterprise-grade AI/ML, data integration, and analytics. Our AI/ML tools, both practical and scalable, lead to better decisions, faster. We excel in data integration and governance, offering comprehensive solutions that work with diverse data sources. Intuitive analytics from Qlik uncover hidden patterns, empowering teams to address complex challenges and seize new opportunities. As strategic partners, our platform-agnostic technology and expertise make our customers more competitive.
If you are a Snowflake customer you have probably seen the left side of this image frequently. Snowflake Intelligence is legit cool and you've dreamed of ways for it to impact your business.
If you are a Qlik and Snowflake customer, you have probably seen the left side of this image frequently and thought, "Wow, I sure wish I could take advantage of Snowflake Intelligence within my Qlik environment to impact my business." Feel free to do your celebration dance, because this post is designed to walk you through how Qlik can work with Snowflake Cortex AISQL as well as Snowflake Cortex Agents (API).
Both series are designed in a 3-part Show Me format. The first video in each frames the value you can attain. The second video will help you drool as you begin imagining your business implementing the solution. Finally, I conclude each series for those who get tapped on the shoulder to actually make the solutions work.
In this comprehensive three-part series exploring the integration of Qlik and Snowflake Cortex AI-SQL, I guide viewers from executive vision to hands-on implementation, demonstrating how organizations can democratize AI capabilities across their entire analytics ecosystem without requiring data science expertise.
This series demonstrates how to combine Qlik's associative analytics engine with Snowflake's AI-powered semantic intelligence to transform natural language questions into interactive, fully contextualized insights.
Heck yeah we've got both covered.
Healthcare Synthetic Data Set -> Semantic View -> Build Qlik Sense Application through Claude and Qlik MCP - This demo begins by pulling the information out of a Semantic View for the shared Healthcare Synthetic data set. Huge tables. It constructs the code to load the tables into Qlik Sense, including concatenating the Patient and Encounters fact tables and creating concatenated keys for the dimensional tables. What about all of that wonderful metadata about the fields? Yeah, we pull that in as well, because governance is important. Then we build Master Dimensions for all of the fields with that metadata as well, including the sample values. Now data modelers/designers can see the data, and end users can see it all, so they know they can trust the answers and act on them. Chef Qlik Dork and Chef Claude were really cooking in the kitchen for this one.
PS - This was the beginning of the application. See Video 5 - Show Me How It Works above to see the final application and how it interacts with the Snowflake Cortex Agent API for the full end user experience of awesomeness. I'm talking about the results of questions being displayed as charts and tables, with users able to see the SQL that was generated. The data returned is ingested into the Qlik Data Model, so users can then filter to the records returned and see all of the details to answer their next 10 questions. What if they ask about big data tables that aren't loaded into Qlik Sense? No problem, we go pull that data live.
As you know by now, MCP servers are essentially invisible. They perform superhuman, highly performant tasks, but they lack a visual host. While the Synthetic Healthcare video demonstration above used Claude as the user interface, Snowflake has now officially released Coco. I mean Cortex Code. So I figured I had better ensure our joint partners could do their happy dance and take advantage of both of these leading-edge power tools.
Previously I created a post called Creating your Secret Sauce, in which I described the process of creating and using #skills. Guess what? The same skill files that I created and shared for Claude can be imported and used directly by CoCo. You gotta be loving that.
The videos in these courses subtly demonstrate the power of using skills to enhance the prompts. My skill for Master Items ensures that their naming convention is user friendly. When I ask to create a sheet ... the skill transforms it into "let's create a story that is prepared with love" instead of microwaving random mystery charts onto a sheet just for speed.
🎥 Course 405 - Cortex Code generating Master Dimensions and Master Measures in Qlik Sense via Qlik MCP
🎥 Course 410 - Cortex Code generating a sheet inside of Qlik Sense via Qlik MCP
The Show Me How to Build It videos for both series will refer to other resources. I thought about making you crawl across your desktop and squint in order to see the URLs, and then making you hand-type 50 characters from memory. Then I thought it's not going to be much fun for either of us, since I wouldn't actually see you doing it. Fortunately for you, I've included the needed resources below.
Calling Snowflake Cortex Agent API within Qlik Sense
Creating a REST API Connection for Snowflake Cortex Agent
Operatives, this is Dork, 007 Dork reporting from Q Division headquarters. Unlike my data, I prefer my Mountain Dew shaken, not stirred.
THE SITUATION: The dreaded Dashboard Disruption Monkey Gang is on the loose again, and we need to find which agent has experience dealing with them. Fast.
YOUR MISSION: Create your first Qlik Answers knowledge base to track Q Division agent dossiers, then build an assistant that can query this intelligence on demand.
DELIVERABLE: A fully functional knowledge base containing agent information with an assistant capable of answering questions about your operatives.
What You'll Need:
Download Mission Pack: 📥 You will find the QlikAnswers_SwarmAgents_Dossiers.zip attached
Video Intelligence Briefing: 🎥 Watch the Full Mission Walkthrough
First, let's set up your training environment. Navigate to your Qlik hub and create a space called "Q Division Field Academy" (or use an existing space - this is your headquarters for all future training missions).
Once in your space, navigate to the Answers section in the hub where you'll see options for assistants, knowledge bases, data connections, and file uploads.
Click to create a new knowledge base and name it: "Agent Information"
Pro tip: Normally you'd add a detailed description here. For field training, we're moving fast, but in production you'd want to document what this knowledge base contains and its intended use.
You'll see three options for data sources:
Unzip the mission pack you downloaded and you'll find 7 agent dossier PDFs.
Simply drag and drop all 7 files into the upload area. You should see all seven appear in your upload queue.
Click "Upload" and watch as Q Division's finest get cataloged into your system.
⚠️ FIELD NOTE: Here's where rookies often get tripped up!
After upload, check the Index Status. It will say "Never been indexed" - this means the RAG (Retrieval-Augmented Generation) system hasn't parsed your documents yet. You cannot query unindexed data.
Click "Index All" and switch to the flat view to watch the progress. With only a handful of pages per dossier, this should complete in seconds.
Refresh your screen. When you see "Index Status: Completed" with a timestamp, your intelligence is ready for deployment.
Now let's build an assistant that can query this knowledge base.
Click to create a new assistant and name it: "Field Training Assistant"
Add your Agent Information Knowledge Base.
We'll cover conversation starters in a future module - these are pre-written prompts that help users know what questions to ask.
Time to validate your setup. Ask your assistant:
"Which agent has interacted with the dreaded Dashboard Disruption Monkeys?"
Watch the reasoning panel (this is where the magic happens):
Click on the citation link and it will jump directly to the source document, highlighting exactly where that information was found in the dossier.
What You've Accomplished:
Validation Check: Can you ask your assistant "Which agent dealt with the Dashboard Disruption Monkeys?" and get back "Assembler Agent" with a citation? If yes, mission accomplished! 🎯
Challenge Exercise (Optional): Try asking other questions about your agents. What skills do they have? What operations have they completed? Test the limits of what your knowledge base knows!
What's happening with RAG and indexing?
When you upload PDFs to a knowledge base, Qlik Answers uses RAG (Retrieval-Augmented Generation) to parse your documents, break them into searchable chunks, and build an index the assistant can retrieve from.
Until indexing completes, the content is just raw files - the AI can't "see" it yet. Think of indexing as translating your documents into a language the AI agents understand.
Understanding the Agent Reasoning Flow
Qlik Answers uses multiple specialized agents:
Answer Agent: The orchestrator. Receives your question, determines what data sources are needed, coordinates other agents, and formats the final response.
Knowledge Base Agent: Specialized in searching unstructured documents. Uses semantic search to find relevant passages and return citations.
This multi-agent approach allows each specialist to do what it does best, similar to how Q Division has different operatives with different skills!
Every answer includes citations showing exactly where the information came from. This is critical for trust, validation, and auditability.
In enterprise analytics, "because the AI said so" doesn't cut it. Citations provide the audit trail.
You've successfully completed your first Q Division field training mission. You're now equipped to turn unstructured documents into queryable intelligence using Qlik Answers.
Remember: In analytics, as in espionage, the right question is more valuable than a thousand answers.
Dork, 007 Dork, signing off. Keep your data shaken and your queries stirred.
Questions? Feedback? Spotted a Dashboard Disruption Monkey? 👎 Use the feedback button or reach out to your Q Division training coordinator
If users on your Qlik Cloud tenant are experiencing frequent WebSocket disconnections (Error 1006) and your organization has an IP Allow List configured, IPv6 may be the root cause. Qlik Cloud's IP Allow List feature currently only supports IPv4 addresses. When users connect via IPv6, their traffic does not match any entry in the allow list, causing the connection to be dropped.
Symptoms
Users experience random disconnections while working in Qlik Cloud apps. The error code reported is Error 1006 (WebSocket disconnection). The issue affects multiple users simultaneously and is not tied to a specific app or sheet. The problem is more prevalent after network changes or in environments where IPv6 is enabled by default.
Resolution
Qlik has confirmed they are adding native IPv6 support to the IP Allow List feature. The estimated delivery is between Q4 2026 and early 2027. You can follow the progress and vote on the ideation item here:
Workaround
Work with your IT or network team to disable or restrict IPv6 on the network infrastructure used to access Qlik Cloud. This forces client connections to use IPv4, which is properly handled by the IP Allow List. Once IPv6 traffic is restricted, users should no longer experience WebSocket disconnections caused by this issue.
Cause
Qlik Cloud's IP Allow List feature only evaluates IPv4 addresses. When a user's device connects over IPv6, the source address does not match any entry in the allow list, and the connection is rejected or dropped at the WebSocket layer, resulting in Error 1006. This is a known platform limitation. Qlik has confirmed they are adding IPv6 support to the IP Allow List feature, with an estimated delivery between Q4 2026 and early 2027.
Related Content
Qlik Help: IP Allow List configuration: https://help.qlik.com/en-US/cloud-services/Subsystems/Hub/Content/Sense_Hub/Admin/mc-configure-ip-allowlist.htm
When using Qlik Cloud with Qlik's built-in Identity Provider (Qlik IDP), deactivating or deleting a user's corporate email address does not automatically remove or disable their access to Qlik Cloud. This is a common misconception that can create a security gap during employee offboarding.
Symptoms
A user's corporate email has been deactivated or their account deleted from the corporate directory. However, the user still appears as active in Qlik Cloud Management Console and their license seat remains occupied. The user may still be able to log in to Qlik Cloud if they have an active session or a previously set password.
Resolution
Administrators must manually remove or disable the user directly in Qlik Cloud. To do this, go to Management Console, navigate to Users, find the user and either remove them or change their role to No Access. This should be added as a required step in your organization's employee offboarding checklist.
If your organization requires automatic user deprovisioning, the permanent solution is to replace Qlik IDP with an external Identity Provider such as Microsoft Entra ID (Azure AD) or Okta, configured with SCIM provisioning. With SCIM enabled, when a user is disabled or removed in your corporate directory, Qlik Cloud is automatically notified and the user's access is revoked without any manual intervention.
Cause
Qlik IDP is a standalone identity store built into Qlik Cloud. It manages user accounts independently from any external corporate directory. Because there is no synchronization between Qlik IDP and external systems, changes made outside of Qlik Cloud, such as deactivating an email or removing an Active Directory account, have no effect on the user's Qlik Cloud account.
Environment
If your Qlik Cloud tenant hostname was recently renamed (for example, from company-old.us.qlikcloud.com to company-new.us.qlikcloud.com) and users are now receiving OAuth errors when connecting the Qlik Excel Add-in, this article explains what is happening and how to resolve it.
Symptoms
Users open Excel, launch the Qlik Add-in, and receive one of the following errors during authentication:
OAUTH-1: redirect_uri is not registered (Status 400)
OAUTH-14: OAuth client is not authorized
These errors appear even though the add-in was working before the tenant rename.
Resolution
Important: If users access the tenant through both the original URL and an alias, add both to the allowed origins field. Missing either one will cause authentication to fail.
Cause
The Excel Add-in manifest.xml and its OAuth client are created as a pair and are tied to a specific tenant hostname. When the tenant is renamed, the existing OAuth client retains the old hostname in its redirect URIs and allowed origins, triggering OAUTH-1. If the manifest is recreated but the OAuth client configuration still has incorrect settings, OAUTH-14 follows. The manifest file cannot be edited to point to a new hostname. It must be regenerated from a newly created OAuth client on the correct tenant.
Related Content
If you were looking for a super deep technical explanation of each of the Qlik Answers Agentic Agents ... you've come to the wrong place.
As part of my Q Division training series, I wanted to humanize each of these #AI titans for you just a little bit. After all, just seeing their names on the screen while they are working hard on your behalf is really impersonal.
This is from an official presentation about the Agentic Agents that are part of Qlik Answers.
But as the Dork, 007 Dork, I assumed you would want a little more understanding. So, each of their, totally fictitious, and hilarious, dossiers is attached.
Simply download the zip file and spend as many hours laughing as you read each and every page about each of the agents. No self-destructing. The files will remain as long as you want. Page after page of fun, mixed in with occasional insight.
But ensure their protection! They are highly classified and for your eyes as a Q Division operative in training only.
When working with large datasets, loading everything in a single query is rarely an option. When the source can't return all the data at once, you need to break the load into steps: by date, by region, by file. That's exactly what loops are for in Qlik Sense script. They reduce code volume, make the load process manageable, and allow you to build incremental ETL pipelines that refresh only the required data slice without overloading the source.
Qlik Sense has four types of loops, and each has its own use case.
FOR..NEXT: when the number of iterations is known in advance. The period is defined through a variable storing the difference between dates. Since the counter starts at 0, we use -1 to avoid an extra iteration:
LET vDaysCount = Date#(vEndDate,'YYYY-MM-DD') - Date#(vStartDate,'YYYY-MM-DD');

FOR i = 0 TO $(vDaysCount) - 1
  LET vDate = Date(Date#(vStartDate,'YYYY-MM-DD') + i, 'YYYY-MM-DD');
  // query body
NEXT i
FOR EACH..NEXT: iterates over a list of values: regions, file names, product codes. The list can be defined explicitly or generated using functions like FileList(), DirList(), or FieldValueList():
FOR EACH vRegion IN 'UZB', 'KAZ', 'RUS'
  LOAD * FROM [lib:///_$(vRegion).qvd] (qvd);
NEXT vRegion
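The list can also come straight from disk. A minimal sketch using FileList() (the folder path is illustrative):

// Iterate over whatever QVD files exist in a folder (path is an example):
FOR EACH vFile IN FileList('lib://Data/*.qvd')
  LOAD * FROM [$(vFile)] (qvd);
NEXT vFile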
WHILE: used inside a LOAD statement together with AutoGenerate and the IterNo() function. The classic example is generating a Master Calendar:
TempCalendar:
LOAD Date($(vMinDate) + IterNo() - 1, 'YYYY-MM-DD') as tDate
AutoGenerate 1
While $(vMinDate) + IterNo() - 1 <= $(vMaxDate);
The condition is evaluated on each row via IterNo(): this is row generation inside a single LOAD statement.
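From there, a typical follow-up (a hedged sketch; it assumes vMinDate and vMaxDate hold numeric date values, as the snippet above already requires) is to derive the calendar fields from tDate and drop the temporary table:

// Derive calendar fields from the generated dates, then clean up.
MasterCalendar:
LOAD
  tDate        as Date,
  Year(tDate)  as Year,
  Month(tDate) as Month,
  Week(tDate)  as Week
RESIDENT TempCalendar;

DROP TABLE TempCalendar;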
DO WHILE: a loop with an updatable variable. The condition is re-evaluated before each iteration. Used for incremental loading from an external source with a date-by-date breakdown:
LET vEndDate = Date(Today(), 'YYYY-MM-DD');
LET vStartDate = Date(Today() - 7, 'YYYY-MM-DD');
LET vIterationDate = vStartDate;

DO WHILE vIterationDate <= vEndDate

  TempOneDay:
  // query body: a load or SQL SELECT filtered to a single day, e.g.
  //   WHERE date_field = toDate('$(vIterationDate)')

  STORE TempOneDay INTO [lib://…/data_$(vQVDName).qvd](qvd);
  DROP TABLE TempOneDay;

  LET vIterationDate = Date(Date#(vIterationDate,'YYYY-MM-DD') + 1, 'YYYY-MM-DD');
LOOP