This article walks you through how to create a declarative agent for Copilot that uses an MCP plugin to connect Microsoft 365 Copilot with Qlik's MCP Server. Please note that there are several prerequisites you must meet to successfully execute the steps in this guide.
Note: at the time of writing, using plugins to connect to an MCP Server is in public preview.
Once you complete these steps, Agents Toolkit generates the required files for the agent and opens a new Visual Studio Code window with the agent project loaded.
http://127.0.0.1:33418, http://127.0.0.1:33418/, and https://vscode.dev/redirect are redirect URLs for VS Code used for development and testing. https://teams.microsoft.com/api/platform/v1.0/oAuthRedirect will be the redirect URL when the plugin is provisioned and deployed.
Please follow Microsoft’s guidance on Publishing agents for Microsoft 365 Copilot.
If you are a Snowflake customer you have probably seen the left side of this image frequently. Snowflake Intelligence is legit cool and you've dreamed of ways for it to impact your business.
If you are a Qlik and Snowflake customer you have probably seen the left side of this image frequently, and thought "Wow, I sure wish I could take advantage of Snowflake Intelligence within my Qlik environment to impact my business." Feel free to do your celebration dance, because this post is designed to walk you through how Qlik can work with Snowflake Cortex AISQL as well as Snowflake Cortex Agents (API).
Both series follow a three-part Show Me format. The first video for each frames the value you can attain. The second video helps you drool as you begin imagining your business implementing the solution. Finally, I conclude each series with the hands-on build, for those who get tapped on the shoulder to actually make the solutions work.
In this comprehensive three-part series exploring the integration of Qlik and Snowflake Cortex AI-SQL, I guide viewers from executive vision to hands-on implementation, demonstrating how organizations can democratize AI capabilities across their entire analytics ecosystem without requiring data science expertise.
This series demonstrates how to combine Qlik's associative analytics engine with Snowflake's AI-powered semantic intelligence to transform natural language questions into interactive, fully contextualized insights.
Heck yeah we've got both covered.
Healthcare Synthetic Data Set -> Semantic View -> Build Qlik Sense Application through Claude and Qlik MCP - This demo begins by pulling the information out of a Semantic View for the shared Healthcare Synthetic data set. Huge tables. It then constructs the code to load the tables into Qlik Sense, including concatenating the Patient and Encounters fact tables and creating concatenated keys for the dimensional tables. What about all of that wonderful metadata about the fields? Yeah, we pull that in as well, because governance is important. Then we build Master Dimensions for all of the fields with that metadata, including the sample values. Now data modelers/designers can see the data, and end users can see it all, so they know they can trust the answers and act on them. Chef Qlik Dork and Chef Claude were really cooking in the kitchen for this one.
PS - This was the beginning of the application. See Video 5 - Show Me How It Works above to see the final application and how it interacts with the Snowflake Cortex Agent API for the full end user experience of awesomeness. I'm talking about the results of questions being displayed as charts and tables, and users can see the SQL that was generated. The data returned is ingested into the Qlik data model, so users can then filter to the records returned and see all of the details to answer their next 10 questions. What if they ask about big data tables that aren't loaded into Qlik Sense? No problem: we go pull that data live.
As you know by now, MCP servers are essentially invisible. They perform superhuman, highly performant tasks, but they lack a visual host. The Synthetic Healthcare video demonstration above used Claude as the user interface, but now that Snowflake has officially released Coco (I mean, Cortex Code), I figured I had better ensure our joint partners could do their happy dance and take advantage of both of these leading-edge power tools.
Previously I created a post called Creating your Secret Sauce, in which I described the process of creating and using #skills. Guess what? The same skill files that I created and shared for Claude can be imported and used directly by CoCo. You gotta be loving that.
The videos in these courses subtly demonstrate the power of using skills to enhance the prompts. My skill for Master Items ensures that their naming convention is user friendly. When I ask to create a sheet ... the skill transforms it into "let's create a story that is prepared with love" instead of microwaving random mystery charts onto a sheet just for speed.
🎥 Course 405 - Cortex Code generating Master Dimensions and Master Measures in Qlik Sense via Qlik MCP
🎥 Course 410 - Cortex Code generating a sheet inside of Qlik Sense via Qlik MCP
The Show Me How to Build It videos for both series refer to other resources. I thought about making you crawl up to your monitor and squint in order to see the URLs, and then making you hand type 50 characters from memory. Then I thought it wouldn't be much fun for either of us, since I wouldn't actually see you doing it. Fortunately for you, I've included the needed resources below.
Calling Snowflake Cortex Agent API within Qlik Sense
Creating a REST API Connection for Snowflake Cortex Agent
Operatives, this is Dork, 007 Dork reporting from Q Division headquarters. Unlike my data, I prefer my Mountain Dew shaken, not stirred.
THE SITUATION: The dreaded Dashboard Disruption Monkey Gang is on the loose again, and we need to find which agent has experience dealing with them. Fast.
YOUR MISSION: Create your first Qlik Answers knowledge base to track Q Division agent dossiers, then build an assistant that can query this intelligence on demand.
DELIVERABLE: A fully functional knowledge base containing agent information with an assistant capable of answering questions about your operatives.
What You'll Need:
Download Mission Pack: 📥 You will find the QlikAnswers_SwarmAgents_Dossiers.zip attached
Video Intelligence Briefing: 🎥 Watch the Full Mission Walkthrough
First, let's set up your training environment. Navigate to your Qlik hub and create a space called "Q Division Field Academy" (or use an existing space - this is your headquarters for all future training missions).
Once in your space, navigate to the Answers section in the hub where you'll see options for assistants, knowledge bases, data connections, and file uploads.
Click to create a new knowledge base and name it: "Agent Information"
Pro tip: Normally you'd add a detailed description here. For field training, we're moving fast, but in production you'd want to document what this knowledge base contains and its intended use.
You'll see three options for data sources:
Unzip the mission pack you downloaded and you'll find 7 agent dossier PDFs.
Simply drag and drop all 7 files into the upload area. You should see all seven appear in your upload queue.
Click "Upload" and watch as Q Division's finest get cataloged into your system.
⚠️ FIELD NOTE: Here's where rookies often get tripped up!
After upload, check the Index Status. It will say "Never been indexed" - this means the RAG (Retrieval-Augmented Generation) system hasn't parsed your documents yet. You cannot query unindexed data.
Click "Index All" and switch to the flat view to watch the progress. With only a handful of pages per dossier, this should complete in seconds.
Refresh your screen. When you see "Index Status: Completed" with a timestamp, your intelligence is ready for deployment.
Now let's build an assistant that can query this knowledge base.
Click to create a new assistant and name it: "Field Training Assistant"
Add your Agent Information Knowledge Base.
We'll cover conversation starters in a future module - these are pre-written prompts that help users know what questions to ask.
Time to validate your setup. Ask your assistant:
"Which agent has interacted with the dreaded Dashboard Disruption Monkeys?"
Watch the reasoning panel (this is where the magic happens):
Click on the citation link and it will jump directly to the source document, highlighting exactly where that information was found in the dossier.
What You've Accomplished:
Validation Check: Can you ask your assistant "Which agent dealt with the Dashboard Disruption Monkeys?" and get back "Assembler Agent" with a citation? If yes, mission accomplished! 🎯
Challenge Exercise (Optional): Try asking other questions about your agents. What skills do they have? What operations have they completed? Test the limits of what your knowledge base knows!
What's happening with RAG and indexing?
When you upload PDFs to a knowledge base, Qlik Answers uses RAG (Retrieval-Augmented Generation) to:
Until indexing completes, the content is just raw files - the AI can't "see" it yet. Think of indexing as translating your documents into a language the AI agents understand.
Understanding the Agent Reasoning Flow
Qlik Answers uses multiple specialized agents:
Answer Agent: The orchestrator. Receives your question, determines what data sources are needed, coordinates other agents, and formats the final response.
Knowledge Base Agent: Specialized in searching unstructured documents. Uses semantic search to find relevant passages and return citations.
This multi-agent approach allows each specialist to do what it does best, similar to how Q Division has different operatives with different skills!
Every answer includes citations showing exactly where the information came from. This is critical for:
In enterprise analytics, "because the AI said so" doesn't cut it. Citations provide the audit trail.
You've successfully completed your first Q Division field training mission. You're now equipped to turn unstructured documents into queryable intelligence using Qlik Answers.
Remember: In analytics, as in espionage, the right question is more valuable than a thousand answers.
Dork, 007 Dork, signing off. Keep your data shaken and your queries stirred.
Questions? Feedback? Spotted a Dashboard Disruption Monkey? 👎 Use the feedback button or reach out to your Q Division training coordinator
This article assumes you understand:
* Modifying the Qlik NPrinting Repository will void the Support Agreement! Changes to the database should be performed by the Qlik NPrinting services only or through the use of supported API calls.
Overview
If, for some reason, you lost the Administrator role for your Qlik NPrinting Admin account and you are unable to change settings in the Web Console, you can follow this article to restore administrative access to the target account.
Symptoms
You are unable to edit/change any Administrative settings.
Resolution
You can force insert/update the user in the database with the administrator role id.
The first step is to connect to the database using pgAdmin.
The second step is to identify the administrator role id using pgAdmin.
In my test environment, the administrator role equals "6b800774-db4f-4b5e-a975-24e5b86b5ece".
The third step is to identify the ID of the target user using pgAdmin.
In my test environment, the user ID equals "1628ad97-75af-4cb3-897f-da1cb07777e5".
The fourth step is to update the "role_recipient" table.
Use the following command line, replacing the information you previously retrieved.
Insert into public.role_recipient VALUES ('user id','administrator role id');
In my scenario, it will be:
Insert into public.role_recipient VALUES ('1628ad97-75af-4cb3-897f-da1cb07777e5','6b800774-db4f-4b5e-a975-24e5b86b5ece');
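If you are unsure where pgAdmin shows these IDs, a lookup query along these lines can help. This is only a sketch: the table and column names below are assumptions, and the exact repository schema varies by Qlik NPrinting version, so verify the actual names in pgAdmin's object browser first.

```
-- Hypothetical lookup sketch; confirm the real table/column names in pgAdmin.
-- Find the Administrator role ID (table name "role" is an assumption):
SELECT id, name FROM public.role WHERE name = 'Administrator';

-- Find the target user's ID (table name "recipient" is an assumption):
SELECT id, email FROM public.recipient WHERE email = 'user@example.com';
```

Once both IDs are confirmed, use them in the Insert statement shown above.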
Click "Execute script" or press F5.
Go back to Web Console and refresh the page. The target user should have Admin privileges.
Environment
If users on your Qlik Cloud tenant are experiencing frequent WebSocket disconnections (Error 1006) and your organization has an IP Allow List configured, IPv6 may be the root cause. Qlik Cloud's IP Allow List feature currently only supports IPv4 addresses. When users connect via IPv6, their traffic does not match any entry in the allow list, causing the connection to be dropped.
Symptoms
Users experience random disconnections while working in Qlik Cloud apps. The error code reported is Error 1006 (WebSocket disconnection). The issue affects multiple users simultaneously and is not tied to a specific app or sheet. The problem is more prevalent after network changes or in environments where IPv6 is enabled by default.
Resolution
Qlik has confirmed they are adding native IPv6 support to the IP Allow List feature. The estimated delivery is between Q4 2026 and early 2027. You can follow the progress and vote on the ideation item here:
Workaround
Work with your IT or network team to disable or restrict IPv6 on the network infrastructure used to access Qlik Cloud. This forces client connections to use IPv4, which is properly handled by the IP Allow List. Once IPv6 traffic is restricted, users should no longer experience WebSocket disconnections caused by this issue.
Cause
Qlik Cloud's IP Allow List feature only evaluates IPv4 addresses. When a user's device connects over IPv6, the source address does not match any entry in the allow list, and the connection is rejected or dropped at the WebSocket layer, resulting in Error 1006. This is a known platform limitation. Qlik has confirmed they are adding IPv6 support to the IP Allow List feature, with an estimated delivery between Q4 2026 and early 2027.
Related Content
Qlik Help: IP Allow List configuration: https://help.qlik.com/en-US/cloud-services/Subsystems/Hub/Content/Sense_Hub/Admin/mc-configure-ip-allowlist.htm
When using Qlik Cloud with Qlik's built-in Identity Provider (Qlik IDP), deactivating or deleting a user's corporate email address does not automatically remove or disable their access to Qlik Cloud. This is a common misconception that can create a security gap during employee offboarding.
Symptoms
A user's corporate email has been deactivated or their account deleted from the corporate directory. However, the user still appears as active in Qlik Cloud Management Console and their license seat remains occupied. The user may still be able to log in to Qlik Cloud if they have an active session or a previously set password.
Resolution
Administrators must manually remove or disable the user directly in Qlik Cloud. To do this, go to Management Console, navigate to Users, find the user and either remove them or change their role to No Access. This should be added as a required step in your organization's employee offboarding checklist.
If your organization requires automatic user deprovisioning, the permanent solution is to replace Qlik IDP with an external Identity Provider such as Microsoft Entra ID (Azure AD) or Okta, configured with SCIM provisioning. With SCIM enabled, when a user is disabled or removed in your corporate directory, Qlik Cloud is automatically notified and the user's access is revoked without any manual intervention.
Cause
Qlik IDP is a standalone identity store built into Qlik Cloud. It manages user accounts independently from any external corporate directory. Because there is no synchronization between Qlik IDP and external systems, changes made outside of Qlik Cloud such as deactivating an email or removing an Active Directory account have no effect on the user's Qlik Cloud account.
Environment
If your Qlik Cloud tenant hostname was recently renamed (for example, from company-old.us.qlikcloud.com to company-new.us.qlikcloud.com) and users are now receiving OAuth errors when connecting the Qlik Excel Add-in, this article explains what is happening and how to resolve it.
Symptoms
Users open Excel, launch the Qlik Add-in, and receive one of the following errors during authentication:
OAUTH-1: redirect_uri is not registered (Status 400)
OAUTH-14: OAuth client is not authorized
These errors appear even though the add-in was working before the tenant rename.
Resolution
Important: If users access the tenant through both the original URL and an alias, add both to the allowed origins field. Missing either one will cause authentication to fail.
Cause
The Excel Add-in manifest.xml and its OAuth client are created as a pair and are tied to a specific tenant hostname. When the tenant is renamed, the existing OAuth client retains the old hostname in its redirect URIs and allowed origins, triggering OAUTH-1. If the manifest is recreated but the OAuth client configuration still has incorrect settings, OAUTH-14 follows. The manifest file cannot be edited to point to a new hostname. It must be regenerated from a newly created OAuth client on the correct tenant.
Related Content
If you were looking for a super deep technical explanation of each of the Qlik Answers Agentic Agents ... you've come to the wrong place.
As part of my Q Division training series, I wanted to humanize each of these #AI titans for you just a little bit. After all, just seeing their names on the screen while they are working hard on your behalf is really impersonal.
This is from an official presentation about the Agentic Agents that are part of Qlik Answers.
But as the Dork, 007 Dork, I assumed you would want a little more understanding. So, each of their totally fictitious, and hilarious, dossiers is attached.
Simply download the zip file and spend as many hours laughing as you read each and every page about each of the agents. No self-destructing. The files will remain as long as you want. Page after page of fun, mixed in with occasional insight.
But ensure their protection! They are highly classified and for your eyes as a Q Division operative in training only.
When working with large datasets, loading everything in a single query is rarely an option. When the source can't return all the data at once, you need to break the load into steps: by date, by region, by file. That's exactly what loops are for in Qlik Sense script. They reduce code volume, make the load process manageable, and allow you to build incremental ETL pipelines that refresh only the required data slice without overloading the source.
Qlik Sense has four types of loops, and each has its own use case.
FOR..NEXT: when the number of iterations is known in advance. The period is defined through a variable storing the difference between dates. Since the counter starts at 0, we use -1 to avoid an extra iteration:
LET vDaysCount = Date#(vEndDate,'YYYY-MM-DD') - Date#(vStartDate,'YYYY-MM-DD');
FOR i = 0 TO $(vDaysCount) - 1
LET vDate = Date(Date#(vStartDate,'YYYY-MM-DD') + i, 'YYYY-MM-DD');
// query body
NEXT i
FOR EACH..NEXT: iterates over a list of values: regions, file names, product codes. The list can be defined explicitly or generated using functions like FileList(), DirList(), or FieldValueList():
FOR EACH vRegion IN 'UZB', 'KAZ', 'RUS'
LOAD * FROM [lib:///_$(vRegion).qvd](qvd);
NEXT vRegion
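When the list of values isn't known in advance, FileList() can generate it at load time. A minimal sketch, assuming a folder data connection named Data that contains the QVDs (both the connection name and the file mask are placeholders):

```
// Iterate over every QVD matching the mask in the (hypothetical) Data connection
FOR EACH vFile IN FileList('lib://Data/*.qvd')
Facts:
LOAD * FROM [$(vFile)] (qvd); // tables auto-concatenate when field sets match
NEXT vFile
```

DirList() works the same way for iterating over subfolders, and the two can be nested to walk a whole directory tree.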
WHILE: used inside a LOAD statement together with AutoGenerate and the IterNo() function. The classic example is generating a Master Calendar:
TempCalendar:
LOAD Date($(vMinDate) + IterNo() - 1, 'YYYY-MM-DD') as tDate
AutoGenerate 1
While $(vMinDate) + IterNo() - 1 <= $(vMaxDate);
The condition is evaluated on each row via IterNo(): this is row generation inside a single LOAD statement.
DO WHILE: a loop with an updatable variable. The condition is re-evaluated before each iteration. Used for incremental loading from an external source with a date-by-date breakdown:
LET vEndDate = Date(Today(), 'YYYY-MM-DD');
LET vStartDate = Date(Today() - 7, 'YYYY-MM-DD');
LET vIterationDate = vStartDate;
DO WHILE vIterationDate <= vEndDate
TempOneDay:
// query body: the source query for a single day, e.g. ending with
// WHERE date_field = toDate('$(vIterationDate)')
STORE TempOneDay INTO [lib://…/data_$(vQVDName).qvd](qvd); // vQVDName holds the per-day file name
DROP TABLE TempOneDay;
LET vIterationDate = Date(Date#(vIterationDate,'YYYY-MM-DD') + 1, 'YYYY-MM-DD');
LOOP
At first glance, a KPI is just a number on a dashboard. But depending on the task, cards can vary significantly in structure. I'd highlight several groups:
Today I want to focus on monitoring KPIs where you can immediately see not just the current value, but also the dynamic: delta to the previous period and a trend line.
In Qlik Sense, I build these cards using an HTML Box. It's not a separate widget; it's a string expression where HTML is assembled via & concatenation, and Qlik calculations are embedded directly inside the string:
='<div style="font-family:Arial; padding:12px;">' &
'<div style="font-size:11px; color:#888;">OPERATIONS COUNT</div>' &
'<b style="font-size:28px;">' & Num(sum(cnt)/1e6, '#.##0,00') & ' M</b>' &
'<span style="color:' & If(wow > 0, '#2E8B57', '#B22222') & ';">' &
Num(wow, '+##0,0%;-##0,0%') & ' WoW' &
'</span>' &
'</div>'
The delta color is calculated dynamically with an If() expression directly inside the style attribute. Writing this HTML manually isn't necessary: ChatGPT handles it well; just describe the layout you want.
Pro tip for reuse: use variables and master measures. To create a second KPI with the same design, simply copy the block and replace the master measure and variables using Find & Replace.
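One way to take the reuse tip further is a parameterized variable: Qlik's dollar-sign expansion accepts arguments ($1, $2, ...), so the card template can live in a single script variable. A minimal sketch with the styling trimmed (the variable name is my own; a real card would carry the full styles from the block above):

```
// Load script: SET stores the template text literally, without evaluating it.
// On expansion, $1 (the value expression) and $2 (the label) are substituted.
SET vKpiCard = '<div style="font-family:Arial; padding:12px;">' &
'<div style="font-size:11px; color:#888;">' & $2 & '</div>' &
'<b style="font-size:28px;">' & $1 & '</b></div>';
```

In a chart, expand it as = $(vKpiCard([Operations Count M], 'OPERATIONS COUNT')). One pitfall: commas inside an argument act as argument separators, so pass comma-free expressions, for example a master measure that already applies the Num() formatting.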
The card with a trend line at the bottom is built from three objects inside a Layout Container:
The result looks like a fully polished product design using only the built-in tools of Qlik Sense.
Dashboard example built in Qlik Sense showing 6 KPI card types: Monitoring (value + delta + trend line), Multi-metric (primary metric with additional metrics), Status (with background color), Comparative (two dimensions side by side), Progress (value + % of target), and Single (standalone number).
Welcome to Q Division Headquarters, Operative.
Behind these doors lies the future of AI-powered analytics. Qlik Answers isn't just another tool—it's your answer intelligence platform that lets anyone ask natural language questions of their unstructured data, their structured data, or both working together in perfect coordination.
Your mission, should you choose to accept it: Master the Q Division agent swarm architecture, earn your Field Operative certification, and deploy answer intelligence across your organization.
Inside, you'll meet our specialist agents, complete hands-on training exercises, watch live mission playbacks, and prove your tactical intelligence through the Agent Recognition Protocol.
By the time you exit these doors, you won't just understand Qlik Answers—you'll be ready to implement your first deployments (or guide others through theirs) with confidence.
The briefing room awaits. Enter when ready, Operative.
Agent Roster
Q Division operates on what the intelligence community calls a "swarm architecture" – the industry gold standard for AI agent collaboration. Instead of relying on a single agent to handle every mission, we've assembled a specialized team where each agent excels at their specific domain. When you ask a question, our system intelligently identifies which agents have the expertise needed and orchestrates a precision handoff sequence to deliver the most accurate answer.
Think of it like a real intelligence agency: you wouldn't send the same operative to handle cryptography, field reconnaissance, AND financial analysis – you'd send specialists who work together, each completing their part of the mission before passing critical intelligence to the next agent. That's exactly what Qlik Answers does, ensuring you get enterprise-grade accuracy through expert collaboration. Meet the agents who'll be working your missions:
Operation: Swarm Intelligence - Agent Dossiers ►
Welcome to active duty, Operative. In this section, you'll receive the same intelligence assets that Q Division uses in live operations: a fully configured Qlik Answers application, pre-loaded knowledge bases, and the Answer Assistant framework that orchestrates our agent swarm. This isn't a simulation – these are production-grade materials that you'll download, deploy, and interrogate with real questions. You'll see firsthand how questions flow through the agent network, learn to craft queries that leverage each agent's expertise, and build the muscle memory needed to guide others through their first Qlik Answers deployment. By the end of these exercises, you won't just understand the theory – you'll have hands-on experience running actual missions.
Unstructured Data
🎯 Q Division Field Training: Module 2 - Application Documentation - Building another Knowledge Base and enhancing your Assistant with the additional knowledge.
🎯 Q Division Field Training: Module 3 - Expense Statements - Building an enterprise grade Knowledge Base with Advanced Chunking and enhancing your Assistant with the additional knowledge
Structured Data
🎯 Q Division Field Training: Module 100 - Uploading Q Division Operations Application - Asking Answers
Unstructured + Structured
🎯 Q Division Field Training: Module 200 - Dashboard Inconsistency - Asking Questions of Unstructured and Structured Data in your Assistant
Field Operative, it's time to see the agents in action. In this section, you'll watch live mission recordings where real questions trigger the full agent swarm workflow. On one side of your screen, you'll see the complete Qlik Answers solution being constructed in real-time. On the other, you'll see which Answer Agent is currently on mission – giving you a visual understanding of who does what, when they're called into action, and how they hand off intelligence to the next specialist in the sequence. Here's where it gets powerful: since you already have the same materials from Field Training, you can run these exact same questions in your own Qlik Answers environment and watch your agents work the mission alongside mine. These aren't just recordings to watch passively – they're your playbook for getting comfortable with the agent workflow before you guide others through it.
| Mission Playback - Review 1 - Counter Terrorism | In this Q Division mission playback, an auditor spots a suspicious English-Russian dictionary expense on case 1006—a Swedish healthcare analytics mission that had nothing to do with Russian operations. Using Qlik Answers, the auditor conducts a comprehensive counter-terrorism review that reveals Agent 099 (Alec Trevelyan) purchased suspicious items including the dictionary from an oddly-named bookstore, multiple alcoholic beverages at "Cafe Pushkin" (named after a famous Russian poet), and celebrated with expensive champagne before the case was even closed. It's a masterclass in how conversational analytics can connect the dots between structured databases and unstructured documents to surface anomalies that traditional reporting would miss. |
Final assessment, Field Operative. Before you earn your Clearance Level certification, you need to prove you can recognize which agents handle which intelligence requests. We'll present you with real-world questions – the kind partners and customers will actually ask – and you'll identify which Answer Agent(s) will be deployed on the mission. This isn't about memorizing definitions; it's about developing the tactical instinct to know instantly: "That's a Data Agent question," or "This one needs both Knowledge and Visualization working together." Pass the Agent Recognition Protocol, and you'll have earned more than a certification – you'll have the operational confidence to guide anyone through their first Qlik Answers deployment.
Take the Agent Recognition Protocol Test
Operatives, this is Dork, 007 Dork, reporting from Q Division headquarters. We have a situation that keeps every intelligence analyst up at night: numbers that don't add up.
THE SITUATION: The original programmer for our Q Division application has left the company. A talented new designer built a beautiful dashboard, but there's a critical problem: Active Cases (16) + Closed Cases (15) = 31... but Total Cases shows 50. In an intelligence agency where no mission can be left behind, this kind of inconsistency is unacceptable.
YOUR MISSION: Use Qlik Answers to investigate the application and notes from the original programmer. In other words Structured Data -AND- Unstructured Data
DELIVERABLE: A dashboard that accurately reflects all case statuses with proper reconciliation, plus an understanding of how to use AI to diagnose and fix data model inconsistencies.
What You'll Need:
Video Intelligence Briefing: 🎥 Watch the Full Mission Walkthrough
Navigate to your Q Division Field Academy catalog and locate your Field Training Assistant. This time, instead of adding another knowledge base, we're going to add structured data from your application itself.
Click to add content to your assistant, but instead of selecting a knowledge base, choose your Q Division application. Make sure you're in your Q Division Field Academy space and select the application you uploaded in Mission 100.
Pro tip: Once configured, your assistant can now query both unstructured data (knowledge bases with PDFs) AND structured data (your application's data model). This hybrid capability is what makes Qlik Answers so powerful for investigative work.
Now comes the moment of truth. Ask your assistant:
"Please tell me the total number of cases we have, the total closed cases we have, and the total open cases we have. Can you help me understand why there seems to be an inconsistency with the totals? Because what I see in my application seems wrong. The programmer's no longer with our company."
Watch the multi-agent collaboration unfold in the reasoning panel:
⚠️ FIELD NOTE: Pay close attention to this workflow - it's the heart of how Qlik Answers investigates data issues.
The Answer Agent reveals the mystery: there's a hidden "Resolved" status in your data model containing 19 cases!
The complete breakdown:
The Explanation: "Resolved" represents cases where agents completed their work, but haven't yet received client confirmation to officially close them. It's essentially a holding status between active work and final closure.
The assistant even suggests intelligent follow-up questions like "What is the breakdown of the 19 resolved cases by priority level?" or "How many days on average do cases spend in resolved status before closing?"
Now that you understand the problem, ask your assistant to fix it:
"Can you add a KPI to my dashboard to show me the number of resolved cases so that our totals will look correct?"
Watch another beautiful agent collaboration:
Return to your Q Division application. The AI has created a new sheet called "Case Status Dashboard" with the Resolved Cases KPI displaying the value 19.
Important: Since your original dashboard is a public sheet, the AI created a new private sheet rather than modifying the public one. To complete the mission, you'll need to manually:
Now your dashboard shows: Closed (15) + Resolved (19) + Active (16) = Total (50) ✓
What You've Accomplished:
✓ Connected an application to your Qlik Answers assistant
✓ Used Qlik Answers to diagnose a dashboard inconsistency using programmer documentation
✓ Discovered a hidden status field causing reconciliation issues
✓ Generated a new KPI visualization through natural language
✓ Understood multi-agent workflows combining semantic search, data analysis, and dashboard authoring
Validation Check: Can your dashboard now reconcile to 50 total cases with all statuses visible? If yes, mission accomplished!
Challenge Exercise (Optional): Ask your assistant to build a text object for your dashboard that explains the case status values.
For this mission, the Qlik Answers Assistant needed access to both the structured application and its data, as well as the unstructured programmer notes.
This mission showcased all of the agentic agents that Qlik Answers uses:
Answers Agent: Orchestrates the entire process: decomposes the questions users ask into sub-tasks and presents final findings back to users.
Knowledge Agent: Searches through the vector database to retrieve relevant unstructured data.
Semantic Agent: Expert at understanding your data model's structure, field names, and relationships. It's like a data dictionary that speaks human language.
Data Analyst Agent: Designs analysis packages that specify what data to use, how to analyze it and what output best answers the question(s).
Chart Agent: Specializes in visualization design after receiving the analysis package from the data analyst agent.
Dashboard Authoring Agent: Consumes outputs from the Answer Agent, Data Analyst Agent, or Chart Agent, and checks whether the required charts already exist.
Each agent is optimized for its specific task. When they collaborate, you get both depth and accuracy.
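The orchestration pattern described above - one agent decomposing a question and delegating to specialists - can be sketched conceptually. Everything here is illustrative: the routing logic, interfaces, and return values are assumptions for teaching purposes, not Qlik's actual implementation.

```python
# Illustrative sketch of an orchestrator delegating sub-tasks to
# specialist agents, mirroring the Qlik Answers flow described above.
# Agent interfaces and routing logic are hypothetical.

def knowledge_agent(task):
    return f"[unstructured context for: {task}]"

def semantic_agent(task):
    return f"[data-model fields relevant to: {task}]"

def data_analyst_agent(task):
    return f"[analysis package for: {task}]"

SPECIALISTS = {
    "search_docs": knowledge_agent,
    "map_fields": semantic_agent,
    "analyze": data_analyst_agent,
}

def answers_agent(question):
    # Decompose the question into sub-tasks (hard-coded here for
    # illustration), dispatch each to the right specialist, then
    # synthesize the findings into one answer.
    sub_tasks = [("search_docs", question),
                 ("map_fields", question),
                 ("analyze", question)]
    findings = [SPECIALISTS[kind](task) for kind, task in sub_tasks]
    return " | ".join(findings)

print(answers_agent("Why don't my case totals reconcile?"))
```

The key design idea is the same one the mission demonstrates: no single agent answers the whole question; each contributes its specialty and the orchestrator assembles the result.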
The scenario in this mission - programmer leaves, new designer inherits an application, numbers don't reconcile - happens constantly in enterprises. Traditional approaches require:
With Qlik Answers, you simply ask what's wrong and let the AI investigate the data model itself. The time savings are enormous, but more importantly, you don't lose critical business logic when people change roles.
This specific scenario - a holding status between "active" and "closed" - appears in many business processes:
These intermediate states are often invisible in executive dashboards, leading to the exact reconciliation problems we solved in this mission. The pattern repeats across industries because workflows are rarely binary (open/closed) - they have nuanced intermediate states that matter for operations but get overlooked in reporting.
You've successfully completed Field Training Mission 200 and learned how to use Qlik Answers as your data detective. As I mentioned in the video, I honestly never suspected a day when AI could auto-transcribe programmer notes, let alone use those notes alongside the application itself to solve data mysteries. But here we are, operatives. The future of analytics isn't just faster dashboards - it's intelligent systems that understand your data as well as the people who built it.
Questions? Feedback? Have you ever encountered a situation like this in the past where you had to take over maintenance of an application and something didn't add up?
"Let's make it simple" - one recipe at a time
Just like real chefs, each of you has your own secret ingredients that make your Qlik work delicious and that people can recognize. Your secret sauce goes beyond just throwing random objects on the screen. It goes beyond just slapping a Select * of tables into your load script and data model. It goes beyond making up new expressions in charts.
But your secret sauce takes time to prepare.
I know mine sure does.
Because it's manual. If I want things to be a certain way, or look a certain way, I have to spend the time. This post and the video are here to encourage you: when you enter your Claude chat sessions, you don't have to go it alone.
You can predefine your secret sauce so that it's always at the ready - taking the great meal you have in your head, the one Chef Claude helps you prepare, and having your secret sauce added to it.
Your secret sauce prevents Claude from simply doing whatever you just asked because you were in a hurry.
Your secret sauce provides the boundaries within which Claude will work, and ensures that what you generate follows your approved standard.
While you don't want Claude to follow a "just sling it on the screen" methodology, you also don't want to have to do this each and every time:
Like any new person who might join your team ... you want Claude to follow your 99 explicit, gold-standard guidelines without having to type them in. That means taking the time to teach him the skills needed to ensure your standard is followed. Just as you would teach Fred, Sally, Suzie, or Bob.
Each of the following skills represents a critical ingredient in the art of making complex analytics deliciously simple - no matter how much of a rush you're in.
Building reusable, governed analytics components
This skill governs how master dimensions and master measures are created, documented, and maintained in Qlik applications. It establishes a governance framework that treats master items as reusable, governed analytics building blocks that must be thoroughly documented with descriptions, tags, business context, and calculation logic. The skill defines when to create master items versus ad-hoc fields, emphasizes rich metadata that helps users understand what they're using, and establishes naming conventions that make items discoverable. It covers expression patterns for measures including proper aggregation contexts, handles dimension creation with drill-down hierarchies, and ensures that master items follow the same field naming standards as the load script. The skill transforms master items from simple field lists into a governed analytics vocabulary that enforces consistency across all sheets and visualizations while making it easier for users to self-serve.
🎯 Chef's Philosophy: Master items are your mise en place - prepare once, use everywhere. Good governance starts with well-documented, consistently named building blocks that anyone can understand and reuse. 📊
Audience-driven dashboard design methodology
This skill implements Qlik Dork's audience-driven workflow methodology for building Qlik sheets and dashboards. It starts by identifying the audience type (Financial, Clinical, Operations, or other domain-specific roles) and transforms metrics to match that audience's motivation and mental model. The skill follows a structured workflow: audience identification → metric transformation → context parameter collection → template selection → sheet building using a Story→Data→Visuals approach. It emphasizes that different audiences need the same data presented differently based on their decision-making context and priorities. The skill includes template selectors for common use cases, design patterns for effective visualizations, and ensures that dashboards tell a clear story rather than just dumping data on the screen. It transforms sheet creation from "what charts should I add?" into a strategic design process that starts with understanding who needs to make what decisions and works backward from there.
Standardized data loading patterns
This skill establishes the foundational rules for generating Qlik load scripts that connect to Snowflake and transform data correctly. It mandates a critical "stop and ask first" workflow - you must gather information about the audience, data grain, required fields, and business context before writing any code. The skill defines specific syntax patterns including the Snowflake connection format using LIB CONNECT, the preceding LOAD pattern for transformations, and strict field naming conventions using table prefixes (like fct_adm_admission_id). It covers date handling standards using Floor() for clean date fields, calendar key creation as integers, and proper table aliasing with square brackets. The skill also includes YAML-based code generation patterns, validation workflows using qlik_create_data_object to verify field existence, and emphasizes the "one wrong decimal = lost trust" philosophy where accuracy always trumps speed.
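As a hedged illustration of the patterns this skill enforces, a generated load script might look something like the sketch below. The connection name, table, and field names are all hypothetical; the skill itself defines the authoritative rules.

```qlik
// Hypothetical sketch only - connection, table, and field names are illustrative.
LIB CONNECT TO 'Snowflake_Demo';                 // Snowflake connection via LIB CONNECT

[fct_adm]:                                       // table alias in square brackets
LOAD                                             // preceding LOAD for transformations
    fct_adm_admission_id,                        // table-prefixed field names
    Date(Floor(fct_adm_admit_datetime)) AS fct_adm_admit_date,  // clean date via Floor()
    Floor(fct_adm_admit_datetime)       AS fct_adm_date_key;    // integer calendar key
SELECT
    ADMISSION_ID   AS fct_adm_admission_id,
    ADMIT_DATETIME AS fct_adm_admit_datetime
FROM ANALYTICS.PUBLIC.ADMISSIONS;
```

Notice how the preceding LOAD handles all transformations while the SELECT stays a simple pull - exactly the separation the skill mandates, and one reason generated scripts stay predictable and repeatable.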
Quality control before deployment
This skill provides a secondary validation process to verify calculations are correct before declaring work complete. It acts as a quality control checkpoint that prevents common mistakes like Sum() versus Count() errors from reaching end users. The skill defines specific validation workflows to check measure calculations, dimension values, filter logic, and data model relationships. It establishes a systematic review process that catches errors before they erode trust, reinforcing the "one wrong decimal = lost trust" philosophy. The skill triggers after creating any calculated measures, KPIs, or complex expressions, serving as the final quality gate before presenting work to users. It's essentially a "trust but verify" framework that ensures analytical accuracy through structured verification steps rather than hoping you got it right the first time.
Critical thinking framework as Chief Question Officer
This skill establishes rules for how to answer data analysis questions in a way that promotes critical thinking and data literacy. It requires transparency about assumptions, defaults, and data interpretation choices rather than just providing answers. The skill mandates explaining the "why" behind analytical decisions - why certain filters were applied, why specific aggregations were chosen, why particular time periods were used. It transforms simple question-answering into an educational process where users learn to think more critically about their own data queries. The skill prevents the "black box" problem where users get answers without understanding the logic behind them, and instead builds their analytical capabilities by making the reasoning transparent. It's designed to teach users to ask better questions rather than just accepting whatever answer comes back.
Professional branding and delivery standards
A simple, straightforward skill that ensures Claude uses official Qlik brand colors when creating PowerPoint presentations. This skill provides the exact RGB and hex values for all six Qlik brand colors (Green, Blue, Aqua, Blended Green, Fuchsia, and Deep Purple) along with ready-to-use Python code snippets for python-pptx implementation.
Perfect for anyone who needs to create Qlik-branded presentations and wants consistent, accurate color usage every time. Just upload this skill to Claude, and it will automatically reference these colors when building your decks.
What's included:
No fluff, no complicated guidelines - just the colors you need to stay on-brand.
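A palette helper of the kind this skill provides might be sketched as below. Important: the hex values here are placeholders, not the official Qlik brand colors - the actual skill file supplies the exact values.

```python
# Sketch of a brand-palette helper like the one the skill provides.
# Hex values below are PLACEHOLDERS, not official Qlik colors.
PALETTE = {
    "Green": "11AB7A",   # placeholder
    "Blue":  "0E2E61",   # placeholder
    "Aqua":  "54B8B4",   # placeholder
}

def hex_to_rgb(hex_str):
    """Convert a 6-digit hex string to an (R, G, B) tuple of ints."""
    return tuple(int(hex_str[i:i + 2], 16) for i in (0, 2, 4))

# With python-pptx, applying a palette color to a shape fill looks
# roughly like this (not executed here):
#   from pptx.dml.color import RGBColor
#   shape.fill.solid()
#   shape.fill.fore_color.rgb = RGBColor.from_string(PALETTE["Green"])

print(hex_to_rgb(PALETTE["Green"]))
```

Keeping the palette in one named dictionary is what makes "on-brand every time" possible - decks reference the palette, never hard-coded color literals.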
In this video I demonstrate how these skills turn the bland into the sublime, each and every time. Not only does the agentic nature of Claude working with Qlik MCP save you time; as you will see, it can also ensure that your gold standard is followed every single time.
Even though we both know there are occasions when you don't follow your own rules yourself, due to time.
THE SITUATION: You've proven yourself with unstructured data across Field Training Modules 1-3. You've mastered knowledge bases, advanced chunking, and contextual intelligence. Now it's time to shift gears. The real power of Q Division emerges when you combine unstructured documentation WITH structured data that resides in live applications.
YOUR MISSION: Get access to the Q Division Application - the actual case tracking system that our operatives use in the field. Upload it to your environment, make it available to Qlik Answers, and ask questions about the structured data within. You'll meet two NEW agents: the Semantic Agent and the Data Analyst Agent.
THE MYSTERY: Figure out exactly what the "Active Case Percentage" KPI is showing.
DELIVERABLE: A fully indexed Qlik application connected to Qlik Answers, with the ability to ask natural language questions about structured data and receive detailed explanations.
PREREQUISITES: ⚠️ Complete Modules 1-3 first to understand knowledge bases and unstructured data. This module introduces structured data as a contrast and foundation for Module 200 where we'll combine BOTH!
What You'll Need:
Download Mission Pack: 📥 Q Division Application attached
Video Intelligence Briefing: 🎥 Watch the Full Mission Walkthrough
Navigate to your Q Division Field Academy space in your Qlik hub.
Click to upload a file and select the QDivision.qvf file that you downloaded from the community page.
The upload process should take just a few seconds. Once complete, you'll see the Q Division application appear in your space.
Click to open the application - let's take a quick tour before we make it available to Qlik Answers.
⚠️ FIELD NOTE: This is the KEY step that connects structured data to AI!
Here's something you need to know about ANY of your apps - whether it's this Q Division application we just uploaded, or one that you already have on your site:
You must explicitly enable Qlik Answers access in the application settings.
Why? Not all of your applications should be made available. When you're doing development on things, you don't want to confuse users with 18 versions of the same application. Governance matters!
Here's how to enable it:
Watch what happens:
You'll notice a message appear saying "Indexing this application for you..."
This is similar to knowledge base indexing, but instead of parsing PDFs, it's analyzing:
Once it's done, you'll see a message: "Indexing complete"
The toggle will flip all the way and turn green ✓
You're ready to go!
We can start jumping into Answers, but let's hold off on pressing that button just for a second. Let's go to the dashboard - because you know the dashboard is the place to be!
Take a look at this very well-thought-out application:
What you'll see:
🤔 WAIT A MINUTE...
I don't know about you, but in 007 Dork's mind, that should be 31. What in the world? That doesn't add up!
I wish we had the original programmer to understand these values and what we might be missing...
🎯 SPOILER ALERT: Don't worry, operatives! In Field Mission 200, we're going to take a look at this because that is obviously a problem. And what we're gonna do in Mission 200 is look at questioning structured AND unstructured data together. If you'll recall from Module 2, I set this up for this very scenario when we talked about case status values! wink wink
For now, let's move on to the next question...
Go to the screen labeled "Field Mission 100" in the application.
You'll see it tells us that our Active Case Percentage is 16%.
Now, wouldn't it be nice if users could just... ask a question and get that KPI defined for them, beyond just the brief description in the Master Item?
Well, guess what? THEY CAN!
From within the application, click to open Qlik Answers
I'm going to paste this question right in here (because you've seen my typing, and we don't have all day):
"Can you describe for me in detail what the Active Case Percentage value is?"
Now watch the magic happen...
Not surprisingly, the Answer Agent is the first one to show up. Whether it's structured data or unstructured data, the Answer Agent is ALWAYS gonna be first, trying to figure out what you're looking to do.
But now watch as it goes through different processes behind the scenes...
The Flow:
1️⃣ Answer Agent (Orchestrator)
2️⃣ Semantic Agent ← NEW AGENT ALERT!
3️⃣ Data Analyst Agent ← ANOTHER NEW AGENT!
4️⃣ Answer Agent (Response Synthesis)
Look at what the AI returns:
"Active Case Percentage is a key performance indicator in our Q Division data. It shows you the percentage of cases that are currently Open or In Progress."
Notice what it did:
Business Context: "This metric helps Q Division management understand workload distribution and resource allocation needs."
Interpretation Guidelines:
Related Metrics You Might Want to See:
🤯 MIND. BLOWN.
The AI didn't just tell you what the KPI is - it gave you HOW to interpret it, WHAT benchmarks to consider, and WHAT other metrics complement it!
Click to expand "Show reasoning" or "Show details" to see the full agent workflow.
You'll see the complete conversation:
This transparency is CRITICAL for:
What You've Accomplished:
Validation Check: Can you ask your Q Division application questions about KPIs and get back detailed explanations including calculations, business context, and interpretation guidelines? If yes, you've mastered structured data querying! 🎯
The Mystery Remains: We still haven't solved why 15 + 16 = 50 instead of 31. And THAT, operatives, is where Module 200 comes in...
Challenge Exercise (Optional): Ask other questions about the Q Division application:
See how the Semantic Agent and Data Analyst Agent work together to provide comprehensive answers. Don't be surprised if a new Q Division agent assists!
You've successfully shifted gears from unstructured to structured data. You've met two new agents who specialize in understanding your applications' vocabulary and interpreting what your KPIs mean in business context.
Questions? Feedback? Are you becoming more comfortable with the Agentic experience of agents working together?
The name's Dork. 007 Dork. They say you're only as good as your questions. Well, lucky for you, I never miss.
THE SITUATION: Missions one and two involved rather static data. Today we're dealing with something much more near real-time: expense statements. After each Q Division mission closes, travel and expense statements get generated and logged. But something caught my eye while walking through accounting... a 4TB external hard drive and an encrypted USB drive on an agent's expense report. That seems a little "sus" to me, operatives. Double agent? Corporate espionage? Or legitimate operational expense?
YOUR MISSION: Build a knowledge base around agent expense statements using enterprise storage connections (like Amazon S3 buckets where files are constantly being inserted), enable advanced accuracy for complex multi-page tables, add this intelligence to your Field Training Assistant, and then test a theory about whether we've got a rogue agent on our hands.
DELIVERABLE: An enhanced assistant with three knowledge bases that can perform forensic accounting analysis across complex expense documents, understanding context that spans multiple pages and document sections.
PREREQUISITES: ⚠️ You must complete Modules 1 & 2 first! You'll need your existing Field Training Assistant with Agent Information and Application Design knowledge bases already configured.
What You'll Need:
Download Mission Pack: 📥 QDivision_Field_Expense_Reports.zip attached
Video Intelligence Briefing: 🎥 Watch the Full Mission Walkthrough
Navigate to your Answers section from your hub (not your applications - we don't want to see all that other junk cluttering our intelligence operations).
Click to create a new knowledge base and name it: "Agent Expense Statements"
Normally you'd add a description here, but we're field operatives on a mission, so let's keep moving!
⚠️ FIELD NOTE: This is NEW and IMPORTANT!
Before adding files, toggle on the "Enhanced Accuracy" flag.
Why? These expense statements contain:
Enhanced accuracy uses advanced chunking to handle these complex document structures. It takes a bit longer to process, but when you see what's in these expense statements, you'll absolutely understand why we need it.
The Trade-off:
For expense forensics, we need enhanced accuracy. Period.
Here's where it gets interesting. I'm going to show you TWO approaches:
Click "Add from connection" and select your space.
Choose your connection - in my case, "Q Division Expense Statements" which points to an Amazon S3 bucket.
The Power of Connections: When looking at enterprise file storage connectors, you can set up filters:
The Magic: As new expense statements get dropped into your S3 bucket, they can be indexed without manual uploads.
You can either:
In this scenario, the files stay in S3 - they're not copied to your Qlik tenant. You're indexing references to enterprise storage.
I would now select each expense statement from my bucket and add them to my knowledge base.
But wait... my poor field operatives out there don't have access to MY S3 bucket, and I'm not giving you my secret key! That's classified information!
So here's what we're ACTUALLY going to do:
Click "Add files" and choose "Browse" to upload files directly.
Navigate to your unzipped mission pack containing the Q Division expense statement PDFs.
Select ALL 15 expense statement files and upload them.
You'll see them loading into your knowledge base.
I'm not even going to suggest it because you're going to wag your finger at me...
These are NOT indexed yet. You are NOT ready to use these until you've indexed them!
Click "Index All"
Now, because we turned on Enhanced Accuracy, this is going to take longer than our previous modules. Don't panic!
What's happening behind the scenes:
Watch the progress. You'll start seeing files complete their indexing. Keep scrolling to monitor status.
Wait for completion: You should see 39 pages across 15 different documents indexed.
Refresh and verify: "Index Status: Completed" with a recent timestamp (if it says "5 weeks ago" when you come back in 5 weeks, we've got problems!).
We're NOT creating another assistant. We want to tie ALL this intelligence together!
Navigate back to the Answers catalog. With all those files and knowledge bases accumulating, use the filter to show "Assistants and Knowledge Bases only" so you can find what you need.
Open your "Field Training Assistant" (the one you created in Module 1 and enhanced in Module 2).
Click to add content, then select "Add a knowledge base".
Filter to your "Q Division Field Academy" space.
Select "Agent Expense Statements" - notice it's the only one NOT grayed out (the others are already connected).
Click to add it.
Boom. Your assistant is now ready. Everything is indexed. Your agent has three knowledge bases:
This is business, operatives!
Expand your assistant chat interface. I'm going to paste this question because you've seen my typing in other modules - it can be pretty bad:
"I reviewed the expense statement in the knowledge base for Case 103, and it seems suspicious to me that the agent purchased hard drives and a USB. Does that raise any red flags with you? Are they understandable?"
Before we see the AI's response, let me show you what I saw...
Scroll through the expense statements for Case 103. You'll find on November 16th:
I don't know why an agent who's out in the field getting wined and dined and meeting with clients is buying hard drives! That raises a red flag to me. If I were a human auditor reading this, that seems a little flaky!
Now watch what happens...
You already guessed it - yes, the Answer Agent is on the job right away:
The Knowledge Base Agent gets involved next:
I came in here assuming this agent was up to no good. I happened to read an expense statement. I think something's flaky. This is crazy!
But here's what the AI comes back with:
"The hard drive and USB are legitimate operational expenses. They don't raise any red flags. Let me give you the context: The case involved Operation False Precision, a data center investigation. These purchases are justified given that they were conducting forensic analysis of an ETL pipeline and code. All expenses comply with operational requirements."
WAIT, WHAT?!
When I jumped in to show you those suspicious expense line items, I didn't show you the mission notes at the top of the expense statement that documented the investigation activities and timeline!
Let me be crystal clear about what just happened, operatives:
This isn't just a search-and-find operation.
The other modules had super easy questions. I want you to understand the logic going on here - the collective wisdom of the world that's in that large language model that Qlik Answers is sitting on top of.
Here I am trying to do forensic accounting. I get the wisdom of the world saying:
"Whoa, whoa, whoa, 007 Dork! You've got binoculars on and you are FOCUSED on that expense line, and that is NOT what you need to see. You need to see the BIG PICTURE of what was going on!"
The AI was able to interpret both contexts through a knowledge graph, recognizing that these things are related:
It connected the dots across multiple pages and document sections.
Operatives, you gotta be loving that! If you're not ready to dig in even deeper, I don't know what's gonna get you excited about Qlik Answers!
What You've Accomplished:
The Big Lesson: As a young Dork, I found that my focus could be very narrow. I would see one piece of information and jump to conclusions. If there's one thing I've learned here in Q Division, it's that the real story usually involves the ability to see a much larger context - one that's larger than even my Dork brain can handle.
Asking Qlik Answers is my way of ensuring that all the elements are being accounted for, and that in conjunction with the collective wisdom of the world in that large language model, the answers to my questions make me look a whole lot smarter.
Your Mission (Should You Choose to Accept It): Ask ONE better question today.
Next Mission: Module 4 will introduce data connections to live Qlik applications, combining structured data WITH all this unstructured intelligence. The Q Division Operation Data application will finally be revealed!
Challenge Exercise (Optional): See how far we can push this concept. Ask your assistant the following question: "The agent from Case 1006 returned from the mission acting a little sus, and an English-Russian dictionary slipped out of his attaché case. Please conduct a comprehensive counter-terrorism review of the expenses for Case 1006, and please flag anything suspicious or out of scope for the mission, the target, or the environment."
Why Use Storage Connections Instead of Direct Upload?
How Context Understanding Actually Works
When I asked about the suspicious hard drive purchase, here's what happened technically:
Step 1: Query Processing
Step 2: Vector Search
Step 3: Knowledge Graph Assembly
Step 4: Collective Wisdom Application
Step 5: Response Generation
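The five steps above can be sketched conceptually in a few lines. This is a toy illustration only: word-overlap counts stand in for real vector embeddings, and the chunk texts are made up to echo this mission's scenario.

```python
# Conceptual sketch of retrieval over document chunks, using a
# bag-of-words "embedding" in place of real vectors (illustrative only).
from collections import Counter
import math

CHUNKS = [
    "Mission notes: Operation False Precision, forensic review of an ETL pipeline.",
    "Expense line: 4TB external hard drive and encrypted USB drive purchased.",
    "Per diem: client dinner, two covers, downtown venue.",
]

def embed(text):
    # Stand-in "embedding": word-frequency vector of the lowercased text.
    return Counter(text.lower().split())

def cosine(a, b):
    # Step 2: similarity between the query vector and a chunk vector.
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

query = embed("why did the agent buy a hard drive and usb drive")
# Step 3: assemble the best-matching chunks as context for the LLM.
ranked = sorted(CHUNKS, key=lambda c: cosine(query, embed(c)), reverse=True)
print(ranked[0])
```

In the real system the "collective wisdom" step then reasons over ALL the retrieved context - which is exactly how the mission notes reframed the hard-drive purchase as legitimate.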
This is NOT:
This IS:
The Difference Between Search and Intelligence
Traditional Search Would Return:
Qlik Answers Returns:
Real-World Application: This is the difference between:
Use Cases Where This Matters:
You've graduated from simple document retrieval to contextual intelligence that understands relationships across complex multi-page documents. You've seen firsthand how AI can provide the larger context that even a focused analyst might miss.
Your Field Training Assistant now has three knowledge bases working in harmony. It can answer questions about agents, application design, AND financial operations - all with citations and contextual understanding.
Remember: The most dangerous weapon in your arsenal isn't a golden gun - it's a golden question! And sometimes, the best answer is the one that shows you what you DIDN'T know to ask about.
Dork 007 Dork, signing off. Keep your chunking advanced and your context windows wide.
Questions? Feedback? What did the challenge question yield for you in terms of results, and which agent do we need to investigate if Qlik Answers confirmed anything suspicious? Use the feedback button or share your forensic accounting stories.
#QlikAnswers #QlikSense #DataAnalytics #BusinessIntelligence #AIAssistant #RAG #QlikDork #QDivision #007Dork #AdvancedChunking #EnhancedAccuracy #ForensicAccounting #ContextualAI #SemanticSearch #DocumentIntelligence #AIReasoning #KnowledgeGraph #AskBetterQuestions
The name's Dork. 007 Dork, and I have a license to question.
THE SITUATION: A customer has experienced turnover in their development department. There's yelling. Lots of yelling. "WHAT DO THESE VALUES MEAN?!" echoes through the halls. Dashboard fields are a mystery. Status codes are hieroglyphics. Chaos reigns.
YOUR MISSION: Add unstructured application design documentation to the Qlik catalog, create a new knowledge base from those existing files, and enhance your Field Training Assistant with this intelligence. Then query the original programming team's documentation to understand what those mysterious values actually mean.
DELIVERABLE: An enhanced assistant that can answer questions about both agent information AND application design documentation, demonstrating how knowledge bases can share files and grow in power over time.
PREREQUISITES: ⚠️ You must complete Module 1 first! You'll need your existing Field Training Assistant from that mission.
What You'll Need:
Download Mission Pack: 📥 ZIP file attached
Video Intelligence Briefing: 🎥 Watch the Full Mission Walkthrough
This time, we're taking a different approach. Instead of uploading files directly to a knowledge base, we're adding them to the Qlik catalog first. Why? Because catalog files can be shared across multiple knowledge bases and assistants - upload once, use everywhere!
Navigate to your Qlik hub and locate the file upload area.
⚠️ CRITICAL: At the bottom of the upload dialog, verify the space selector is set to "Q Division Field Academy" - do NOT let it default to your personal space!
Unzip your mission pack and drag and drop both PDFs into the upload area:
Watch as they load into your catalog. You now have 2 additional files in your Q Division Field Academy space.
Click to create a new knowledge base and name it: "Q Division Application Design"
This knowledge base will contain information about how your Q Division application (which you'll build in a future module) was originally designed by the programming team.
Here's where it gets interesting. In Module 1, we walked through browsing and uploading files directly. But now those files already exist in your catalog!
Instead of the "Upload files" option, choose "Use from catalog".
Filter to show files from "Q Division Field Academy" space.
Select both documents:
Click to add them to your knowledge base.
If you're wagging your finger at the screen right now saying "But 007 Dork, you forgot to index!" - EXCELLENT! You're learning!
Navigate to your new "Q Division Application Design" knowledge base and check the index status. It will show "Never been indexed."
Click "Index All" and wait for completion. Each document should index quickly.
Refresh and verify: "Index Status: Completed" ✓
Here's the power move. We're NOT creating a new assistant. We're making your existing assistant smarter by adding a second knowledge base to it.
Navigate to the Answers section and find your assistants.
Pro tip: If you're lost in a sea of documents, use the filter at the top to show "Assistants only" - you'll recognize them by their distinctive icons.
Open your "Field Training Assistant" that you created in Module 1.
You'll see it already has access to the "Agent Information" knowledge base (it's even grayed out to show it's already connected).
Now click to add another knowledge base. Filter to "Q Division Field Academy" space.
Select "Q Division Application Design" and add it.
Boom. Your assistant now has access to TWO knowledge bases. This is how assistants grow in power over time!
Time to test your enhanced intelligence network. Expand the assistant chat interface and ask:
"Can you help me understand how agent case status is tracked and what the values mean?"
Watch the reasoning panel (because we're operatives, not civilians):
You should receive information about the case status phases:
Pay special attention to this distinction: "Resolved" means the agent's work is done, but the case is NOT physically closed yet because we're waiting on client verification.
🎯 FIELD NOTE FOR FUTURE MISSIONS: Remember this distinction between Resolved and Closed! In an advanced training module, we're going to revisit this question when building applications. If you're not aware of this difference, your case number counts won't add up correctly, and you'll be scratching your head wondering why. Hint, hint, wink, wink!
Click on the citations to see exactly where in the documentation this information came from. Jump directly to the source documents if you want to read more context.
What You've Accomplished:
Validation Check: Can you ask your Field Training Assistant questions about BOTH agent information (Module 1) AND application design (Module 2)? If yes, your assistant is now multi-talented! 🎯
The Big Picture: You've just learned how to scale your Q Division intelligence network. As you find more sources of unstructured data - design docs, meeting transcripts, training videos, data dictionaries, programmer notes - you can add them as new knowledge bases. Your assistants grow smarter over time without starting from scratch.
Challenge Exercise (Optional): One of the documents now available to you in the assistant describes the application. Find out how much you can learn about the application data model and master items without reading the PDF. You will soon earn the privilege of accessing the Q Division operational data application.
Why Upload to Catalog vs. Direct to Knowledge Base?
Catalog Upload Benefits:
You've successfully enhanced your Field Training Assistant with application design intelligence. Over time, your assistants can grow in power as you find more sources of unstructured data and create additional knowledge bases.
Your assistant started with only the ability to answer questions about agents themselves. Now it also supports questions about application design from the original programming staff - an application that will be revealed to you soon, operatives. But let's face it: you have more training to do before you're entrusted with Q Division Operation Data.
Remember: The most dangerous weapon in your arsenal isn't a golden gun - it's a golden question! 🎯
Dork 007 Dork, signing off. Keep your documentation indexed and your queries semantic.
Questions? Feedback? In this AI powered world with meeting transcripts automatically generated, what do you think of this idea of a knowledge base that stores them? Have you had situations in the past where a developer had left and you really could have used access to their original documentation? 👎
Forget microwaved analytics. In these courses, you'll learn to build AI-assisted Qlik applications and dining experiences with the precision and care of a master chef. As Sous Chefs in Chef Qlik Dork's kitchen, you'll master all of the features that Qlik MCP offers:
⚙️ Data Products - Starting with trusted ingredients (metadata, quality, governance)
⚙️ Building Screens - Plating your creations for YOUR DINERS with story-driven design
⚙️ Building Code - Pushing down predictable, repeatable code to your Qlik ovens
⚙️ Asking Questions - Teaching your diners to become Chief Question Officers
⚙️ Paradigm Shift - Understanding the transformation from builder to orchestrator
As each culinary course is developed, it will appear below. In the meantime, this brief introductory video will help you understand what is coming when Qlik MCP Server functionality is released in your #SaaS environments on February 10, 2026.
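To make the kitchen setup concrete: MCP-capable clients typically register a server through a small JSON configuration entry. The shape below follows the common `mcpServers` convention; the server name, URL path, and auth header are placeholders of my own, not Qlik's published values, so check Qlik's MCP documentation once the feature reaches your tenant:

```json
{
  "mcpServers": {
    "qlik": {
      "url": "https://your-tenant.us.qlikcloud.com/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
```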
The goal of the courses here at the Cordon Green is to help your organization go from an ordinary agentic experience to one that is EXTRAordinary.
👨🍳 Course 100 - Learning to create your Secret Sauce
Sharing my Secret Sauce to get you started
👨🍳 Course 120 - Defining the Organizational Gold Standard
👨🍳 Course 125 - Helping an LLM understand your context with Smart Defaults
👨🍳 Course 130 - The role of a Chief Skills Officer
👨🍳 Course 200 - Moving from Questions to Conversations
👨🍳 Course 210 - Security and Filters
👨🍳 Course 220 - Context Wars - Google vs LLM vs Human
👨🍳 Course 230 - Metacognitive Analytics - Thinking about Thinking
👨🍳 Course 250 - Chief Question Officer
👨🍳 Course 300 - Discombobulated Sessions in Claude
👨🍳 Course 310 - Chaos to Clarity - Using Qlik MCP for Data Model Documentation
👨🍳 Course 401 - Create Qlik application from Snowflake OSI Semantic View
👨🍳 Course 501 - Create Qlik application from Qlik Data Product