This article will walk you through how to create a declarative agent for Copilot that uses an MCP plugin to connect Microsoft 365 Copilot with Qlik's MCP Server. Please note, there are several prerequisites you must meet to successfully execute the steps in this guide.
Note: at the time of writing, using plugins to connect to an MCP Server is in public preview.
Once you complete these steps, Agents Toolkit generates the required files for the agent and opens a new Visual Studio Code window with the agent project loaded.
http://127.0.0.1:33418, http://127.0.0.1:33418/, and https://vscode.dev/redirect are redirect URLs used by VS Code for development and testing. https://teams.microsoft.com/api/platform/v1.0/oAuthRedirect will be the redirect URL once the plugin is provisioned and deployed.
Please follow Microsoft’s guidance on Publishing agents for Microsoft 365 Copilot.
If you are a Qlik and Snowflake customer, you have probably seen the left side of this image frequently. Snowflake Intelligence is legit cool, and you've probably thought, "Wow, I sure wish I could take advantage of Snowflake Intelligence within my Qlik environment to impact my business." Feel free to do your celebration dance, because this post is designed to walk you through how Qlik can work with Snowflake Cortex AISQL as well as Snowflake Cortex Agents (API).
Both series are designed in a three-part Show Me format. The first video for each frames the value you can attain. The second video will help you drool as you begin imagining your business implementing the solution. Finally, I conclude each series for those who get tapped on the shoulder to actually make the solutions work.
In this comprehensive three-part series exploring the integration of Qlik and Snowflake Cortex AI-SQL, I guide viewers from executive vision to hands-on implementation, demonstrating how organizations can democratize AI capabilities across their entire analytics ecosystem without requiring data science expertise.
This series demonstrates how to combine Qlik's associative analytics engine with Snowflake's AI-powered semantic intelligence to transform natural language questions into interactive, fully contextualized insights.
Heck yeah we've got both covered.
Healthcare Synthetic Data Set -> Semantic View -> Build Qlik Sense Application through Claude and Qlik MCP - This demo begins by pulling the information out of a Semantic View for the shared Healthcare Synthetic data set. Huge tables. It constructs the code to load the tables into Qlik Sense, including concatenating the Patient and Encounters fact tables and creating concatenated keys for the dimensional tables. What about all of that wonderful metadata about the fields? Yeah, we pull that in as well, because governance is important. Then we build Master Dimensions for all of the fields using that metadata, including the sample values. Now data modelers/designers can see the data, and end users can see it all, so they know they can trust the answers and act on them. Chef Qlik Dork and Chef Claude were really cooking in the kitchen for this one.
PS - This was the beginning of the application. See Video 5 - Show Me How It Works above to see the final application and how it interacts with the Snowflake Cortex Agent API for the full end-user experience of awesomeness. I'm talking about the results of questions being displayed as charts and tables, and users can see the SQL that was generated. The data returned is ingested into the Qlik Data Model, so users can then filter to the records returned and see all of the details to answer their next 10 questions. What if they ask about big data tables that aren't loaded into Qlik Sense? No problem, we go pull that data live.
As you know by now, MCP servers are essentially invisible. They perform superhuman, highly performant tasks, but they lack a visual host. The Synthetic Healthcare video demonstration above used Claude as the user interface, but now that Snowflake has officially released Coco (I mean Cortex Code), I figured I'd better ensure our joint partners could do their happy dance and take advantage of both of these leading edge power tools.
Previously I created a post called Creating your Secret Sauce, in which I described the process of creating and using #skills. Guess what? The same skill files that I created and shared for Claude can be imported and used directly by CoCo. You gotta be loving that.
The videos in these courses subtly demonstrate the power of using skills to enhance the prompts. My skill for Master Items ensures that their naming convention is user friendly. When I ask to create a sheet ... the skill transforms it into "let's create a story that is prepared with love" instead of microwaving random mystery charts onto a sheet just for speed.
🎥 Course 405 - Cortex Code generating Master Dimensions and Master Measures in Qlik Sense via Qlik MCP
🎥 Course 410 - Cortex Code generating a sheet inside of Qlik Sense via Qlik MCP
The Show Me How to Build It videos for both series refer to other resources. I thought about making you crawl up on your desk and squint in order to see the URLs, and then making you hand type 50 characters from memory. Then I thought it wasn't going to be much fun for either of us, since I wouldn't actually see you doing it. Fortunately for you, I've included the needed resources below.
Calling Snowflake Cortex Agent API within Qlik Sense
Creating a REST API Connection for Snowflake Cortex Agent
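For those who like to see the shape of a call before diving into the resources above, here is a minimal Python sketch of a Cortex Agents run request. The endpoint path and payload shape reflect the public-preview API at the time of writing, and the model name and question are placeholders of my own; inside Qlik Sense the same call is made through the REST connector, so verify the details against Snowflake's current documentation.

```python
import json

# Appended to your account URL, e.g. https://<account>.snowflakecomputing.com
# (path per the public-preview docs at the time of writing -- verify it).
AGENT_RUN_PATH = "/api/v2/cortex/agent:run"

def build_agent_request(question: str, model: str = "llama3.1-70b") -> dict:
    """Return the JSON body that asks the agent a natural-language question."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [{"type": "text", "text": question}],
            }
        ],
    }

# Hypothetical question for illustration only.
body = build_agent_request("Which service lines had the longest length of stay?")
print(json.dumps(body, indent=2))
```

The request would then be POSTed with a Bearer token to your account URL plus `AGENT_RUN_PATH`; the response streams back as server-sent events, which is what the Qlik REST connector resource above walks through parsing.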
"Let's make it simple" - one recipe at a time
Just like real chefs each of you has your own secret ingredients that make your Qlik work delicious and that people can recognize. Your secret sauce goes beyond just throwing random objects on the screen. It goes beyond just slapping a Select * of tables into your load script and data model. It goes beyond making up new expressions in charts.
But your secret sauce takes time to prepare.
I know mine sure does.
Because it's manual. If I want things to be a certain way, or look a certain way I have to spend the time. This post and the video are to encourage you that when you enter the Claude chat sessions, you don't have to go alone.
You can predefine your secret sauce so that it's always at the ready: taking the great meal you have in your head, the one Chef Claude helps you prepare, and having your secret sauce added to it.
Your secret sauce keeps Claude from doing exactly (and only) what you just asked because you were in a hurry.
Your secret sauce provides the boundaries in which Claude will work, and ensures that what you generate will follow your approved standard.
While you don't want Claude to follow a "just sling it on the screen" methodology, you also don't want to have to do this each and every time:
Like any new person who might join your team ... you want Claude to follow your 99 explicit, gold-standard guidelines without you having to type them in. That means taking the time to teach him the skills needed to ensure your standard is followed. Just as you would teach Fred, Sally, Suzie or Bob.
Each of the following skills represents a critical ingredient in the art of making complex analytics deliciously simple, no matter how much of a rush you are in.
Building reusable, governed analytics components
This skill governs how master dimensions and master measures are created, documented, and maintained in Qlik applications. It establishes a governance framework that treats master items as reusable, governed analytics building blocks that must be thoroughly documented with descriptions, tags, business context, and calculation logic. The skill defines when to create master items versus ad-hoc fields, emphasizes rich metadata that helps users understand what they're using, and establishes naming conventions that make items discoverable. It covers expression patterns for measures including proper aggregation contexts, handles dimension creation with drill-down hierarchies, and ensures that master items follow the same field naming standards as the load script. The skill transforms master items from simple field lists into a governed analytics vocabulary that enforces consistency across all sheets and visualizations while making it easier for users to self-serve.
🎯 Chef's Philosophy: Master items are your mise en place - prepare once, use everywhere. Good governance starts with well-documented, consistently named building blocks that anyone can understand and reuse. 📊
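To make "well documented" concrete, here is a small Python sketch of the kind of payload the skill insists on: a master measure that refuses to exist without a description and tags. The property names follow the Qlik Engine API's generic measure shape, but the helper, field name, and metadata values are my own illustrations, not contents of the skill file.

```python
def make_master_measure(name, expression, description, tags):
    """Bundle a master measure with the metadata the governance skill requires.
    A real implementation would send this to Qlik via the Engine API or MCP
    tooling; here we only validate and shape the payload."""
    if not description:
        raise ValueError(f"Master item '{name}' must carry a description")
    if not tags:
        raise ValueError(f"Master item '{name}' must carry at least one tag")
    return {
        "qInfo": {"qType": "measure"},
        "qMeasure": {"qLabel": name, "qDef": expression},
        "qMetaDef": {"title": name, "description": description, "tags": tags},
    }

# Illustrative example -- field name follows the load-script prefix convention.
item = make_master_measure(
    name="Avg Length of Stay",
    expression="Avg([fct_adm_length_of_stay])",
    description="Average inpatient length of stay in days, per admission.",
    tags=["Clinical", "Governed"],
)
```

The point is the guard clauses: an undocumented master item never gets created, which is exactly the behavior the skill enforces on Claude.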
Audience-driven dashboard design methodology
This skill implements Qlik Dork's audience-driven workflow methodology for building Qlik sheets and dashboards. It starts by identifying the audience type (Financial, Clinical, Operations, or other domain-specific roles) and transforms metrics to match that audience's motivation and mental model. The skill follows a structured workflow: audience identification → metric transformation → context parameter collection → template selection → sheet building using a Story→Data→Visuals approach. It emphasizes that different audiences need the same data presented differently based on their decision-making context and priorities. The skill includes template selectors for common use cases, design patterns for effective visualizations, and ensures that dashboards tell a clear story rather than just dumping data on the screen. It transforms sheet creation from "what charts should I add?" into a strategic design process that starts with understanding who needs to make what decisions and works backward from there.
Standardized data loading patterns
This skill establishes the foundational rules for generating Qlik load scripts that connect to Snowflake and transform data correctly. It mandates a critical "stop and ask first" workflow - you must gather information about the audience, data grain, required fields, and business context before writing any code. The skill defines specific syntax patterns including the Snowflake connection format using LIB CONNECT, the preceding LOAD pattern for transformations, and strict field naming conventions using table prefixes (like fct_adm_admission_id). It covers date handling standards using Floor() for clean date fields, calendar key creation as integers, and proper table aliasing with square brackets. The skill also includes YAML-based code generation patterns, validation workflows using qlik_create_data_object to verify field existence, and emphasizes the "one wrong decimal = lost trust" philosophy where accuracy always trumps speed.
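As a taste of the code-generation side, here is a Python sketch that emits a Qlik preceding-LOAD fragment following those conventions: table-prefixed field names, Floor() on dates, and integer calendar keys. The table, columns, and helper are invented for illustration; in practice the skill drives Claude's generation rather than a script like this, and the LIB CONNECT statement would precede the block.

```python
def render_load(table: str, prefix: str, fields: dict) -> str:
    """Emit a Qlik preceding-LOAD block following the skill's conventions.
    fields maps source column -> kind ('date' or 'plain'); dates get a clean
    Floor()ed date field plus an integer calendar key."""
    lines = []
    for col, kind in fields.items():
        target = f"{prefix}_{col.lower()}"
        if kind == "date":
            lines.append(f"    Date(Floor([{col}])) as [{target}],")
            lines.append(f"    Num(Floor([{col}])) as [{target}_key],")
        else:
            lines.append(f"    [{col}] as [{target}],")
    lines[-1] = lines[-1].rstrip(",")  # no trailing comma on the last field
    body = "\n".join(lines)
    return f"[{table}]:\nLOAD\n{body}\n;\nSELECT * FROM {table.upper()};"

# Hypothetical columns, prefixed per the fct_adm_* convention:
script = render_load(
    "Admissions", "fct_adm",
    {"ADMISSION_ID": "plain", "ADMIT_DATE": "date"},
)
print(script)
```

Note that nothing here gathers the audience, grain, or business context; per the skill, that "stop and ask first" conversation happens before a single line like this is generated.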
Quality control before deployment
This skill provides a secondary validation process to verify calculations are correct before declaring work complete. It acts as a quality control checkpoint that prevents common mistakes like Sum() versus Count() errors from reaching end users. The skill defines specific validation workflows to check measure calculations, dimension values, filter logic, and data model relationships. It establishes a systematic review process that catches errors before they erode trust, reinforcing the "one wrong decimal = lost trust" philosophy. The skill triggers after creating any calculated measures, KPIs, or complex expressions, serving as the final quality gate before presenting work to users. It's essentially a "trust but verify" framework that ensures analytical accuracy through structured verification steps rather than hoping you got it right the first time.
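The heart of that "trust but verify" gate can be sketched in a few lines: recompute the KPI independently from the raw rows and compare it with what was reported. The helper and data below are illustrative, not part of the skill file.

```python
def verify_kpi(reported: float, rows: list, compute, tolerance: float = 1e-9) -> bool:
    """Recompute a KPI independently from raw rows and compare it with the
    reported value -- the quality gate before declaring work complete.
    compute is the intended aggregation over the rows."""
    expected = compute(rows)
    return abs(expected - reported) <= tolerance

rows = [120.0, 80.0, 200.0]

# The classic Sum() vs Count() mix-up fails the gate:
count_mistake = verify_kpi(reported=len(rows), rows=rows, compute=sum)  # False
correct_sum = verify_kpi(reported=sum(rows), rows=rows, compute=sum)    # True
```

One wrong decimal = lost trust, so the gate only passes when the independent recomputation agrees.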
Critical thinking framework as Chief Question Officer
This skill establishes rules for how to answer data analysis questions in a way that promotes critical thinking and data literacy. It requires transparency about assumptions, defaults, and data interpretation choices rather than just providing answers. The skill mandates explaining the "why" behind analytical decisions - why certain filters were applied, why specific aggregations were chosen, why particular time periods were used. It transforms simple question-answering into an educational process where users learn to think more critically about their own data queries. The skill prevents the "black box" problem where users get answers without understanding the logic behind them, and instead builds their analytical capabilities by making the reasoning transparent. It's designed to teach users to ask better questions rather than just accepting whatever answer comes back.
Professional branding and delivery standards
A simple, straightforward skill that ensures Claude uses official Qlik brand colors when creating PowerPoint presentations. This skill provides the exact RGB and hex values for all six Qlik brand colors (Green, Blue, Aqua, Blended Green, Fuchsia, and Deep Purple) along with ready-to-use Python code snippets for python-pptx implementation.
Perfect for anyone who needs to create Qlik-branded presentations and wants consistent, accurate color usage every time. Just upload this skill to Claude, and it will automatically reference these colors when building your decks.
What's included:
No fluff, no complicated guidelines - just the colors you need to stay on-brand.
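The python-pptx pattern the skill packages looks roughly like this. The hex values below are placeholders of mine, not the official Qlik brand values (those live in the skill file), and the helper names are illustrative.

```python
# PLACEHOLDER hex values -- substitute the official values from the skill file.
QLIK_BRAND = {
    "Green": "008800",
    "Blue": "004488",
    "Aqua": "00BBAA",
    "Blended Green": "66AA44",
    "Fuchsia": "CC0077",
    "Deep Purple": "330066",
}

def hex_to_rgb(hex_str: str) -> tuple:
    """Convert 'RRGGBB' into an (R, G, B) integer tuple."""
    return tuple(int(hex_str[i:i + 2], 16) for i in range(0, 6, 2))

def fill_shape_with_brand(shape, color_name: str) -> None:
    """Apply a brand color to a python-pptx shape's solid fill."""
    from pptx.dml.color import RGBColor  # lazy import; requires python-pptx
    r, g, b = hex_to_rgb(QLIK_BRAND[color_name])
    shape.fill.solid()
    shape.fill.fore_color.rgb = RGBColor(r, g, b)
```

With the skill uploaded, Claude applies this pattern automatically whenever it builds a deck, so every slide stays on-brand without you retyping the colors.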
In this video I demonstrate how these skills turn the bland into the sublime each and every time. Not only does the agentic nature of Claude working with Qlik MCP save you time; as you will see, it can also ensure that your gold standard is followed every single time.
Even though we both know there are occasions you don't follow your rules yourself due to time.