If you are a Snowflake customer, you have probably seen the left side of this image frequently. Snowflake Intelligence is legit cool, and you've dreamed of ways for it to impact your business.
If you are a Qlik and Snowflake customer, you have probably seen the left side of this image frequently and thought, "Wow, I sure wish I could take advantage of Snowflake Intelligence within my Qlik environment to impact my business." Feel free to do your celebration dance, because this post is designed to walk you through how Qlik can work with Snowflake Cortex AISQL as well as Snowflake Cortex Agents (API).
Both series are designed in a three-part Show Me format. The first video for each will frame the value you can attain. The second video will help you drool as you begin imagining your business implementing the solution. Finally, I conclude each series with the hands-on build for those who get tapped on the shoulder to actually make the solution work.
In this comprehensive three-part series exploring the integration of Qlik and Snowflake Cortex AI-SQL, I guide viewers from executive vision to hands-on implementation, demonstrating how organizations can democratize AI capabilities across their entire analytics ecosystem without requiring data science expertise.
This series demonstrates how to combine Qlik's associative analytics engine with Snowflake's AI-powered semantic intelligence to transform natural language questions into interactive, fully contextualized insights.
Heck yeah we've got both covered.
Healthcare Synthetic Data Set -> Semantic View -> Build Qlik Sense Application through Claude and Qlik MCP - This demo begins by pulling the information out of a Semantic View for the shared Healthcare Synthetic data set. Huge tables. It constructs the code to load the tables into Qlik Sense, including concatenating the Patient and Encounters fact tables and creating concatenated keys for the dimensional tables. What about all of that wonderful metadata about the fields? Yeah, we pull that in as well, because governance is important. Then we build Master Dimensions for all of the fields from that metadata, including the sample values. Now data modelers/designers can see the data, and end users can see it all, so they know they can trust the answers and act on them. Chef Qlik Dork and Chef Claude were really cooking in the kitchen for this one.
PS - This was the beginning of the application. See Video 5 - Show Me How It Works above to see the final application and how it interacts with the Snowflake Cortex Agent API for the full end user experience of awesomeness. I'm talking about the results of questions being displayed as charts and tables, and users can see the SQL that was generated. The data returned is ingested into the Qlik Data Model, so users can then filter to the records returned and see all of the details to answer their next 10 questions. What if they ask about big data tables that aren't loaded into Qlik Sense? No problem, we go pull that data live.
As you know by now, MCP servers are essentially invisible. They perform super human, highly performant tasks, but they lack a visual host. While the Synthetic Healthcare video demonstration above used Claude as the user interface, now that Snowflake has officially released CoCo ... I mean Cortex Code ... I figured I had better ensure our joint partners could do their happy dance and take advantage of both of these leading edge power tools.
Previously I created a post called Creating your Secret Sauce, in which I described the process of creating and using #skills. Guess what? The same skill files that I created and shared for Claude can be imported and used directly by CoCo. You gotta be loving that.
The videos in these courses subtly demonstrate the power of using skills to enhance the prompts. My skill for Master Items ensures that their naming convention is user friendly. When I ask to create a sheet ... the skill transforms it into "let's create a story that is prepared with love" instead of microwaving random mystery charts onto a sheet just for speed.
🎥 Course 405 - Cortex Code generating Master Dimensions and Master Measures in Qlik Sense via Qlik MCP
🎥 Course 410 - Cortex Code generating a sheet inside of Qlik Sense via Qlik MCP
The Show Me How to Build It videos for both series will refer to other resources. I thought about making you crawl on your desktop and squint in order to see the URLs, and then making you hand type 50 characters from memory. Then I thought it's not going to be much fun for either of us, since I wouldn't actually see you doing it. Fortunately for you, I've included the needed resources below.
Calling Snowflake Cortex Agent API within Qlik Sense
Creating a REST API Connection for Snowflake Cortex Agent
Snowflake recently released what it calls Snowflake Intelligence. It's their User Interface that enables users to directly ask questions of data. Under the covers their interface is interacting with a new Snowflake Cortex Agent API.
Qlik is an official launch partner with Snowflake for this exciting technology, as we are able to call that Snowflake Cortex Agent API just like they do. That means you are able to present visuals to aid with insights, while at the same time allowing end users to ask questions and presenting the results that the Cortex Agent API returns.
The intention of this post is to help you understand the nuances of the Snowflake Cortex Agent API.
Calling the Agent API is super easy. You simply use the REST Connector provided to you in Qlik Sense. Refer to this post to help you create a REST connector to your Snowflake Cortex Agent, whether you are on Qlik Sense Enterprise on Windows or Qlik Talend Cloud Analytics. You will want to ensure you check the Allow "WITH CONNECTION" box so that you can change the body.
To get the REST connector to build a block of script for you, ensure that you set the Response type to CSV and set the Delimiter to be the Tab character.
Eventually you will modify your script to be something like the following, where you set the Body to be the question your user wants to ask rather than having it hardcoded. But who cares?
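If it helps to see the idea in script form, here is a minimal sketch. It assumes a REST connection named 'Snowflake Cortex Agent', a variable vUserQuestion that you populate elsewhere, and a deliberately simplified request payload; start from whatever the REST connector wizard generates for you and just add the WITH CONNECTION clause.

    // Minimal sketch -- connection name, variable name, and payload shape are all illustrative.
    // Keep the SELECT ... FROM CSV portion exactly as the wizard generated it (Tab delimiter);
    // the doubled quotes keep the JSON valid inside a Qlik string.
    LIB CONNECT TO 'Snowflake Cortex Agent';

    RestConnectorMasterTable:
    SQL SELECT
        "col_1"
    FROM CSV (header off, delimiter "	", quote """") "CSV_source"
    WITH CONNECTION (
        BODY "{""messages"":[{""role"":""user"",""content"":[{""type"":""text"",""text"":""$(vUserQuestion)""}]}]}"
    );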
There is nothing special here and nothing worth writing about that I haven't already covered in other posts. The reason for this post isn't about the connection itself... it's about what the Snowflake Cortex Agent API returns.
Rather than returning a single response, it actually streams a series of events. Notice in the image above, which loads data from the connection, what the results look like. It literally returns the entire "stream of consciousness," if you will, as it is working. Everything it does.
It would be an exercise in futility if I simply talked my way through how to handle a Streaming API in general, and especially how to handle this event stream from Snowflake Cortex Agent. So, while I won't be walking you through all elements of the connection, or how I build the body based on what the user asks as a question ... I do want you to be able to be hands on. The following image illustrates how I used my connection (that does work) to get the event stream and store it into the QVD file that is attached to this post.
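In rough script terms, the store step amounts to something like this (a minimal sketch; the table, field, and file names are illustrative and assume the wizard-generated load shown earlier):

    // Rename the single event-stream column and persist it for reuse
    EventStream:
    LOAD
        "col_1" AS EventStreamRow,
        RecNo() AS RowNumber
    RESIDENT RestConnectorMasterTable;

    DROP TABLE RestConnectorMasterTable;

    // Attached files in Qlik Cloud live under lib://AttachedFiles (path is illustrative)
    STORE EventStream INTO [lib://AttachedFiles/CortexAgentEventStream.qvd] (qvd);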
You will need to:
If you did all of these steps correctly you should be told that 487 rows of data were loaded.
Go to the Event Stream sheet and see all of those wonderful 487 rows that were returned when I called the Snowflake Cortex Agent with the question that I passed it.
Be sure to scroll through all of the rows to really appreciate how much information is returned. When you get to the bottom, there are 2 rows that I really want you to focus on. You see, all of the other events are simply appetizers for the main course we will focus on for the remainder of this post. They are merely events that let you know things are happening, and then the stream says "Hey, wake up now, here is my official response" in rows 482 and 483.
Now what you need to do is select row 483 so that the text box on the right will show you the full value that is returned for the response event.
I'm not going to lie ... the first time I saw that I was a little bit intimidated. It sure seemed to me like the wild west of JSON data. In fact ... I ended up writing a series of posts I called Taming the Wild JSON Data Frontier just to document the process I had to go through in parsing that beast. Be sure you read each of the posts in that series so that you have the chops as a data sheriff to deal with this incredible structure.
One thing you should know, if you don't already, is that JSON can be very compact, like you see in the response. That is great for exchanging/storing all of the data, but it is really, really hard to understand. I highly recommend you take advantage of any online JSON formatters that you can find. I use jsonformatter.org. You simply hand it the compact JSON structure and ask it to format/beautify it ... and voila, it becomes much more human readable.
{ I have attached the output in a text file that you can download and view for the remainder of this article if you don't want to take the time right now to actually copy and then beautify the response. }
But I digress. The important part is that you now know that the RESPONSE event is the one you care about, and that the DATA associated with the RESPONSE has a massive JSON structure that contains all of the information we need to present the response back to the user. So, let's dig in.
Go ahead and return to the load script editor and move the section named "Get the RESPONSE event data" up above the Exit script section so that it can actually be processed.
Before seeing the code you may have thought "There is no way I'm going to be able to magically figure out how to identify the data for the response event." But as usual, Qlik Sense provides some very easy transformations. Logically we only want to pull a row from the entire event stream if the row before it is "event: response", and that's exactly what we ask for by using the Previous() function. We don't care at all about the part of the column that has the phrase "data: " in it, so we simply throw it away.
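Here is a minimal sketch of that logic, assuming the event stream sits in a table called EventStream with a single field EventStreamRow (the names are illustrative, not necessarily what the attached script uses):

    JSON:
    LOAD
        Mid(EventStreamRow, 7) AS ResponseData   // drop the leading "data: " prefix
    RESIDENT EventStream
    WHERE Previous(EventStreamRow) = 'event: response';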
Go ahead and reload the data now that this section will run. When it completes, check the preview window and sure enough ... we have exactly what we want in our JSON table.
If you look at the prettified view of the response data you will see that at the highest level it contains a field called content that is an array.
If you scroll all the way through the pretty content you will see that it's actually an array of heterogeneous, or mixed type, objects. Meaning some of the array elements are thinking, some are tool_use, some are tool_result. And to make it worse, the tool_result elements aren't even the same.
If that sounds nasty ... don't let it bother you. Again, the entire reason for that series of posts I've already written was to help walk you through all of the types of JSON data that will need to be parsed. To understand the next part of the code, be sure to read the following post, as well as the posts it points you to:
Parsing: Heterogeneous (Mixed) JSON Arrays
Go back to the load script and move the "Mark the Content we care about" section above the Exit script section and reload the data.
Before I discuss the code, go ahead and preview the Content table to ensure you have the 9 different Content values that were in the array. One of the tool_use rows will have Question_Record marked as Yes, and the tool_result record will have Results_Record marked as Yes.
Logically we do a two-part load. The first iterates through all of the elements in the content array and pulls out just that element's content. The preceding load that takes place simply uses an Index function to know if the word "charts" is contained in the record and marks a flag accordingly. If we parse a nested set of JSON values from the record and find the question value, then we set that flag accordingly. If you haven't already read the posts I've been begging you to read ... then stop and read them now. That's an order. 😉
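If you want to see the shape of that two-part load, here is a minimal sketch. The JSON pointer paths and the flag logic are illustrative assumptions; confirm them against your own beautified response (and the attached script) before trusting them.

    Content:
    // Preceding load: flag the rows we care about
    LOAD
        *,
        If(Index(ContentElement, 'charts') > 0, 'Yes', 'No') AS Results_Record,
        If(Len(JsonGet(ContentElement, '/tool_use/input/query')) > 0, 'Yes', 'No') AS Question_Record   // illustrative path
    ;
    // Inner load: iterate the content array and pull out each element
    LOAD
        IterNo() - 1 AS ContentIndex,
        JsonGet(ResponseData, '/content/' & (IterNo() - 1)) AS ContentElement
    RESIDENT JSON
    WHILE Len(JsonGet(ResponseData, '/content/' & (IterNo() - 1))) > 0;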
The logical question you might have at this point, since I left you hanging, is why in the world I focused on flagging those particular rows of the content array. To understand, imagine the end user asking a question of a magical black box that mysteriously just goes off and returns an answer. You've probably heard me say more than once "you can't act on data that you don't trust."
To that end, the Snowflake Cortex Agent API will return the question, as it got rephrased by its generative AI, and it also sends the resulting SQL that was generated. We just have to look for them in the pretty version of the response. Suddenly the mysterious black box becomes more transparent. Which is exactly what I want to do ... report to the end user as well as audit the question and the SQL.
The results flag is set because the Snowflake Cortex Agent API literally hands us the information we need to create a chart with the results. I'm not kidding. It literally gives us the title for the chart as well as the dimension and measure fields for the chart, then it gives us the values for them. You gotta love that.
Now that you understand what is returned and why I flagged it, let's look at how we pull all of that wonderful information out of what initially seemed like an undecipherable JSON mess. Go back to the load script editor and move the "Get the Question and the Results" section above the "Exit script" section and then reload the data.
We start building the Response table by simply reading the row in the Content table that has the Question_Record flag set to Yes. Getting the Question and the SQL statement to share to the end user is simply a matter of reading the nested JSON path for their values.
Then we need to add a few columns to the Response table, which we will get by reading the row in the Content table that has the Results_Record flag set to Yes. Again pulling the information we want is simply a matter of reading the nested JSON path for those values.
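Put together, the two reads look roughly like this (a minimal sketch; as before, the JSON pointer paths are illustrative placeholders for the ones you find in your own beautified response):

    Response:
    LOAD
        JsonGet(ContentElement, '/tool_use/input/query') AS Question_Asked,      // illustrative path
        JsonGet(ContentElement, '/tool_use/input/sql')   AS SQL_Statement        // illustrative path
    RESIDENT Content
    WHERE Question_Record = 'Yes';

    // Both reads return exactly one row, so this join simply widens the Response table
    JOIN (Response)
    LOAD
        JsonGet(ContentElement, '/tool_results/json/title')     AS Chart_Title,    // illustrative path
        JsonGet(ContentElement, '/tool_results/json/dimension') AS DimensionField, // illustrative path
        JsonGet(ContentElement, '/tool_results/json/measure')   AS MeasureField,   // illustrative path
        JsonGet(ContentElement, '/tool_results/json/values')    AS ValuesArray     // illustrative path
    RESIDENT Content
    WHERE Results_Record = 'Yes';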
Now that you have reloaded the data to run this, and understand it ... it is time to check out the Preview panel for the Response table. We almost have exactly what we need to present to the end user. We have the Question_Asked, the SQL that was used within the Snowflake Cortex Agent, and we know the Dimension and Measure field names. Finally we have a JSON array of the values.
Now that I know you have read the posts I mentioned, I should be more precise: "Finally we have a JSON Array of Homogeneous Objects." Which is covered in the Parsing: JSON Array (Homogeneous Objects) post.
Go back to the load script editor and simply drag the "Parse out the Values" section above the "Exit script" section and reload the data.
The first thing we need to do is pull the DimensionField and MeasureField names into variables that we can refer to. All we need to do is use the Peek() function.
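For example (a minimal sketch using the illustrative field and table names from above):

    LET vDimensionField = Peek('DimensionField', 0, 'Response');
    LET vMeasureField   = Peek('MeasureField', 0, 'Response');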
As you are familiar by now parsing a nested JSON structure is a simple matter of using the JsonGet() function with a pattern of:
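(roughly -- the exact call is in the attached script, but the shape is a JSON pointer path passed as the second argument)

    JsonGet(FieldContainingJSON, '/path/to/field name')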
Which is straightforward when you know the "field name." In the case of pulling the values out that we will need to present to the end user we don't know what they are. The very nature of what we are doing is asking the Snowflake Cortex Agent a question that the end user will give to us. It will then magically process that question and respond to us. Which is why we needed to extract the field names to variables. Now we simply iterate the array and parse the values by passing the variables.
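Here is a minimal sketch of that iteration, continuing the illustrative names from above and assuming each element of the values array is an object keyed by the dimension and measure field names the agent returned:

    Values:
    LOAD
        JsonGet(ValueElement, '/$(vDimensionField)') AS [$(vDimensionField)],
        JsonGet(ValueElement, '/$(vMeasureField)')   AS [$(vMeasureField)]
    ;
    LOAD
        JsonGet(ValuesArray, '/' & (IterNo() - 1)) AS ValueElement
    RESIDENT Response
    WHILE Len(JsonGet(ValuesArray, '/' & (IterNo() - 1))) > 0;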
Now let's check the work by looking at the preview for the Values table we just created. Unless I'm missing something you just rocked the world by converting an event stream of JSON structures into a structured table that is now contained in memory.
How cool would it be if ... Never mind that's crazy!
But it would be cool if we could ... It would be really hard.
Maybe we could, so let's talk about it.
Since we do have this data in memory now, it would be so cool if we could visualize it on the screen for the end user. Right? Forget my event stream of consciousness in getting here ... it's not really that hard. Go ahead and go to the "View the Results" sheet and you will see something magical.
Go ahead and edit the sheet so that you can see how I created that bar chart. Check it out: I simply used those two variables that we created and did that hocus-pocus dollar sign expansion magic on them. You gotta love that.
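In case you want to recreate it, the chart definition boils down to something like this (a sketch; whether you wrap the measure in Sum() or Only() depends on how the values come back from the agent):

    Dimension:  [$(vDimensionField)]
    Measure:    Sum([$(vMeasureField)])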
Of course I am going to load data into Qlik Sense so that business users can gain insights at the speed of thought. But let's face it ... I'm lazy and didn't talk to every user about everything they would ever want to know about their data. As a result I didn't build every conceivable chart to show them the answers.
Invoking the Snowflake Cortex Agent lets the users ask questions. Questions that we might not have a chart for yet. Questions that might involve scenarios beyond the user's data literacy or training level to get at.
Oh sure, I had fun doing all of the techno mumbo jumbo and sharing that with you. But by invoking it right inside a Qlik Sense dashboard I've now given business users the best of both worlds. Not only can we present their answer to them in a chart; because the values are in memory, they are associated with all of the other data. That means business users can interact with the values, all of the other visuals will respond, and naturally we can take advantage of that green/grey/white experience. They get to see the aggregated answer they were looking for, while also immediately seeing all of the other details they may need to follow up on. Like details of the visits, provider names, etc.
This article shows you how to configure two Talend Data Catalog bridges to harvest metadata from Snowflake and trace data lineage. It covers the Snowflake Database (via JDBC) and Snowflake Database SQL DML (DI/ETL) Script (SnowSQL) Talend Data Catalog bridges.
Configure the import setup parameters for harvesting metadata from a Snowflake database in Talend Data Catalog:
Define the following parameters on the Import Setup tab: Host, Warehouse, Databases, and the User and Password for authentication.
Select the schemas to be harvested in the Schema field. This example uses the TDC_TEST, TESTSCHEMA1, and TESTSCHEMA2 schemas.
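For example, the import setup might end up looking something like this (the host, warehouse, and database values below are purely illustrative; the schema names are the ones used in this article):

    Host:       <account>.snowflakecomputing.com
    Warehouse:  COMPUTE_WH
    Databases:  TDC_DB
    Schema:     TDC_TEST; TESTSCHEMA1; TESTSCHEMA2
    User / Password:  a Snowflake account with read access to those schemas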
Go to the Overview tab to view an overview of the objects harvested from the Snowflake database.
This example SnowSQL script contains a create view Data Manipulation Language (DML) statement:
create view IF NOT EXISTS testschema2.v_employees ( employee_id, employee_name ) as
    select
        rvalue:employee_id::number,
        rvalue:employee_name::string
    from testschema1.employees;
You will harvest metadata from this example SnowSQL script in the steps below.
To configure the parameters for the Snowflake Database SQL DML (DI/ETL) Script (SnowSQL) bridge in Talend Data Catalog:
Define the Directory where the SnowSQL file is located on the Import Setup tab.
Add any necessary Include and Exclude filters to select the files to be harvested.
Go to the Overview tab to view the imported SnowSQL script.
Notice that after harvesting metadata (database objects and SnowSQL) using both bridges and building the configuration, the Architecture Diagram displays a bi-directional arrow, because the same Snowflake database model is the source and destination connection to the database.
Go to the Data Flow tab to view the data flow diagram for the SnowSQL script. Referring to the SnowSQL script, Talend Data Catalog creates a view on one schema (TESTSCHEMA2) with a select statement on another schema (TESTSCHEMA1).
You can also view the data flow diagram for the database object (schema level), which displays the three schemas.
Finally, you can display the data lineage for a column (EMPLOYEE_NAME).