
Calling Snowflake's Cortex Agent API


Last Update: Nov 5, 2025 7:01:56 AM
Updated By: Dalton_Ruer
Created date: Nov 5, 2025 6:49:56 AM

Attachments

JsonBlogImage.png

 

Snowflake recently released what it calls Snowflake Intelligence, a user interface that enables users to directly ask questions of their data. Under the covers, that interface interacts with the new Snowflake Cortex Agent API.

Qlik is an official launch partner with Snowflake for this exciting technology, as we are able to call the Snowflake Cortex Agent API just like they do. That means you can present visuals to aid with insights while, at the same time, allowing end users to ask questions and then presenting the results that the Cortex Agent API returns.

The intention of this post is to help you understand the nuances of the Snowflake Cortex Agent API.

Calling the Snowflake Cortex Agent API

Calling the Agent API is super easy. You simply use the REST Connector provided in Qlik Sense, whether that's Qlik Sense Enterprise on Windows or Qlik Talend Cloud Analytics. You will want to ensure you check the Allow "WITH CONNECTION" box so that you can change the body.
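For context, here is a minimal Python sketch of the kind of request body you end up sending through that connection. This is illustrative only, not the Qlik script the post uses, and the message shape (messages / role / content) is an assumption modeled on Snowflake's Cortex Agents REST documentation, so verify the field names against the current spec:

```python
import json

def build_agent_request(question: str) -> dict:
    """Build a request body carrying a single user question.

    The field names below are assumptions based on Snowflake's
    Cortex Agents REST docs -- check the current spec before use.
    """
    return {
        "messages": [
            {
                "role": "user",
                "content": [{"type": "text", "text": question}],
            }
        ]
    }

body = build_agent_request("Total visits by provider this year?")
print(json.dumps(body))
```

In the Qlik script, this is exactly the part you swap in via WITH CONNECTION so each user question replaces the hardcoded body.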

Dalton_Ruer_0-1762281450640.png

To get the REST connector to build a block of script for you, ensure that you set the Response type to CSV and set the Delimiter to the Tab character.

Dalton_Ruer_1-1762281731854.png

Eventually you will modify your script to something like the following, where you set the Body to the question your user wants to ask rather than having it hardcoded. But who cares?

There is nothing special here and nothing worth writing about that I haven't already covered in other posts. The reason for this post isn't the connection itself ... it's what the Snowflake Cortex Agent API returns.

Rather than returning a single response, it actually streams a series of events. Notice in the image above what the results look like when you load data from the connection. It literally returns its entire "stream of consciousness," if you will, as it is working. Everything it does.
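The stream alternates `event:` lines and `data:` lines, in the style of server-sent events. A small Python sketch (illustrative only; the post itself handles this in Qlik load script) shows how those lines pair up:

```python
def pair_events(lines):
    """Pair each 'event: <name>' line with the 'data: ...' line
    that follows it, mirroring the alternating structure of the
    Cortex Agent event stream."""
    pairs = []
    current_event = None
    for line in lines:
        if line.startswith("event: "):
            current_event = line[len("event: "):]
        elif line.startswith("data: ") and current_event:
            pairs.append((current_event, line[len("data: "):]))
    return pairs

# A tiny made-up stream with the same shape as the real one.
stream = [
    "event: message.delta",
    'data: {"text": "thinking..."}',
    "event: response",
    'data: {"content": []}',
]
print(pair_events(stream))
```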

Handling a Streaming API

It would be an exercise in futility if I simply talked my way through how to handle a streaming API in general, and especially how to handle this event stream from the Snowflake Cortex Agent. So, while I won't be walking you through every element of the connection, or how I build the body based on the question the user asks ... I do want you to be able to be hands-on. The following image illustrates how I used my (working) connection to get the event stream and store it into the QVD file that is attached to this post.

Dalton_Ruer_0-1762287801626.png

You will need to:

  • Download the CortexAgentStream.qvd file
  • Download the Cortex Agent Event Stream.qvf application
  • Upload the application into your Qlik Sense environment
  • Open the application and go to the load script editor
  • Choose the space you want to use to store the QVD file, then open the Data files area and drag/drop the CortexAgentStream.qvd so that it is in your environment
  • Go to the "Load the entire event stream" section and change the library path to where you stored the CortexAgentStream.qvd file
  • Go ahead and press Load data. Don't worry!

Dalton_Ruer_1-1762288368104.png

If you did all of these steps correctly you should be told that 487 rows of data were loaded. 

Dalton_Ruer_2-1762288588480.png

Go to the Event Stream sheet and see all of those wonderful 487 rows that were returned when I called the Snowflake Cortex Agent with the question that I passed it. 

Dalton_Ruer_3-1762288774808.png

Be sure to scroll through all of the rows to really appreciate how much information is returned. When you get to the bottom there are 2 rows that I really want you to focus on. All of the other events are simply appetizers for the main course we will focus on for the remainder of this post. They are merely events that let you know things are happening, and then the stream says "Hey, wake up, here is my official response" in rows 482 and 483.

Dalton_Ruer_4-1762288874092.png

Now what you need to do is select row 483 so that the text box on the right will show you the full value that is returned for the response event.

Dalton_Ruer_5-1762289170981.png

Response

I'm not going to lie ... the first time I saw that I was a little bit intimidated. It sure seemed to me like the wild west of JSON data. In fact ... I ended up writing a series of posts I called Taming the Wild JSON Data Frontier just to document the process I had to go through in parsing that beast. Be sure you read each of the posts in that series so that you have the chops as a data sheriff to deal with this incredible structure.

One thing you should know, if you don't already, is that JSON can be very compact, like you see in the response. That is great for exchanging/storing the data, but it is really, really hard to read. I highly recommend you take advantage of any online JSON formatter you can find. I use jsonformatter.org. You simply hand it the compact JSON structure and ask it to format/beautify it ... and voila, it becomes much more human readable.
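If you'd rather beautify locally than paste into a website, any JSON library can do the same thing. For example, in Python:

```python
import json

# A compact snippet with the same flavor as the response payload.
compact = '{"content":[{"type":"thinking"},{"type":"tool_use"}]}'

# indent=2 re-emits the structure in a human-readable layout.
pretty = json.dumps(json.loads(compact), indent=2)
print(pretty)
```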

Dalton_Ruer_6-1762289714469.png

 

{ I have attached the output in a text file that you can download and view for the remainder of this article, if you don't want to take the time right now to actually copy and then beautify the response. }

But I digress. The important part is that you now know that the RESPONSE event is the one you care about, and that the DATA associated with the RESPONSE has a massive JSON structure containing all of the information we need to present the response back to the user. So, let's dig in.

Go ahead and return to the load script editor and move the section named "Get the RESPONSE event data" up above the Exit script section so that it can actually be processed. 

Dalton_Ruer_7-1762290029532.png

Before seeing the code you may have thought "There is no way I'm going to be able to magically figure out how to identify the data for the response event." But as usual, Qlik Sense provides some very easy transformations. Logically, we only want to pull a data row from the entire event stream if the row before it is "event: response", and that's exactly what we ask for by using the Previous() function. We don't care at all about the part of the column that has the phrase "data: " in it, so we simply throw it away.
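The same "keep the data row whose preceding row is `event: response`" filter can be sketched in a few lines of Python (the app does this in Qlik load script with Previous(); this is just the logic, not the actual script):

```python
def response_data(lines):
    """Return the payloads whose preceding line is 'event: response',
    with the 'data: ' prefix stripped off -- the analog of the
    Previous()-based filter described above."""
    out = []
    for prev, line in zip(lines, lines[1:]):
        if prev.strip() == "event: response" and line.startswith("data: "):
            out.append(line[len("data: "):])
    return out

# A made-up stream with the same shape as the real one.
lines = [
    "event: message.delta",
    'data: {"text": "working"}',
    "event: response",
    'data: {"content": [{"type": "thinking"}]}',
]
print(response_data(lines))
```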

Dalton_Ruer_10-1762290493953.png

Go ahead and reload the data now that this section will run, and when it completes check the preview window. Sure enough ... we have exactly what we want in our JSON table.

Dalton_Ruer_11-1762290520070.png

Content

If you look at the prettified view of the response data you will see that at the highest level it contains a field called content that is an array. 

Dalton_Ruer_13-1762290925681.png

If you scroll all the way through the pretty content you will see that it's actually an array of heterogeneous, or mixed type, objects. Meaning some of the array elements are thinking, some are tool_use, and some are tool_result. And to make it worse, the tool_result elements aren't even the same.

Dalton_Ruer_14-1762290986818.png

If that sounds nasty ... don't let it bother you. Again, the entire reason for the series of posts I've already written was to walk you through all of the types of JSON data that will need to be parsed. To understand the next part of the code, be sure to read the following post as well as the posts it points you to.

Parsing: Heterogeneous (Mixed) JSON Arrays

Go back to the load script and move the "Mark the Content we care about" section above the Exit script section and reload the data. 

Dalton_Ruer_15-1762291279928.png

Before I discuss the code, go ahead and preview the Content table to ensure you have the 9 different Content values that were in the array. One of the tool_use rows will have Question_Record marked Yes and the tool_result record will have Results_Record marked Yes.

Dalton_Ruer_16-1762291572028.png

Logically we do a 2-part load. The first part iterates through all of the elements in the content array and pulls out just that element's content. The preceding load then simply uses an Index function to know if the word "charts" is contained in the record and marks a flag accordingly. If we parse a nested set of JSON values from the record and find the question value, then we set that flag accordingly. If you haven't already read the posts I've been begging you to read ... then stop and read them now. That's an order. 😉
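The flagging logic can be sketched in Python as well. This is illustrative only: the element shapes below are made up to match the structure shown in the screenshots, and the substring checks stand in for the Index() and nested JsonGet() tests in the actual load script:

```python
import json

def flag_content(response_json: str):
    """Iterate the content array and flag the two elements we care
    about: the tool_use element carrying the rephrased question and
    the tool_result element carrying chart info. Substring checks
    stand in for Index() / nested JSON parsing in the Qlik script."""
    rows = []
    for element in json.loads(response_json)["content"]:
        text = json.dumps(element)
        rows.append({
            "content": element,
            "Results_Record": "Yes" if "charts" in text else "No",
            "Question_Record": "Yes" if '"question"' in text else "No",
        })
    return rows

# Hypothetical sample shaped like the real heterogeneous array.
sample = json.dumps({"content": [
    {"type": "thinking", "thinking": "..."},
    {"type": "tool_use",
     "tool_use": {"input": {"question": "Total visits?"}}},
    {"type": "tool_result",
     "tool_result": {"content": [{"json": {"charts": []}}]}},
]})
flags = flag_content(sample)
print([r["Question_Record"] for r in flags])
```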

Dalton_Ruer_17-1762291736940.png

Why that Content?

The logical question you might have at this point, since I left you hanging, is why in the world did I focus on flagging those particular rows of the content array? To understand, imagine the end user asking a question of a magical black box that mysteriously just goes off and returns an answer. You've probably heard me say more than once "you can't act on data that you don't trust."

To that end, the Snowflake Cortex Agent API returns the question as it got rephrased by its generative AI, and it also sends the resulting SQL that was generated. We just have to look for them in the pretty version of the response. Suddenly the mysterious black box becomes more transparent. Which is exactly what I want to do ... report to the end user, as well as audit, the question and the SQL.

Dalton_Ruer_0-1762292675016.png

The results flag is set because the Snowflake Cortex Agent API literally hands us the information we need to create a chart with the results. I'm not kidding. It literally gives us the title for the chart as well as the dimension and measure fields for the chart, then it gives us the values for them. You gotta love that. 

Dalton_Ruer_1-1762292704633.png

Question and Results

Now that you understand what is returned and why I flagged it, let's look at how we pull all of that wonderful information out of what initially seemed like an undecipherable JSON mess. Go back to the load script editor and move the "Get the Question and the Results" section above the "Exit script" section and then reload the data.

Dalton_Ruer_0-1762339099169.png

We start building the Response table by simply reading the row in the Content table that has the Question_Record flag set to Yes. Getting the Question and the SQL statement to share to the end user is simply a matter of reading the nested JSON path for their values. 

Dalton_Ruer_1-1762339202567.png

Then we need to add a few columns to the Response table, which we get by reading the row in the Content table that has the Results_Record flag set to Yes. Again, pulling the information we want is simply a matter of reading the nested JSON path for those values.
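Reading a nested path, in any language, is just walking down the structure one key at a time. A hedged Python sketch, where the element shapes and key names are hypothetical stand-ins for the paths you see in your own prettified response:

```python
# Hypothetical records shaped like the flagged content elements;
# check your own response for the exact nesting and key names.
question_rec = {
    "type": "tool_use",
    "tool_use": {"input": {"question": "Total visits by provider?"}},
}
results_rec = {
    "type": "tool_result",
    "tool_result": {"content": [{"json": {
        "sql": "SELECT provider, COUNT(*) FROM visits GROUP BY provider",
        "charts": [],
    }}]},
}

# Walking the nested path is the analog of the JsonGet() calls
# in the load script.
question = question_rec["tool_use"]["input"]["question"]
sql = results_rec["tool_result"]["content"][0]["json"]["sql"]
print(question)
print(sql)
```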

Dalton_Ruer_2-1762339381115.png


Now that you have reloaded the data to run this, and understand it ... it is time to check out the Preview panel for the Response table. We almost have exactly what we need to present to the end user. We have the Question_Asked, the SQL that was used within the Snowflake Cortex Agent, and we know the Dimension and Measure field names. Finally, we have a JSON array of the values.

Dalton_Ruer_3-1762339736290.png

Parse out the Values

Now that I know you have read the posts I mentioned, I should be more precise: "Finally, we have a JSON array of homogeneous objects." That case is covered in the Parsing: JSON Array (Homogeneous Objects) post.

Go back to the load script editor, simply drag the "Parse out the Values" section above the "Exit script" section, and reload the data.

Dalton_Ruer_0-1762340517311.png

The first thing we need to do is pull the DimensionField and MeasureField names into variables that we can refer to. All we need to do is use the Peek() function. 

Dalton_Ruer_1-1762340625593.png

As you are familiar with by now, parsing a nested JSON structure is a simple matter of using the JsonGet() function with a pattern of:

  • JsonGet(JsonStructure, '/field name')
  • JsonGet(JsonStructure, '/entity/field name')
  • JsonGet(JsonStructure, '/entity/entity/entity/field name')

Which is straightforward when you know the "field name." In the case of pulling out the values that we need to present to the end user, we don't know what they are. The very nature of what we are doing is asking the Snowflake Cortex Agent a question that the end user gives to us; it then magically processes that question and responds. Which is why we needed to extract the field names to variables. Now we simply iterate the array and parse the values by passing the variables.
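The idea of "field names you only know at runtime" is easy to see in a Python sketch. The record shape and field names below are made up for illustration; the point is that the dimension and measure names come out of the response first, and are then used as keys when iterating the homogeneous array:

```python
import json

# Hypothetical response fragment: field names plus a compact
# JSON array of homogeneous value objects.
response = {
    "dimension": "PROVIDER",
    "measure": "VISIT_COUNT",
    "values": json.dumps([
        {"PROVIDER": "Smith", "VISIT_COUNT": 12},
        {"PROVIDER": "Jones", "VISIT_COUNT": 9},
    ]),
}

# Like Peek() into variables in the load script: grab the
# field names first ...
dim_field = response["dimension"]
meas_field = response["measure"]

# ... then iterate the array, using those variables as the keys.
rows = [(item[dim_field], item[meas_field])
        for item in json.loads(response["values"])]
print(rows)
```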

Dalton_Ruer_2-1762341129925.png

Structured Data

Now let's check the work by looking at the preview for the Values table we just created. Unless I'm missing something, you just rocked the world by converting an event stream of JSON structures into a structured table that is now in memory.

Dalton_Ruer_3-1762341208287.png

How cool would it be if ... Never mind that's crazy!

But it would be cool if we could ... It would be really hard. 

Maybe we could, so let's talk about it.

Since we do have this data in memory now, it would be so cool if we could visualize it on the screen for the end user. Right? Forget my event stream of consciousness in getting here ... it's not really that hard. Go ahead and go to the "View the Results" sheet and you will see something magical.

Dalton_Ruer_0-1762342161274.png

Go ahead and edit the sheet so that you can see how I created that bar chart. Check it out: I simply used those 2 variables that we created and did that hocus-pocus dollar-sign expansion magic on them. You gotta love that.

Dalton_Ruer_1-1762342350296.png

 

Dalton_Ruer_2-1762342379292.png

Art of the Possible

Of course I am going to load data into Qlik Sense so that business users can gain insights at the speed of thought. But let's face it ... I'm lazy and didn't talk to every user about everything they would ever want to know about their data. As a result I didn't build every conceivable chart to show them the answers.

Invoking the Snowflake Cortex Agent lets users ask questions. Questions that we might not have a chart for yet. Questions that might involve scenarios beyond the user's data literacy or training level.

Oh sure, I had fun doing all of the techno mumbo jumbo and sharing that with you. But by invoking it right inside a Qlik Sense dashboard, I've now given business users the best of both worlds. Not only can we present their answer to them in a chart; since the values are in memory, they are associated with all of the other data. Meaning business users can interact with the values, all the other visuals will respond, and naturally we can take advantage of that green/grey/white experience. They get the aggregated answer they were looking for, while immediately also being able to see all of the other details they may need to follow up on. Like details of the visits, provider names, etc.

ArtOfThePossibleCortex.gif
