If you are a Snowflake customer you have probably seen the left side of this image frequently. Snowflake Intelligence is legit cool and you've dreamed of ways for it to impact your business.
If you are a Qlik and Snowflake customer you have probably seen the left side of this image frequently, and thought "Wow, I sure wish I could take advantage of Snowflake Intelligence within my Qlik environment to impact my business." Feel free to do your celebration dance, because this post is designed to walk you through how Qlik can work with Snowflake Cortex AISQL as well as Snowflake Cortex Agents (API).
Both series are designed in a 3-part Show Me format. The first video for each frames the value you can attain. The second video will help you drool as you begin imagining your business implementing the solution. Finally, I conclude each series for those who get tapped on the shoulder to actually make the solutions work.
In this comprehensive three-part series exploring the integration of Qlik and Snowflake Cortex AI-SQL, I guide viewers from executive vision to hands-on implementation, demonstrating how organizations can democratize AI capabilities across their entire analytics ecosystem without requiring data science expertise.
This series demonstrates how to combine Qlik's associative analytics engine with Snowflake's AI-powered semantic intelligence to transform natural language questions into interactive, fully contextualized insights.
Heck yeah we've got both covered.
Healthcare Synthetic Data Set -> Semantic View -> Build Qlik Sense Application through Claude and Qlik MCP - This demo begins by pulling the information out of a Semantic View for the shared Healthcare Synthetic data set. Huge tables. It then constructs the code to load the tables into Qlik Sense, including concatenating the Patient and Encounters fact tables and creating concatenated keys for the dimensional tables. What about all of that wonderful metadata about the fields? Yeah, we pull that in as well, because governance is important. Then we build Master Dimensions for all of the fields with that metadata, including the sample values. Now data modelers/designers can see the data, and end users can see it all so they know they can trust the answers and act on them. Chef Qlik Dork and Chef Claude were really cooking in the kitchen for this one.
PS - This was the beginning of the application. See Video 5 - Show me How It Works above to see the final application and how it interacts with Snowflake Cortex Agent API for the full end user experience of awesomeness. I'm talking about the results of questions being displayed as charts and tables, and users can see the SQL that was generated. The data returned is ingested into the Qlik Data Model so users can then filter to the records returned and see all of the details to answer their next 10 questions. What if they ask about big data tables that aren't loaded into Qlik Sense? No problem, we go pull that data live.
As you know by now, MCP servers are essentially invisible. They perform superhuman, highly performant tasks, but they lack a visual host. While the Synthetic Healthcare video demonstration above used Claude as the user interface, Snowflake has now officially released Coco. I mean Cortex Code. So I figured I better ensure our joint partners could do their happy dance and take advantage of both of these leading edge power tools.
Previously I created a post called Creating your Secret Sauce, in which I described the process of creating and using #skills. Guess what? The same skill files that I created and shared for Claude can be imported and used directly by CoCo. You gotta be loving that.
The videos in these courses subtly demonstrate the power of using skills to enhance the prompts. My skill for Master Items ensures that their naming convention is user friendly. When I ask to create a sheet ... the skill transforms it into "let's create a story that is prepared with love" instead of microwaving random mystery charts onto a sheet just for speed.
🎥 Course 405 - Cortex Code generating Master Dimensions and Master Measures in Qlik Sense via Qlik MCP
🎥 Course 410 - Cortex Code generating a sheet inside of Qlik Sense via Qlik MCP
The Show Me How to Build It videos for both series refer to other resources. I thought about making you crawl up to your desktop and squint in order to see the URLs, and then make you hand type 50 characters from memory. Then I thought it's not going to be much fun for either of us, since I wouldn't actually see you doing it. Fortunately for you, I've included the needed resources below.
Calling Snowflake Cortex Agent API within Qlik Sense
Creating a REST API Connection for Snowflake Cortex Agent
When working with large datasets, loading everything in a single query is rarely an option. When the source can't return all the data at once, you need to break the load into steps: by date, by region, by file. That's exactly what loops are for in Qlik Sense script. They reduce code volume, make the load process manageable, and allow you to build incremental ETL pipelines that refresh only the required data slice without overloading the source.
Qlik Sense has four types of loops, and each has its own use case.
FOR..NEXT: when the number of iterations is known in advance. The period is defined through a variable storing the difference between dates. Since the counter starts at 0, we use -1 to avoid an extra iteration:
LET vDaysCount = Date#(vEndDate,'YYYY-MM-DD') - Date#(vStartDate,'YYYY-MM-DD');

FOR i = 0 TO $(vDaysCount) - 1
    LET vDate = Date(Date#(vStartDate,'YYYY-MM-DD') + i, 'YYYY-MM-DD');
    // query body
NEXT i
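To sanity-check the boundaries, a quick worked example with made-up dates:

```
// Worked example (made-up dates): vStartDate = '2024-01-01', vEndDate = '2024-01-05'
// vDaysCount = 4, so the loop runs i = 0..3:
//   i=0 -> 2024-01-01
//   i=1 -> 2024-01-02
//   i=2 -> 2024-01-03
//   i=3 -> 2024-01-04   // the end date itself is not iterated
```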
FOR EACH..NEXT: iterates over a list of values: regions, file names, product codes. The list can be defined explicitly or generated using functions like FileList(), DirList(), or FieldValueList():
FOR EACH vRegion IN 'UZB', 'KAZ', 'RUS'
LOAD * FROM [lib:///_$(vRegion).qvd](qvd);
NEXT vRegion
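For the function-driven variant, here is a minimal sketch using FileList() to iterate over files instead of a hard-coded list (the lib:// folder path is a placeholder, not from the original post):

```
// Iterate over every QVD in a folder instead of a hard-coded list
FOR EACH vFile IN FileList('lib://MyData/*.qvd')
    Data:
    LOAD * FROM [$(vFile)] (qvd);
NEXT vFile
```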
WHILE: used inside a LOAD statement together with AutoGenerate and the IterNo() function. The classic example is generating a Master Calendar:
TempCalendar:
LOAD Date($(vMinDate) + IterNo() - 1, 'YYYY-MM-DD') as tDate
AutoGenerate 1
While $(vMinDate) + IterNo() - 1 <= $(vMaxDate);
The condition is evaluated on each row via IterNo(): this is row generation inside a single LOAD statement.
DO WHILE: a loop with an updatable variable. The condition is re-evaluated before each iteration. Used for incremental loading from an external source with a date-by-date breakdown:
LET vEndDate = Date(Today(), 'YYYY-MM-DD');
LET vStartDate = Date(Today() - 7, 'YYYY-MM-DD');
LET vIterationDate = vStartDate;

DO WHILE vIterationDate <= vEndDate
    TempOneDay:
    // query body (source SQL), filtered to a single day, e.g.:
    // ... WHERE date_field = toDate('$(vIterationDate)')
    STORE TempOneDay INTO [lib://…/data_$(vQVDName).qvd](qvd);
    DROP TABLE TempOneDay;
    LET vIterationDate = Date(Date#(vIterationDate,'YYYY-MM-DD') + 1, 'YYYY-MM-DD');
LOOP
At first glance, a KPI is just a number on a dashboard. But depending on the task, cards can vary significantly in structure. I'd highlight several groups:
Today I want to focus on monitoring KPIs where you can immediately see not just the current value, but also the dynamics: the delta versus the previous period and a trend line.
In Qlik Sense, I build these cards using an HTML Box. It's not a separate widget: it's a string expression where HTML is assembled via & concatenation, and Qlik calculations are embedded directly inside the string:
='<div style="font-family:Arial; padding:12px;">' &
'<div style="font-size:11px; color:#888;">OPERATIONS COUNT</div>' &
'<b style="font-size:28px;">' & Num(sum(cnt)/1e6, '#.##0,00') & ' M</b>' &
'<span style="color:' & If(wow > 0, '#2E8B57', '#B22222') & ';">' &
Num(wow, '+##0,0%;-##0,0%') & ' WoW' &
'</span>' &
'</div>'
The delta color is calculated dynamically with an If() construction directly inside the style attribute. Writing this HTML manually isn't necessary: ChatGPT handles it well; just describe the layout you want.
Pro tip for reuse: use variables and master measures. To create a second KPI with the same design, simply copy the block and replace the master measure and variables using Find & Replace.
The card with a trend line at the bottom is built from three objects inside a Layout Container:
The result looks like a fully polished product design using only the built-in tools of Qlik Sense.
Dashboard example built in Qlik Sense showing 6 KPI card types: Monitoring (value + delta + trend line), Multi-metric (primary metric with additional metrics), Status (with background color), Comparative (two dimensions side by side), Progress (value + % of target), and Single (standalone number).
Welcome to Q Division Headquarters, Operative.
Behind these doors lies the future of AI-powered analytics. Qlik Answers isn't just another tool—it's your answer intelligence platform that lets anyone ask natural language questions of their unstructured data, their structured data, or both working together in perfect coordination.
Your mission, should you choose to accept it: Master the Q Division agent swarm architecture, earn your Field Operative certification, and deploy answer intelligence across your organization.
Inside, you'll meet our specialist agents, complete hands-on training exercises, watch live mission playbacks, and prove your tactical intelligence through the Agent Recognition Protocol.
By the time you exit these doors, you won't just understand Qlik Answers—you'll be ready to implement your first deployments (or guide others through theirs) with confidence.
The briefing room awaits. Enter when ready, Operative.
Agent Roster
Q Division operates on what the intelligence community calls a "swarm architecture" – the industry gold standard for AI agent collaboration. Instead of relying on a single agent to handle every mission, we've assembled a specialized team where each agent excels at their specific domain. When you ask a question, our system intelligently identifies which agents have the expertise needed and orchestrates a precision handoff sequence to deliver the most accurate answer.
Think of it like a real intelligence agency: you wouldn't send the same operative to handle cryptography, field reconnaissance, AND financial analysis – you'd send specialists who work together, each completing their part of the mission before passing critical intelligence to the next agent. That's exactly what Qlik Answers does, ensuring you get enterprise-grade accuracy through expert collaboration. Meet the agents who'll be working your missions:
Operation: Swarm Intelligence - Agent Dossiers ►
Welcome to active duty, Operative. In this section, you'll receive the same intelligence assets that Q Division uses in live operations: a fully configured Qlik Answers application, pre-loaded knowledge bases, and the Answer Assistant framework that orchestrates our agent swarm. This isn't a simulation – these are production-grade materials that you'll download, deploy, and interrogate with real questions. You'll see firsthand how questions flow through the agent network, learn to craft queries that leverage each agent's expertise, and build the muscle memory needed to guide others through their first Qlik Answers deployment. By the end of these exercises, you won't just understand the theory – you'll have hands-on experience running actual missions.
Unstructured Data
🎯 Q Division Field Training: Module 2 - Application Documentation - Building another Knowledge Base and enhancing your Assistant with the additional knowledge.
🎯 Q Division Field Training: Module 3 - Expense Statements - Building an enterprise grade Knowledge Base with Advanced Chunking and enhancing your Assistant with the additional knowledge
Structured Data
🎯 Q Division Field Training: Module 100 - Uploading Q Division Operations Application - Asking Answers
Unstructured + Structured
🎯 Q Division Field Training: Module 200 - Dashboard Inconsistency - Asking Questions of Unstructured and Structured Data in your Assistant
Field Operative, it's time to see the agents in action. In this section, you'll watch live mission recordings where real questions trigger the full agent swarm workflow. On one side of your screen, you'll see the complete Qlik Answers solution being constructed in real-time. On the other, you'll see which Answer Agent is currently on mission – giving you a visual understanding of who does what, when they're called into action, and how they hand off intelligence to the next specialist in the sequence. Here's where it gets powerful: since you already have the same materials from Field Training, you can run these exact same questions in your own Qlik Answers environment and watch your agents work the mission alongside mine. These aren't just recordings to watch passively – they're your playbook for getting comfortable with the agent workflow before you guide others through it.
Mission Playback - Review 1 - Counter Terrorism: In this Q Division mission playback, an auditor spots a suspicious English-Russian dictionary expense on case 1006—a Swedish healthcare analytics mission that had nothing to do with Russian operations. Using Qlik Answers, the auditor conducts a comprehensive counter-terrorism review that reveals Agent 099 (Alec Trevelyan) purchased suspicious items including the dictionary from an oddly-named bookstore, multiple alcoholic beverages at "Cafe Pushkin" (named after a famous Russian poet), and celebrated with expensive champagne before the case was even closed. It's a masterclass in how conversational analytics can connect the dots between structured databases and unstructured documents to surface anomalies that traditional reporting would miss.
Final assessment, Field Operative. Before you earn your Clearance Level certification, you need to prove you can recognize which agents handle which intelligence requests. We'll present you with real-world questions – the kind partners and customers will actually ask – and you'll identify which Answer Agent(s) will be deployed on the mission. This isn't about memorizing definitions; it's about developing the tactical instinct to know instantly: "That's a Data Agent question," or "This one needs both Knowledge and Visualization working together." Pass the Agent Recognition Protocol, and you'll have earned more than a certification – you'll have the operational confidence to guide anyone through their first Qlik Answers deployment.
Take the Agent Recognition Protocol Test
Operatives, this is Dork, 007 Dork, reporting from Q Division headquarters. We have a situation that keeps every intelligence analyst up at night: Numbers that don't add up.
THE SITUATION: The original programmer for our Q Division application has left the company. A talented new designer built a beautiful dashboard, but there's a critical problem: Active Cases (16) + Closed Cases (15) = 31... but Total Cases shows 50. In an intelligence agency where no mission can be left behind, this kind of inconsistency is unacceptable.
YOUR MISSION: Use Qlik Answers to investigate the application and notes from the original programmer. In other words Structured Data -AND- Unstructured Data
DELIVERABLE: A dashboard that accurately reflects all case statuses with proper reconciliation, plus an understanding of how to use AI to diagnose and fix data model inconsistencies.
What You'll Need:
Video Intelligence Briefing: 🎥 Watch the Full Mission Walkthrough
Navigate to your Q Division Field Academy catalog and locate your Field Training Assistant. This time, instead of adding another knowledge base, we're going to add structured data from your application itself.
Click to add content to your assistant, but instead of selecting a knowledge base, choose your Q Division application. Make sure you're in your Q Division Field Academy space and select the application you uploaded in Mission 100.
Pro tip: Once configured, your assistant can now query both unstructured data (knowledge bases with PDFs) AND structured data (your application's data model). This hybrid capability is what makes Qlik Answers so powerful for investigative work.
Now comes the moment of truth. Ask your assistant:
"Please tell me the total number of cases we have, the total closed cases we have, and the total open cases we have. Can you help me understand why there seems to be an inconsistency with the totals? Because what I see in my application seems wrong. The programmer's no longer with our company."
Watch the multi-agent collaboration unfold in the reasoning panel:
⚠️ FIELD NOTE: Pay close attention to this workflow - it's the heart of how Qlik Answers investigates data issues.
The Answer Agent reveals the mystery: there's a hidden "Resolved" status in your data model containing 19 cases!
The complete breakdown:
The Explanation: "Resolved" represents cases where agents completed their work, but haven't yet received client confirmation to officially close them. It's essentially a holding status between active work and final closure.
The assistant even suggests intelligent follow-up questions like "What is the breakdown of the 19 resolved cases by priority level?" or "How many days on average do cases spend in resolved status before closing?"
Now that you understand the problem, ask your assistant to fix it:
"Can you add a KPI to my dashboard to show me the number of resolved cases so that our totals will look correct?"
Watch another beautiful agent collaboration:
Return to your Q Division application. The AI has created a new sheet called "Case Status Dashboard" with the Resolved Cases KPI displaying the value 19.
Important: Since your original dashboard is a public sheet, the AI created a new private sheet rather than modifying the public one. To complete the mission, you'll need to manually:
Now your dashboard shows: Closed (15) + Resolved (19) + Active (16) = Total (50) ✓
What You've Accomplished:
✓ Connected an application to your Qlik Answers assistant
✓ Used Qlik Answers to diagnose a dashboard inconsistency using programmer documentation
✓ Discovered a hidden status field causing reconciliation issues
✓ Generated a new KPI visualization through natural language
✓ Understood multi-agent workflows combining semantic search, data analysis, and dashboard authoring
Validation Check: Can your dashboard now reconcile to 50 total cases with all statuses visible? If yes, mission accomplished!
Challenge Exercise (Optional): Ask your assistant to build a text object for your dashboard that explains the case status values.
For this mission, the Qlik Answers Assistant needed access to both the structured application and its data as well as the unstructured programmer notes.
This mission showcased all of the agents that Qlik Answers uses:
Answers Agent: Orchestrates the entire process: decomposes the questions users ask into sub-tasks and presents final findings back to users.
Knowledge Agent: Searches through the vector database to retrieve relevant unstructured data.
Semantic Agent: Expert at understanding your data model's structure, field names, and relationships. It's like a data dictionary that speaks human language.
Data Analyst Agent: Designs analysis packages that specify what data to use, how to analyze it and what output best answers the question(s).
Chart Agent: Specializes in visualization design after receiving the analysis package from the data analyst agent.
Dashboard Authoring Agent: Consumes outputs from the Answer Agent, Data Analyst Agent, or Chart Agent, and checks whether the required charts already exist.
Each agent is optimized for its specific task. When they collaborate, you get both depth and accuracy.
The scenario in this mission - programmer leaves, new designer inherits an application, numbers don't reconcile - happens constantly in enterprises. Traditional approaches require:
With Qlik Answers, you simply ask what's wrong and let the AI investigate the data model itself. The time savings are enormous, but more importantly, you don't lose critical business logic when people change roles.
This specific scenario - a holding status between "active" and "closed" - appears in many business processes:
These intermediate states are often invisible in executive dashboards, leading to the exact reconciliation problems we solved in this mission. The pattern repeats across industries because workflows are rarely binary (open/closed) - they have nuanced intermediate states that matter for operations but get overlooked in reporting.
You've successfully completed Field Training Mission 200 and learned how to use Qlik Answers as your data detective. As I mentioned in the video, I honestly never suspected a day when AI could auto-transcribe programmer notes, let alone use those notes alongside the application itself to solve data mysteries. But here we are, operatives. The future of analytics isn't just faster dashboards - it's intelligent systems that understand your data as well as the people who built it.
Questions? Feedback? Have you ever encountered a situation like this in the past where you had to take over maintenance of an application and something didn't add up?
In a previous post called Calling Snowflake's Cortex Agent API, I noted that the focus was on how the REST API worked, not how to create the connection to it. This post is focused solely on how to create a REST API connector for Snowflake Cortex Agents.
Snowflake allows you to build "agents." While I have built several, this post is going to focus on one that I called "SynthAgent."
They also provide a way for you to question them directly in their user interface. I can simply ask a question like "Who are the top 10 admitting providers?" and voila ... I get an answer:
Actually, as my previous post indicated, when you talk to an agent you actually get to see its entire stream of thought.
While asking an isolated question inside of Snowflake is certainly handy-dandy, the bigger picture is that in the world we live in, agents should play nice with others. So, Snowflake created an API that allows tools outside of the Snowflake user interface to speak with their data.
Obviously you will need to use your own server and your own agents, but to help you understand the URL, I want to show you the one I will use in this post for this Cortex Agent API. Refer to the first image to see the database/schema/agent:
https://AYFR...mputing.com/api/v2/databases/SNOWFLAKE_INTELLIGENCE/schemas/AGENTS/agents/SYNTHAGENT:run
While it's a rather complicated URL path, it is in fact nothing more than a URL path, which is the most important thing that the Qlik REST API Connector needs. Begin by creating a new connection. Be sure to choose the correct space where you want this new connector created, and simply choose "REST".
Paste in the correct URL and choose the POST method:
After choosing POST as the method, the connector is going to prompt you for the BODY of the call.
My agent is built from a Snowflake Semantic View. Based on that, here is the sample body that I passed to my SYNTHAGENT URL:
{
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "Who are the top 10 admitting providers?"
}
]
}
],
"tool_choice": {
"type": "auto"
}
}
Notice that I'm asking the exact same question that I asked directly in Snowflake so that I can test the results I get back to ensure it is working.
Based on the Snowflake Cortex Agent API documentation, you will need to configure a parameter and a few headers that are passed when calling the URL.
You need to have a parameter named Response-Type with the value text/event-stream
You need to create 4 Query Headers.
Authorization - This defines the security for the API call. Snowflake provides multiple types of security. In this example I'm using a token model. To create your API token, called a Programmatic Access Token (PAT) in Snowflake, simply go to your profile, go to Settings, choose Authentication, then Generate new token.
When you create the PAT, simply copy the value and in the value field type Bearer and then paste in your PAT.
User-Agent - use QlikSense.
Accept - simply use */*
Content-Type - use application/json
You had to hard code your question in the body, but in the real world in which you live, are you always going to make all users always see answers for just that question?
This is when you need to reply out loud in your biggest voice "OF COURSE NOT!!!!"
Then I would say "So check that box that says Enable Dynamic Parameters" or any of the alternatives shown:
Because any of those check boxes would make total sense to you: "Oh, I need to check the box that allows me to change the question based on what my user asks."
Instead I need you to check the box that only makes sense after you know what it does. It's the check box that says Allow "WITH CONNECTION".
It will make sense in a few minutes why the button has that name. For now, just know that it relates to any of the 3 button names I wish it had. 😁
Finally give your connection a name and press the Test connection button.
If all of your security is configured correctly in Snowflake (Click here to see all of the things you need to ensure in Snowflake itself.) then you will get a Connection succeeded result.
If you see an error like "Unsupported Accept header null is specified", it simply means that you followed instructions from some other post and left out the Accept */* that I showed above. Add it and retest the connection.
Once you see the Connection succeeded save your connection so that we can actually make the call to the agent and see how the results look.
Congratulations, you now have a connector in place to call your Snowflake Cortex Agent REST API. Woohoo. 😎
Click the Select data button for the connector
It will default to CSV for the response type which is perfect.
The Delimiter should be Tab.
Check the box for CSV Has header.
Check the CSV_source box.
Once you check that box it will go and make the call. If you see results like below, you know that you have in fact successfully asked your Snowflake Cortex Agent the question that was hard coded.
Press the Insert script button to actually insert the script block into your load script. Isn't this great? I can reload my application and get answers to the same question over and over and over again. We are really living large. Right?
Our connector is working great, but I'm still hung up on that whole "same question over and over and over again" thing. Logically what we would like to do, and more importantly can do ... is call the REST API and give it the question on the fly. In other words, we need to give it the BODY for the call on the fly ... like this:
WITH CONNECTION is the syntax keyword that is added to calls to REST API Connectors. Now I get why that checkbox that didn't make sense is called Allow "WITH CONNECTION". It lets you declare, when you create the connection, whether developers with access to the connection are allowed to do that or not. Glad they didn't rename it with any of the names I suggested. We would have a nightmare on our hands.
Now I have the ability to create a variable with the body that has whatever question I want in it:
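If you'd like a feel for the pattern in script form, here is a minimal sketch. The connection name, table label, and column selection are placeholders (use whatever the Insert script button generated for you); the quote-doubling before WITH CONNECTION is the standard way to escape double quotes inside the BODY string:

```
LET vQuestion = 'Who are the top 10 admitting providers?';
LET vBody = '{"messages":[{"role":"user","content":[{"type":"text","text":"$(vQuestion)"}]}],"tool_choice":{"type":"auto"}}';
LET vBody = Replace(vBody, '"', '""');   // double up quotes so the body survives WITH CONNECTION

LIB CONNECT TO 'Snowflake_Cortex_Agent';  // placeholder connection name

RestConnectorMasterTable:
SQL SELECT "col_1" FROM "CSV_source"
WITH CONNECTION ( BODY "$(vBody)" );
```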
When you look at the results you will realize that the Snowflake Cortex Agent API does not just return the answer to our question. It returns an entire event stream. Which is precisely why when we set the Response-Type parameter initially, we needed to set it to text/event-stream.
Great news is you have successfully created your REST API connector so you can talk to the Snowflake Cortex Agent. The next step is to check out my previous post where I walk you through how to process the results. https://community.qlik.com/t5/Member-Articles/Calling-Snowflake-s-Cortex-Agent-API/ta-p/2535284
PS - This post was planned for a later date. Be sure to thank @chriscammers for requesting it sooner. 😃
Snowflake recently released what it calls Snowflake Intelligence. It's their User Interface that enables users to directly ask questions of data. Under the covers their interface is interacting with a new Snowflake Cortex Agent API.
Qlik is an official launch partner with Snowflake for this exciting technology as we are able to call that Snowflake Cortex Agent API just like they do. Which means you are able to present visuals to aid with insights, while at the same time allowing end users to ask questions and then present the results that the Cortex Agent API returns.
The intention of this post, is to help you understand the nuances of the Snowflake Cortex Agent API.
Calling the Agent API is super easy. You simply use the REST Connector provided to you in Qlik Sense, whether Qlik Sense Enterprise on Windows or Qlik Talend Cloud Analytics. Refer to this post to help you create a REST connector to your Snowflake Cortex Agent. You will want to ensure you check the Allow "WITH CONNECTION" box so that you can change the body.
To get the REST connector to build a block of script for you, ensure that you set the Response type to CSV and set the Delimiter to be the Tab character.
Eventually you will modify your script to something like the following, where you set the Body to be the question your user wants to ask rather than it being hardcoded. But who cares?
There is nothing special here and nothing worth writing about that I haven't already covered in other posts. The reason for this post isn't about the connection itself... it's about what the Snowflake Cortex Agent API returns.
Rather than returning a single response, it actually streams a series of events. Notice in the image above to load data from the connection what the results look like. It literally returns the entire "stream of consciousness" if you will, as it is working. Everything it does.
It would be an exercise in futility if I simply talked my way through how to handle a Streaming API in general, and especially how to handle this event stream from Snowflake Cortex Agent. So, while I won't be walking you through all elements of the connection, or how I build the body based on what the user asks as a question ... I do want you to be able to be hands on. The following image illustrates how I used my connection (that does work) to get the event stream and store it into the QVD file that is attached to this post.
You will need to:
If you did all of these steps correctly you should be told that 487 rows of data were loaded.
Go to the Event Stream sheet and see all of those wonderful 487 rows that were returned when I called the Snowflake Cortex Agent with the question that I passed it.
Be sure to scroll through all of the rows to really appreciate how much information is returned. When you get to the bottom there are 2 rows that I really want you to focus on. You see, all of the other events are simply appetizers for the main course we will focus on for the remainder of this post. They are merely events that let you know things are happening, and then the stream says "Hey, wake up now, here is my official response" in rows 482 and 483.
Now what you need to do is click row 483 so that the text box on the right will show you the full value that is returned for the response event.
I'm not going to lie ... the first time I saw that I was a little bit intimidated. It sure seemed to me like the wild west of JSON data. In fact ... I ended up writing a series of posts I called Taming the Wild JSON Data Frontier just to document the process I had to go through in parsing that beast. Be sure you read each of the posts in that series so that you have the chops as a data sheriff to deal with this incredible structure.
One thing you should know, if you don't already, is that JSON can be very compact, like you see in the response. Which is great for exchanging/storing all of the data, but really, really hard to read. I highly recommend you take advantage of any online JSON formatter you can find. I use jsonformatter.org. You simply hand it the compact JSON structure and ask it to format/beautify it ... and voila, it becomes much more human readable.
{ I have attached the output in a text file that you can download and view for the remainder of this article if you don't want to take the time right now to actually copy and then beautify the response. }
But I digress. The important part is that you now know that the RESPONSE event is the one you care about, and that the DATA associated with the RESPONSE has a massive JSON structure that contains all of the information we need to present the response back to the user. So, let's dig in.
Go ahead and return to the load script editor and move the section named "Get the RESPONSE event data" up above the Exit script section so that it can actually be processed.
Before seeing the code you may have thought "There is no way I'm going to be able to magically figure out how to identify the data for the response event." But as usual, Qlik Sense provides some very easy transformations. Logically we want to only pull the data from the entire event stream if the event before it is "event: response", so that's exactly what we ask for by using the Previous() function. We don't care at all about the part of the column that has the phrase "data: " in it, so we simply throw it away.
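To make that concrete, here is a minimal sketch of that transformation. The table and field names (EventStream, EventLine, and so on) are placeholders, not necessarily what the attached app uses:

```qlik
// Keep only the line that immediately follows the "event: response" marker,
// and throw away the "data: " prefix on it.
JSON:
LOAD
    Replace(EventLine, 'data: ', '') as ResponseData
Resident EventStream
WHERE Previous(EventLine) = 'event: response';
```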
Go ahead and reload the data now that this section will run, and when it completes check the preview window. Sure enough ... we have exactly what we want in our JSON table.
If you look at the prettified view of the response data you will see that at the highest level it contains a field called content that is an array.
If you scroll all the way through the prettified content you will see that it's actually an array of heterogeneous, or mixed type, objects. Meaning some of the array elements are thinking, some are tool_use, and some are tool_result. And to make it worse, the tool_result elements aren't even all the same.
If that sounds nasty ... don't let it bother you. Again, the entire reason for that series of posts I've already written was to help walk you through all of the types of JSON data that will need to be parsed. To understand the next part of the code be sure to read the following post, as well as the posts it points you to:
Parsing: Heterogeneous (Mixed) JSON Arrays
Go back to the load script and move the "Mark the Content we care about" section above the Exit script section and reload the data.
Before I discuss the code, go ahead and preview the Content table to ensure you have the 9 different Content values that were in the array. One of the tool_use rows will have the Question_Record marked Yes and the tool_result record will have the Results_Record marked Yes.
Logically we do a 2 part load. The first iterates through all of the elements in the content array and pulls out just that element's content. The preceding load then simply uses the Index() function to know if the word "charts" is contained in the record and marks a flag accordingly. If we parse a nested set of JSON values from the record and find the question value, then we set that flag accordingly. If you haven't already read the posts I've been begging you to read ... then stop and read them now. That's an order. 😉
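If it helps to see the shape of that logic, here is a sketch of that 2 part load. The table names, the flag conditions, and the nested path used to find the question are illustrative stand-ins, not a copy of the attached app:

```qlik
// Preceding load: flag the results row (it contains the word "charts")
// and the question row (a nested value for the rephrased question exists).
Content:
LOAD *,
     If(Index(ContentBlock, 'charts') > 0, 'Yes', 'No') as Results_Record,
     If(Len(JsonGet(ContentBlock, '/input/query')) > 0, 'Yes', 'No') as Question_Record;
// Inner load: iterate the content array and pull out each element.
LOAD
     IterNo() - 1 as ContentIndex,
     JsonGet(ResponseData, '/content/' & (IterNo()-1)) as ContentBlock
Resident JSON
WHILE IsJson(JsonGet(ResponseData, '/content/' & (IterNo()-1)));
```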
The logical question you might have at this point, since I left you hanging, is why in the world I focused on flagging those particular rows of the content array? To understand imagine the end user asking a question of a magical black box that mysteriously just goes off and returns an answer. You've probably heard me say more than once "you can't act on data that you don't trust."
To that end, the Snowflake Cortex Agent API returns the question as it got rephrased by its generative AI, and it also sends the resulting SQL that was generated. We just have to look for them in the pretty version of the response. Suddenly the mysterious black box becomes more transparent. Which is exactly what I want to do ... report to the end user, as well as audit, both the question and the SQL.
The results flag is set because the Snowflake Cortex Agent API literally hands us the information we need to create a chart with the results. I'm not kidding. It literally gives us the title for the chart as well as the dimension and measure fields for the chart, then it gives us the values for them. You gotta love that.
Now that you understand what is returned and why I flagged it, let's look at how we pull all of that wonderful information out of what initially seemed like an undecipherable JSON mess. Go back to the load script editor and move the "Get the Question and the Results" section above the "Exit script" section and then reload the data.
We start building the Response table by simply reading the row in the Content table that has the Question_Record flag set to Yes. Getting the Question and the SQL statement to share to the end user is simply a matter of reading the nested JSON path for their values.
Then we need to add a few columns to the Response table, which we will get by reading the row in the Content table that has the Results_Record flag set to Yes. Again pulling the information we want is simply a matter of reading the nested JSON path for those values.
Now that you have reloaded the data to run this, and understand it ... it is time to check out the Preview panel for the Response table. We almost have exactly what we need to present to the end user. We have the Question_Asked, the SQL that was used within the Snowflake Cortex Agent, and we know the Dimension and Measure field names. Finally we have a JSON array of the values.
Now that I know you have read the posts I mentioned, I should be more precise: "Finally we have a JSON Array of Homogeneous Objects." Which is covered in the Parsing: JSON Array (Homogeneous Objects) post.
Go back to the load script editor and simply drag the "Parse out the Values" section above the "Exit script" section and reload the data.
The first thing we need to do is pull the DimensionField and MeasureField names into variables that we can refer to. All we need to do is use the Peek() function.
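Something along these lines, assuming the Response table and field names from the previous step:

```qlik
// Grab the dimension and measure field names from row 0 of the Response table.
LET vDimensionField = Peek('DimensionField', 0, 'Response');
LET vMeasureField   = Peek('MeasureField', 0, 'Response');
```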
As you are familiar by now parsing a nested JSON structure is a simple matter of using the JsonGet() function with a pattern of:
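The pattern is simply a slash-separated path down through the nesting, for example:

```qlik
// /parent/child/fieldname walks down the nested JSON structure.
JsonGet(SomeJsonField, '/parent/child/fieldname')
```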
Which is straightforward when you know the "field name." In the case of pulling out the values that we will need to present to the end user, we don't know what they are. The very nature of what we are doing is passing along whatever question the end user gives us, and the Snowflake Cortex Agent then magically processes that question and responds to us. Which is why we needed to extract the field names into variables. Now we simply iterate the array and parse the values by passing the variables.
Now let's check the work by looking at the preview for the Values table we just created. Unless I'm missing something you just rocked the world by converting an event stream of JSON structures into a structured table that is now contained in memory.
How cool would it be if ... Never mind that's crazy!
But it would be cool if we could ... It would be really hard.
Maybe we could so let's talk about it.
Since we do have this data in memory now, it would be so cool if we could visualize it on the screen for the end user. Right? Forget my event stream of consciousness in getting here ... it's not really that hard. Go ahead and go to the "View the Results" sheet and you will see something magical.
Go ahead and edit the sheet so that you can see how I created that bar chart. Check it out: I simply used those 2 variables that we created and did that hocus-pocus dollar sign expansion magic on them. You gotta love that.
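In case you want to replicate it, the chart definitions boil down to something like this (expressed as comments since these expressions live in the chart, not the load script):

```qlik
// Dimension expression for the bar chart:
//   [$(vDimensionField)]
// Measure expression:
//   Sum([$(vMeasureField)])
// The $() expansion swaps in whatever field names the Agent returned.
```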
Of course I am going to load data into Qlik Sense so that business users can gain insights at the speed of thought. But let's face it ... I'm lazy and didn't talk to every user about everything they would ever want to know about their data. As a result I didn't build every conceivable chart to show them the answers.
Invoking the Snowflake Cortex Agent lets the users ask questions. Questions that we might not have a chart for yet. Questions that might involve scenarios beyond the user's data literacy or training level.
Oh sure I had fun doing all of the techno mumbo jumbo and sharing that with you. But by invoking it right inside a Qlik Sense dashboard I've now given business users the best of both worlds. Not only can we present their answer to them in a chart, but since the values are in-memory they are associated with all of the other data. Meaning business users can interact with the values, all the other visuals will respond, and naturally we can take advantage of that green/grey/white experience. They get the aggregated answer they were looking for, while immediately being able to see all of the other details they may need to follow up on. Like details of the visits, provider names, etc.
The image above depicts a JSON Version 2 structured array. Rather than repeating the column/field names over and over in pairs with the data, they present the fields and the data separately.
When I first encountered this structure, I asked myself "How in the world am I supposed to read the data into a structured placeholder?" I couldn't find any type of JsonGet example where it said "read this data array and just use your ESP to know what the field names are supposed to be."
With any problem like this where the answer doesn't seem obvious, my recommendation is to just get started with what you can achieve ... so I did. I started by separating out the information that I could with the basic JsonGet field/value pair syntax.
After that I had the information that would be needed, broken up into digestible pieces.
Then I started with the field names. Notice that it is simply a JSON Array of Homogeneous Objects and we already know how to deal with those by sprinkling on a little of the Qlik iteration magic:
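In sketch form, assuming the response JSON sits in a field called ResultData and follows the jsonv2 paths shown in the image:

```qlik
// Iterate the rowType array and pull out each column name.
FieldNames:
LOAD
    IterNo() - 1 as FieldIndex,
    JsonGet(ResultData, '/table/result_set/resultSetMetaData/rowType/' & (IterNo()-1) & '/name') as FieldName
Resident Response
WHILE IsJson(JsonGet(ResultData, '/table/result_set/resultSetMetaData/rowType/' & (IterNo()-1)));
```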
Voila ... we have a table of the FieldValues so we know the names of each of our columns.
That was the easy part. I still wasn't sure how in the world I would create a table where those were the columns and the data values would be the ... well ... the data values. So, I tried to visualize what I was looking for by adding the following as comments in the code itself as a reminder/muse.
SHERIFF_BADGE, TOTAL_BULLETS_USED
1307919, 3221
1617792, 2690
Then a crazy notion hit me ... that looks exactly like what I would do for an INLINE table.
EXACTLY LIKE AN INLINE TABLE.
So .... why not build it as an inline table?
What I wanted was something like this ...
Obviously I needed to build the Header variable first. A little housekeeping first to set some variables, then I just needed to loop through however many columns/fields the structured array might have. In my real case, the number of fields was more than 2, but I shortened them to help you track the solution. The logic works regardless of the number. If it is the first field, then set vCortexHeader to the name of the first field. If it is not the first field, then update vCortexHeader so it equals the previous value, add a comma, and then add the name of the next field.
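A minimal sketch of that loop, assuming the FieldNames table from the earlier step and a vFieldCount variable holding the number of fields:

```qlik
SET vCortexHeader = ;
FOR i = 0 TO $(vFieldCount) - 1
    LET vField = Peek('FieldName', $(i), 'FieldNames');
    IF $(i) = 0 THEN
        LET vCortexHeader = '$(vField)';                   // first field starts the header
    ELSE
        LET vCortexHeader = '$(vCortexHeader),$(vField)';  // append a comma, then the next field
    END IF
NEXT i
```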
Voila ... the header for my soon to be inline table of values.
At the beginning of this post I showed what the ResultData looked like as part of the overall Response table that was constructed using the simple JsonGet function. I've expanded it here so you can focus on just it ... notice that, other than some extraneous characters, it is literally in the format we need for an INLINE table.
If you take out the open/close square brackets "[" "]" and the double quotes ... the data is right there. We already know from previous posts how to use the JsonGet function to get row 1 (which is offset 0) and get row x ...
So, then it is just a matter of removing those double quotes and square brackets:
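PurgeChar() handles that in one call. For example, assuming a row's raw JSON lives in a vRawRow variable:

```qlik
// Strip every [ ] and " character from the raw data row.
LET vCleanRow = PurgeChar('$(vRawRow)', '[]"');
```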
Putting it all together we pretty much do what we did with the field names, except this time we added carriage return line feed characters before rows 2 through x:
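Sketched out, again assuming a vRowCount variable and the same ResultData paths as before:

```qlik
SET vDataValues = ;
FOR r = 0 TO $(vRowCount) - 1
    // Pull row r of the data array and strip the brackets and quotes.
    LET vRow = PurgeChar(JsonGet(Peek('ResultData', 0, 'Response'), '/table/result_set/data/' & $(r)), '[]"');
    IF $(r) = 0 THEN
        LET vDataValues = '$(vRow)';
    ELSE
        // Carriage return + line feed before rows 2 through x.
        LET vDataValues = '$(vDataValues)' & Chr(13) & Chr(10) & '$(vRow)';
    END IF
NEXT r
```

From there the inline load itself is just LOAD * INLINE [ $(vCortexHeader) $(vDataValues) ];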
And we ended up with a vDataValues variable that looks like this:
If you are anything like me you don't like to assume anything, and you are a visual or experiential learner. So, go ahead and download the attached WildDataFrontier.qvf, upload it to your environment, and open the load script.
{ Notice there are multiple sections. Each of them will pertain to a separate article and for this article the section named "7 - Structured Array" is the one you want to have at the top of your script. }
Edit line 49, which has 2 data values, to add a third data value ("7777777", "777"), and modify the numRows value from 2 to 3 like this:
'{"table": {"result_set": { "data": (("1307919","3221"), ("1617792","2690"), ("7777777", "777")), "resultSetMetaData": {"format": "jsonv2","numRows": 3, "rowType": ({"length": 0,"name": "SHERIFF_BADGE","nullable": true,"precision": 38,"scale": 0,"type": "fixed"},{"length": 0, "name": "TOTAL_BULLETS_USED", "nullable": false,"precision": 18, "scale": 0, "type": "fixed"} ) }, "title": "Top 2 Sheriffs for number of bullets used"} }}'
Since this solution involved a lot of variables, feel free to use the Debug mode and set breakpoints where you want them so that you can see the values as they are set. Or simply just reload the data after your changes and check out the values in each of the tables created so you can confirm what was done.
If you look closely the following image is very similar to the image above that I used for the Heterogeneous (Mixed) JSON Objects post. The difference is that instead of being multiple rows, it's a single JSON block which contains multiple rows as an array.
It also looks very close to the image I used for the JSON Arrays (Homogeneous Objects) post above that one. The difference is that in the case of that post, the array was easy to understand as just being multiple rows for the same entity. In this case, the "rows" in our array are for different entities.
Since the previous post was so long, and you definitely did the homework for both of the referenced posts I'm going to keep this one short. We are simply going to walk through how you can convert the array of data into a table of data that can be parsed in a flexible way.
If you haven't already done so for a previous post, go ahead and download the attached WildDataFrontier.qvf, upload it to your environment, and open the load script.
{ Notice there are multiple sections. Each of them will pertain to a separate article and for this article the section named "6 - JSON Array: Heterogenous Mixed Objects" is the one you want to have at the top of your script for this post. }
Same basic type of inline load that you've seen in previous posts. Unlike the previous post where the data was multiple rows for a table, I've returned to simulating a single JSON block return. You can see the pretty view of the data, as well as the single row view.
Feel free to scroll right down to Step 1 in the preceding load process. You will see that all I do is use the IterNo() function to iterate the array, and simply create a table like we started with in the previous post.
If you comment out Steps 2-4 in the load script so that Step 1 is the only active one, you can see the results in the preview tab.
Just like we did in the previous post, feel free to simply uncomment 1 Step at a time in the load process, until you return to having all 4 steps active. At that point if you take a look at the preview you will see that you have in fact parsed out all the values for all of the fields for every mixed entity JSON block in the array.
This post is part of a series aimed at ensuring you have the tools needed to Tame the Wild West Data Frontier, I mean Tame the Wild JSON Data Frontier you may be facing.
Posts:
As you look at the image above you will notice multiple JSON objects. In the previous post Parsing: Heterogeneous (Mixed) JSON Objects Fixed Manner we walked through how we could easily handle this type of data in a fixed way. All you need to do is know every single field:value pair that will ever come to you.
If you haven't already read that previous post be sure to do so and go through the practice. You need to understand that while easy to maintain for new fields that come across ... the thought of knowing every single field:value pair that will ever come to you is kind of daunting. Right?
I kind of slipped it in there but:
This post will walk you through the same data that we dealt with in the previous post, but will allow you to read all of the values for all fields that exist now, and those that might come riding in to town tomorrow.
If you haven't already done so for a previous post, go ahead and download the attached WildDataFrontier.qvf, upload it to your environment, and open the load script.
{ Notice there are multiple sections. Each of them will pertain to a separate article and for this article the section named "5 - Mixed JSON: Flexible Values" is the one you want to have at the top of your script for this post. }
Same basic data as the previous post, but I've also included the 4th row of data that I asked you to add as part of the practice. The flexibility that you need as a data sheriff in this JSON Wild Data Frontier is going to be handled by the incredible flexibility available to you in how Qlik Sense can load and transform that data.
In previous posts we've simply done a resident load to transform the data directly in 1 step using that ever so flexible JsonGet() function. In this post our transformation is going to do a few things that may be new to you. Be sure to click the links for each to get at least an introduction to them:
Once you have the application open and have moved section "5 - Mixed JSON: Flexible Values" to the top of the script go ahead and load the data. We are going to start by previewing the data for the finished product and then work backwards to understand how it was accomplished. If you look at the preview screen you will notice something odd: You have the Mixed_JSON table that was built by the inline load part of the script, then you have a whole bunch of other tables. One for each of the fields that is part of the data.
Go ahead and select the Information_Flexible.bounty table so you can preview what data is in that table.
If you walk each of the tables you will notice that each table contains the value for that given field. If you look at the Data Model Viewer you will see that every table is associated based on the row number. Which means in any chart that you visualize you will easily be able to present the values for all of the fields.
Add the following row of data to the inline load script, and then reload the data. Notice that it is brand new type and contains a value for a new field called bowtie_color, as well as values for two existing fields text and bounty.
5, '{ "type": "dork_mail", "bowtie_color": "Qlik Green", "text" : "Qlik Dork is coming to town", "bounty": 1000000 }'
What do you know? Our application is indeed flexible and it has created a new table called Information_Flexible.bowtie_color
Feel free to check out the values for it (as well as the values for the text and bounty tables) to see that indeed we have created structure from this unstructured heterogeneous JSON mess.
I tried my best to include comments in the load script which will hopefully aid in your learning. But I will admit sometimes it's hard to mentally walk through a script like this, even with comments, when there are multiple preceding loads. So, let's take this one step at a time and walk through what happens for each of the preceding load steps. To do this, highlight rows 20-43 (from the Generic Load line until just before Step 1), comment those rows out, and then reload the data.
If you look at the preview window now you will see that you have a single Information_Flexible table. As described in my comments, all step 1 did was to remove the { } characters out of the JSON block and we simply have some field:value pairs.
Now uncomment lines 41-43 and reload the data so that we are running the first preceding load statement.
If you look at the preview for the Information_Flexible table you will see something interesting. We have multiple rows in the table for each of the original rows. As the comments indicated, the SubField() function has done that. Each of our field:value pairs has been put into its own row:
This is getting fun. So let's keep going. Now uncomment rows 31-34 and reload the data again.
As you look at the preview again for the Information_Flexible table you will notice something cool. The field:value JSON pairs, have been turned into structured Field and Value fields.
If we could create a table for each of the different Field values and store those Value values in it, we would really be in business. And that's exactly what a Generic Load does.
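The heart of it is just the GENERIC prefix on an otherwise ordinary load. A sketch, assuming the structured (row, field, value) table from the previous step; the table names here are placeholders:

```qlik
// GENERIC LOAD splits the (key, attribute, value) rows into one table
// per distinct Field value, named Information_Flexible.<Field>.
Information_Flexible:
GENERIC LOAD
    RowNumber,
    Field,
    Value
Resident Information_Flexible_Pairs;
```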
Go ahead and uncomment lines 20-24 so that we can once again reload the data, and this time return to our starting point.
Like many things I have written in the past ... this solution isn't really all that complicated, but understanding how multiple Qlik Sense load transformations work together can be. Hopefully, you now feel confident that you have an example load script that you can use to flexibly parse out any nasty JSON Heterogeneous data that those varmints throw at you. Plus you realize how easy you can make it for yourself to simply comment out preceding load steps so that you can get a picture of what is occurring each and every step of the way. After all, not everyone documents their code so thoroughly. 🤠
This post is part of a series aimed at ensuring you have the tools needed to Tame the Wild West Data Frontier, I mean Tame the Wild JSON Data Frontier you may be facing.
Posts:
As you look at the image above you will notice multiple JSON objects. In previous articles we've dealt with single JSON objects. Now we need to deal with the fact that the file has multiple rows, or that we called an API that returns a streaming set of values rather than just a single response.
Like previous posts these JSON objects have field:value pairs that we can certainly parse out, but each record seems to be different from the previous. There is a type field, but the value for type seems to indicate a different entity type altogether.
In previous posts I tried to help you relate the unstructured JSON block to its SQL counterpart.
When I presented Parsing: Flat JSON - Field Value Pairs we discussed how a JSON structure was similar to a single row/record of a table.
When I presented Parsing: Nested JSON Objects we discussed how a nested JSON structure was similar to a select clause that joined data from multiple tables.
When I presented Parsing: JSON Array (Homogeneous Objects) we talked about how the array structure was similar to returning multiple rows from a table, rather than a single record.
In SQL terms this would be like reading a table and if the value said report, then you would join to a table that stored values for a report. If the value said wanted_poster, then you would join to a table that stored values for a wanted_poster. If the value said telegraph, then you would join to a table that stored values for a telegraph.
Wait, that's not really how SQL works is it? You don't have columns that might be a foreign key to any of 3 different (or more) other tables. Complete random thought here ... What if this kind of flexibility is exactly what makes JSON a great data exchange format?
Enough random brainstorming ... let's get back to parsing out this data.
Qlik Sense provides a very easy to understand function called JsonGet that we will use to get the values for the fields contained in this heterogeneous JSON data. It wouldn't matter to you if I asked you to tell me the text contained in the report, or if I asked you for the bounty amount on Billy the Byte's wanted_poster, or if I asked you to tell me the message contained in the telegraph. You would simply find the fields and tell me the value.
If you are anything like me I'm sure you worried/fretted/shook in your boots that the function would return an error if a value you asked for didn't exist. But stand tall data sheriff, because it will simply return null if you ask for a field in the JSON structure that doesn't exist.
Thus the easiest way for you to read all of this heterogeneous data is to simply call the JsonGet function for all of the field types from any of the entity types. It's that simple:
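In sketch form, using the western-themed fields from the image (the table and field names are illustrative):

```qlik
// One JsonGet() call per known field; fields that are absent in a
// given row simply come back null.
Information:
LOAD
    RowNumber,
    JsonGet(JSON_Block, '/type')    as type,
    JsonGet(JSON_Block, '/text')    as text,
    JsonGet(JSON_Block, '/bounty')  as bounty,
    JsonGet(JSON_Block, '/message') as message
Resident Mixed_JSON;
```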
If you are anything like me you don't like to assume anything, and you are a visual or experiential learner. So, go ahead and download the attached WildDataFrontier.qvf, upload it to your environment, and open the load script.
{ Notice there are multiple sections. Each of them will pertain to a separate article and for this article the section named "4 - Mixed JSON: Fixed Values" is the one you want to have at the top of your script for this post. }
Each section of the application begins with preparing the JSON object(s) we will be parsing. Previous posts loaded a single JSON block, but in this case we have a simple inline table with multiple rows. To help you more deeply understand this set of data, imagine that you are reading data from a source with multiple rows, or an API that keeps responding with results, and for each row you read you use the RowNo() function to create an ID for that row and store it in a field called RowNumber.
Before making any changes to the data be sure to load the data for this section and check out the preview. I want to ensure that you are confident in the fact that jail cells aren't going to blow up when you call the JsonGet() function for a field:value pair and that row of data doesn't exist.
Now go ahead and add a new row to the inline table:
4, '{ "type": "homestead", "coordinates": "Some Where", "amount" : 7777 }'
Before actually adding any new JsonGet() function calls in the script just go ahead and load the data so that you can see that it has no problem reading your new row and will display the homestead type in the preview window for you. Now go ahead and add the 2 lines of code you need to handle the 2 new fields for the homestead type: coordinates and amount.
This post is part of a series aimed at ensuring you have the tools needed to Tame the Wild West Data Frontier, I mean Tame the Wild JSON Data Frontier you may be facing.
Posts:
The image depicts what is called a JSON Array, and to be precise it's a JSON Array of Homogeneous Objects. I'm sure you are no greenhorn at this point in my series on parsing JSON. If I asked you to tell me the name of the owner of the third saloon in our town, I'm sure you would immediately reply "Miss Ada."
This post is going to focus on how you came to your answer. Guess what function Qlik Sense provides that will allow you to parse that answer out of this unstructured textual array just like you were able to do with your human mind?
If you guessed it was the same JsonGet() function covered in the previous posts on Parsing: Flat JSON - Field Value Pairs and Parsing: Nested JSON Objects you are 100% correct.
The how is where it gets interesting. In the post Parsing: Nested JSON Objects, I brought up the concept that we needed to qualify the path to our field:value pair and we worked through how accessing the name field for an entity called sheriff was accomplished by doing this:
I told you that the notation was essentially /entity/field. But you somehow managed to astound me with your ability to parse the opening image and provide the answer to my question about the third saloon. You were able to answer because you automatically, in your mind, altered that notation to be something like /entity/INDEX/field.
The essential part of this post is the fact that the INDEX value starts at an offset of 0. Meaning the first value has an index of 0.
Assume that the JSON structure from the image at the start of the post is inside a field called Array that is in a table called JSON_Array. The following code would pull out values from the array. Notice that we pull the name, owner and capacity field values for the first saloon. We only pull the name for the second saloon. The last line simply pulls the entire JSON block, the "row" itself for the final saloon in our array. Maybe it will deal with parsing it later. It's entirely possible that I purposely did that in this basic post to prepare for a much much deeper post in the future where that is in fact what I do. I pull out the block and deal with it in a different part of the code.
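That code looks roughly like this (field names per the saloon image; the table names are placeholders):

```qlik
Saloon_Values:
LOAD
    // All three values for the first saloon (index 0).
    JsonGet(Array, '/saloons/0/name')     as Saloon1_Name,
    JsonGet(Array, '/saloons/0/owner')    as Saloon1_Owner,
    JsonGet(Array, '/saloons/0/capacity') as Saloon1_Capacity,
    // Only the name for the second saloon.
    JsonGet(Array, '/saloons/1/name')     as Saloon2_Name,
    // The entire JSON block, the "row" itself, for the final saloon.
    JsonGet(Array, '/saloons/2')          as Saloon3_Block
Resident JSON_Array;
```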
If you are anything like me you don't like to assume anything, and you are a visual or experiential learner. So, go ahead and download the attached WildDataFrontier.qvf, upload it to your environment, and open the load script.
{ Notice there are multiple sections. Each of them will pertain to a separate article and for this article the section named "3 - JSON Array: Homogeneous Objects" is the one you want to have at the top of your script for this post. }
Each section will begin with a common pattern. I will show you the prettified version of the JSON structure. Meaning it's indented, and thus easy to read. Then I will remove the line feeds so that the JSON structure being covered is a single line that we deal with as a single string for transformation within Qlik Sense. That single text/block is what APIs will return, or what you will read out of textual fields that contain JSON blocks from RDBMS systems. In the other articles it was easy for me to simply create an inline table that contained that JSON block. But for an array we have a special issue to deal with ... those brackets that define the beginning and end of the array: "[" "]"
Those 2 characters are special because they define the beginning and end of our inline statement, and thus they cause a problem. Notice that I've simply duplicated the characters [[ and ]] in my line of code so that the inline table reads it in, and then I do a preceding load to pull the duplicate brackets right back out.
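Mirroring the trick the attached app uses, it looks something like this; the JSON content here is shortened for illustration and the names are placeholders:

```qlik
// The inline statement itself is bracket-quoted, so every [ and ] inside
// the JSON is doubled ( [[ and ]] ) on the way in. This preceding load
// then collapses the doubled brackets back to singles:
JSON_Array:
LOAD Replace(Replace(Array, '[[', '['), ']]', ']') as Array;
LOAD * INLINE [
Array
'{"saloons": [[ ... ]]}'
];
```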
Before you worry about making any changes, simply load the script as is and go to the Preview window and see the JSON_Array table. You will then see that our Array field is formatted as needed to be a proper JSON Array.
When I began the article I asked you to focus on the third saloon. Then I showed you the code needed to parse out and get the same response that you were able to come to mentally. 3 entries fit my example case for this post because ... I needed to parse out the array 3 different ways. I wanted all values for a "row", only 1 value for a "row", and just the JSON for the final "row."
But I don't want to leave you hanging there. Some rascally gunslinger is bound to head into your town with 5 rows. Or 10 rows. Or 20 rows. The last thing I want you to do is sit there and hand code a hard coded limit and then tell the good folks in your town that as Data Sheriff you will happily give them insights from the first 20 values. They just might run you out of town.
Never fear my friend, Qlik Sense provides a fantastic function call that will actually iterate through as many rows as needed. The function is called IterNo() and, as the help makes really clear, the only useful purpose for the IterNo() function is when it is used with a While condition. In other words ... you need a way to ensure we end the iteration.
So here is how we can utilize it to parse our JSON Array of Homogeneous Objects. Rather than just reading the resident JSON_Array like we did above when hardcoding the parsing, we simply take advantage of the IterNo() function and then combine it with another Qlik Sense function called IsJson().
LOAD IterNo() AS Saloon_Row,
     JsonGet(Array, '/saloons/' & (IterNo()-1)) AS Saloon_JSON
RESIDENT JSON_Array
WHILE IsJson(JsonGet(Array, '/saloons/' & (IterNo()-1)));
The IterNo() function will start at 1 and then keep iterating until you tell it to stop. Easy peasy.
The IsJson() function will say "Yep, you have a properly formatted JSON structure" or "Sorry partner, you drew a bad hand."
Thus ... as the IterNo() function iterates through we will be handling the following:
The first 3 iterations will succeed, but the 4th iteration will return nothing and thus end our while loop. As you will see in the script in the WildDataFrontier application, we can then simply refer to the iteration number (which remains constant for the row being processed) and pull out our values for every row that is returned in the array, whether it be 1, 10, 20, 100 or more.
Thus when we preview the flat/hardcoded method, the results are usable.
But when we preview this iterative approach, the results are usable today, tomorrow and next year. So, you can remain the Data Sheriff.
This post is part of a series aimed at ensuring you have the tools needed to Tame the Wild West Data Frontier, I mean Tame the Wild JSON Data Frontier you may be facing.
Posts:
Well howdy partner.
Back in that there big city all your data was nicely structured into tables and columns. Other than the chaos of a million requests your life as the data sheriff was pretty easy. You knew how to wrangle up any stray data with your "Selects" and your "Joins."
Like so many that hopped on a wagon train headed west for a gold rush, your company hopped on board a train to a modern data stack. Now here you are with all your outlaw JSON data coming at you like a rattlesnake.
The good news is that it's easy to read visually. At least it's easy to read when it's prettified.
But your problem is that your company needs you to translate it from this unstructured textual JSON data into structured values that they can gain insights from, and take action on. Just when you think you've figured out a pattern, you discover yet another JSON structure type.
Well partner, let's walk through the various types of JSON structures you will see together. I want to help ensure your JSON Data General Store is fully operational and that you know the patterns to look for when any new varmints come moseying along.
In this post I will walk you through various examples, provide you a sample Qlik Sense Application that contains all of the types, and then provide a linked article explaining each type in more detail that you can access.
The most basic type of JSON structure is a flat one that simply contains a set of field and value pairs. The { and } are the textual wrappers that basically signify what you could think of as a single row for an entity. In an RDBMS world, you would have column names and then values. In this JSON world you have field and value pairs. In this example image we have 4 fields/columns: "outlaw_name", "bounty", "status" and "last_seen_town". Notice that the values for the fields are separated using the ":" character. If the values are textual then, like the field names, they are wrapped in double quote characters, while numeric data isn't.
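For instance, a flat outlaw record might look like this when prettified (the bounty, status and town values here are illustrative placeholders, not necessarily the ones in the image):

```json
{
  "outlaw_name": "Billy the Byte",
  "bounty": 5000,
  "status": "at large",
  "last_seen_town": "Dry Gulch"
}
```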
A slightly more sophisticated JSON structure is one that includes a nesting, if you will, of the objects. Rather than simply using field:value pairs like above, information is organized around entities, and entities can include other entities. In other words, in an RDBMS world this would be multiple tables with primary key and foreign key relationships that would need to be joined.
You can distinguish an entity from a field based on the value. Notice that sheriff doesn't have a single value; it instead contains a block of JSON code. That block has 2 simple field:value pairs for name and badge_number, and it also includes an entity called office. Unlike SQL, where the office entity would exist on its own, in our JSON structure it is nested into the sheriff structure. While a minor distinction, the point itself is major: You don't have a sheriff entity and an office entity. You have a sheriff entity and a sheriff/office entity nested within the sheriff entity.
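A sketch of that nesting (only name, badge_number and office come from the discussion above; the office fields and all values are invented for illustration):

```json
{
  "sheriff": {
    "name": "Marshal Ada Lovelace",
    "badge_number": 1815,
    "office": {
      "town": "Dry Gulch",
      "deputies": 3
    }
  }
}
```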
Click here to access a post dedicated to helping you parse out Nested JSON Objects.
Like anyone panning for gold would tell you ... the only thing better than a gold nugget is finding more than 1. Any good bandit would tell you the only thing better than a town with 1 saloon, is a town with multiple saloons.
A JSON Array of (homogeneous objects) is very much like a table in an RDBMS. Instead of just having the field:value pairs for a single row, the array contains all of the rows. The rows for the entity are wrapped in brackets: "[" ... rows ... "]". In this image you can see we have a saloons entity and, rather than field:value pairs for a single saloon, we have an array of saloons and each row of the array contains the simple field:value pairs.
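For example, an array of homogeneous saloon objects might look like this (the names, the whiskey_price field and the values are placeholders of mine, not the ones in the image):

```json
{
  "saloons": [
    { "name": "The Rusty Spur",    "whiskey_price": 2 },
    { "name": "The Golden Nugget", "whiskey_price": 3 },
    { "name": "The Last Chance",   "whiskey_price": 1 }
  ]
}
```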
Click here to access a post dedicated to helping you parse out JSON Arrays of Homogeneous Objects.
Up to this point we have been working with a single JSON object. A single string. You either open a file and voila there is one string. Or you make an API call and it returns one string.
Now we will focus on heterogeneous JSON data that may be streaming in to you. Rather than opening a file and finding one string, you find multiple strings. You call an API and instead of returning 1 result string, it keeps sending you an unknown number of strings. What's worse is that you receive heterogeneous objects. The type field seems consistent, but each type seems to present a different entity: a report, a wanted_poster and a telegraph.
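As a sketch, each incoming string might be its own JSON object, with a consistent type field but an otherwise different shape per type (only the three type values come from the discussion above; every other field and value is illustrative):

```json
{ "type": "report",        "author": "Deputy Jones", "incidents": 4 }
{ "type": "wanted_poster", "outlaw_name": "Billy the Byte", "bounty": 5000 }
{ "type": "telegraph",     "from": "Dry Gulch", "message": "Stage robbed at noon" }
```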
Don't panic data sheriff, I've got a posse of posts targeted at helping you handle these mixed JSON objects. In the first post we will look at handling this data pretty much like we've handled the JSON structures until now. In the second post I show you a much more flexible approach so that you can handle whatever desperados happen to ride into town.
If you look closely the following image is very similar to the image above that I used for the Heterogeneous (Mixed) JSON Objects post. The difference is that instead of being multiple rows, it's a single JSON block which contains multiple rows as an array.
It also looks very similar to the image I used for the JSON Arrays (Homogeneous Objects) post above that one. The difference is that in the case of that post, the array was easy to understand as just being multiple rows for the same entity. In this case, each of the "rows" in our array is for a different entity.
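Roughly, it's the same mixed objects as before, but wrapped as a single JSON block containing one array (again, only the type values come from the posts above; the wrapping entity name and the other fields are my own illustration):

```json
{
  "events": [
    { "type": "report",        "author": "Deputy Jones", "incidents": 4 },
    { "type": "wanted_poster", "outlaw_name": "Billy the Byte", "bounty": 5000 },
    { "type": "telegraph",     "from": "Dry Gulch", "message": "Stage robbed at noon" }
  ]
}
```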
The good news for you is that you've already done the work to handle both arrays and heterogeneous data. That is assuming you've done the homework in both of those posts. The post I'm writing for handling this type of data will be brief. All I need to do is help you understand how to combine the two algorithms.
Click here to access a post dedicated to helping you parse out Heterogeneous JSON Arrays.
Just when you thought you had all those JSON array critters rounded up ... another one appears while you were in the saloon celebrating. While there is no formal name for this JSON Version 2 array, "structured" seems to make sense. Notice that it is very much like SQL in that the column headers and the data are sent separately, which reduces the data redundancy when transmitting lots and lots and lots of rows of data.
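A sketch of that structured shape, with the column headers sent once and each row as a bare array of values (the entity names, field names and values here are assumptions of mine):

```json
{
  "columns": ["outlaw_name", "bounty", "status"],
  "rows": [
    ["Billy the Byte", 5000, "at large"],
    ["Calamity Query", 3000, "captured"]
  ]
}
```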
This post is the starting post for a series aimed at ensuring you have the tools needed to Tame the Wild West Data Frontier, I mean Tame the Wild JSON Data Frontier you may be facing.
Posts:
The image depicts what is called a nested JSON structure containing a variety of both entities and field and value pairs. If I asked you what the sheriff's name was, you would instinctively say "Marshal Ada Lovelace."
Unlike the previous post, Parsing: Flat JSON - Field Value Pairs, this structure has more than just fields and values: it has entity structure. The name field isn't a standalone name; it's part of the sheriff entity. The sheriff's office isn't just a field ... it's a nested entity that includes multiple field:value pairs.
In the previous post I referred to above, I introduced you to a function called JsonGet. It allows you to give it a JSON block and ask it for the value of a specified field. Your likely question is "How do I ask it for a field that might be in a nested entity of another entity?"
The answer is by simply providing the qualified path to the field name. Just as I asked you what the "sheriff's name" was, you would call the JsonGet() function and ask it for the /sheriff/name field. Assume that the JSON structure from the first image were loaded into a field called Nested_JSON_Block in a table called Nested. The following code would load all 4 of these unstructured data values into structured fields.
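A sketch of that load (the two office field names, town and deputies, and the output field names are assumptions on my part — swap in whatever the image actually shows):

```
Nested_Parsed:
LOAD
    JsonGet(Nested_JSON_Block, '/sheriff/name')            AS Sheriff_Name,
    JsonGet(Nested_JSON_Block, '/sheriff/badge_number')    AS Badge_Number,
    JsonGet(Nested_JSON_Block, '/sheriff/office/town')     AS Office_Town,
    JsonGet(Nested_JSON_Block, '/sheriff/office/deputies') AS Office_Deputies
RESIDENT Nested;
```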
If you are familiar with SQL then this notion of qualifying the field name is nothing new to you. If you issue a Select statement from a single table all you need to do is list the field names in the select clause.
Select FieldA, FieldB, FieldC
From Table1
But when you join tables, especially if you have a field name in more than 1 table, you have no choice but to qualify the path for the field
Select Table1.FieldA, Table1.FieldB, Table2.FieldC
From Table1 ... blah blah blah join Table2
Instead of the table.field notation of SQL, the JsonGet() function within Qlik Sense simply needs the path in /entity/field notation. If you have nests of nests of nests (it is the wild west of JSON data you might be pulling, after all), that notation simply continues: /entity1/entity2/entity3/entity4/field.
If you are anything like me, you don't like to assume anything, and you are a visual or experiential learner. So go ahead and download the attached WildDataFrontier.qvf, upload it to your environment and open the load script.
{ Notice there are multiple sections. Each of them will pertain to a separate article and for this article the section named "2 - Nested JSON: Nested Objects" is the one you want to have at the top of your script for this post. }
Each section will begin with a common pattern. I will show you the prettified version of the JSON structure, meaning it's indented and thus easy to read. Then I will remove the line feeds so that the JSON structure being covered is a single line that we deal with as a single string for transformation within Qlik Sense. That single text block is what APIs will return, or what you will read out of textual fields that contain JSON blocks from RDBMS systems.
For your learning, simply modify any/all of the field names, modify the JsonGet transformations and then load the data. Try adding:
As you go through your changes, the easiest thing to do is use the Preview at the bottom of the load editor.
Hint
As you try to make your changes, if editing line 19 is too confusing and you struggle with the nesting, feel free to make your changes instead in the area that is commented out between lines 4-11. Then copy that prettified block of code and change line 19 to only have the opening and closing single quote marks. Paste your copied block after the first single quote and start removing the line feeds. That's how I built the code for you, so no shame in taking the easy route. 😁
This post is part of a series aimed at ensuring you have the tools needed to Tame the Wild West Data Frontier, I mean Tame the Wild JSON Data Frontier you may be facing.
Posts:
The image depicts the simplest of JSON structures, a flat list of field and value pairs. Your eyes can easily parse the data, so within a second if I asked you what the outlaw's name was, you would respond with "Billy the Byte." Because instinctively, and without instruction from me, you determined that "outlaw_name" and "Billy the Byte" went together as a PAIR.
Qlik Sense provides a very easy to understand function called JsonGet that does exactly the same thing that your eyes did. You simply hand it the block your JSON is in, tell it the field name you want to parse out of the structure, and it "gets" the value that is paired with the field name you gave it.
Let's assume that the JSON structure is loaded into a field called JSON_Block in a table called Flat. The following code would parse this unstructured textual JSON block into structured field values that you can utilize like you do any of the other 93,302,203 disparate sources you utilize in Qlik Sense every day.
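A minimal sketch of that parse, using the four field names from the flat outlaw example (the output field names are my own):

```
Flat_Parsed:
LOAD
    JsonGet(JSON_Block, '/outlaw_name')    AS Outlaw_Name,
    JsonGet(JSON_Block, '/bounty')         AS Bounty,
    JsonGet(JSON_Block, '/status')         AS Status,
    JsonGet(JSON_Block, '/last_seen_town') AS Last_Seen_Town
RESIDENT Flat;
```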
If you are anything like me, you don't like to assume anything, and you are a visual or experiential learner. So go ahead and download the attached WildDataFrontier.qvf, upload it to your environment and open the load script.
{ Notice there are multiple sections. Each of them will pertain to a separate article and for this article the section named "1 - Flat JSON: Key-Value Pairs" is the one you want to have at the top of your script. }
Each section will begin with a common pattern. I will show you the prettified version of the JSON structure, meaning it's indented and thus easy to read. Then I will remove the line feeds so that the JSON structure being covered is a single line that we deal with as a single string for transformation within Qlik Sense. That single text block is what APIs will return, or what you will read out of textual fields that contain JSON blocks from RDBMS systems.
For your learning, simply modify any/all of the field names, modify the JsonGet transformations and then load the data.
As you go through your changes, the easiest thing to do is use the Preview at the bottom of the load editor.
That way you can see what the flat JSON structure/text block looks like:
As well as seeing how the Information is parsed with that simple JsonGet statement:
This post is part of a series aimed at ensuring you have the tools needed to Tame the Wild West Data Frontier, I mean Tame the Wild JSON Data Frontier you may be facing.
Posts:
When connecting to a SharePoint site and selecting Data, an error occurs directly on Open.
Adding the Path, or at least the initial Folder (like Shared Documents for the default Document Storage), will solve the issue.
Once added you can browse and select Files you have access to.
The SharePoint File Connector will always initially open the root folder by design, causing the access error.
A small demo with a link showing how the layout container can be used.
A few buttons
A treemap and images can transform your static dashboard into an interactive report.
In the attached link you will find a qvf with all the necessary information.
Link: LC Example