THE SITUATION: You've proven yourself with unstructured data across Field Training Modules 1-3. You've mastered knowledge bases, advanced chunking, and contextual intelligence. Now it's time to shift gears. The real power of Q Division emerges when you combine unstructured documentation WITH structured data that resides in live applications.
YOUR MISSION: Get access to the Q Division Application - the actual case tracking system that our operatives use in the field. Upload it to your environment, make it available to Qlik Answers, and ask questions about the structured data within. You'll meet two NEW agents: the Semantic Agent and the Data Analyst Agent.
THE MYSTERY: Figure out exactly what the "Active Case Percentage" KPI is showing.
DELIVERABLE: A fully indexed Qlik application connected to Qlik Answers, with the ability to ask natural language questions about structured data and receive detailed explanations.
PREREQUISITES: ⚠️ Complete Modules 1-3 first to understand knowledge bases and unstructured data. This module introduces structured data as a contrast and foundation for Module 200 where we'll combine BOTH!
What You'll Need:
Download Mission Pack: 📥 Q Division Application attached
Video Intelligence Briefing: 🎥 Watch the Full Mission Walkthrough
Navigate to your Q Division Field Academy space in your Qlik hub.
Click to upload a file and select the QDivision.qvf file that you downloaded from the community page.
The upload process should take just a few seconds. Once complete, you'll see the Q Division application appear in your space.
Click to open the application - let's take a quick tour before we make it available to Qlik Answers.
⚠️ FIELD NOTE: This is the KEY step that connects structured data to AI!
Here's something you need to know about ANY of your apps - whether it's this Q Division application we just uploaded, or one that you already have on your site:
You must explicitly enable Qlik Answers access in the application settings.
Why? Not all of your applications should be made available. When you're doing development on things, you don't want to confuse users with 18 versions of the same application. Governance matters!
Here's how to enable it:
Watch what happens:
You'll notice a message appear saying "Indexing this application for you..."
This is similar to knowledge base indexing, but instead of parsing PDFs, it's analyzing:
Once it's done, you'll see a message: "Indexing complete"
The toggle will flip all the way and turn green ✓
You're ready to go!
We can start jumping into Answers, but let's hold off on pressing that button just for a second. Let's go to the dashboard - because you know the dashboard is the place to be!
Take a look at this very well-thought-out application:
What you'll see:
🤔 WAIT A MINUTE...
I don't know about you, but in 007 Dork's mind, that should be 31. What in the world? That doesn't add up!
I wish we had the original programmer to understand these values and what we might be missing...
🎯 SPOILER ALERT: Don't worry, operatives! In Field Mission 200, we're going to take a look at this because that is obviously a problem. And what we're gonna do in Mission 200 is look at questioning structured AND unstructured data together. If you'll recall from Module 2, I set this up for this very scenario when we talked about case status values! wink wink
For now, let's move on to the next question...
Go to the screen labeled "Field Mission 100" in the application.
You'll see it tells us that our Active Case Percentage is 16%.
Now, wouldn't it be nice if users could just... ask a question and get that KPI defined for them, beyond just the brief description in the Master Item, right?
Well, guess what? THEY CAN!
From within the application, click to open Qlik Answers
I'm going to paste this question right in here (because you've seen my typing, and we don't have all day):
"Can you describe for me in detail what the Active Case Percentage value is?"
Now watch the magic happen...
Not surprisingly, the Answer Agent is the first one to show up. Whether it's structured data or unstructured data, the Answer Agent is ALWAYS gonna be first, trying to figure out what you're looking to do.
But now watch as it goes through different processes behind the scenes...
The Flow:
1️⃣ Answer Agent (Orchestrator)
2️⃣ Semantic Agent ← NEW AGENT ALERT!
3️⃣ Data Analyst Agent ← ANOTHER NEW AGENT!
4️⃣ Answer Agent (Response Synthesis)
Look at what the AI returns:
"Active Case Percentage is a key performance indicator in our Q Division data. It shows you the percentage of cases that are currently Open or In Progress."
Notice what it did:
Business Context: "This metric helps Q Division management understand workload distribution and resource allocation needs."
Interpretation Guidelines:
Related Metrics You Might Want to See:
🤯 MIND. BLOWN.
The AI didn't just tell you what the KPI is - it gave you HOW to interpret it, WHAT benchmarks to consider, and WHAT other metrics complement it!
Click to expand "Show reasoning" or "Show details" to see the full agent workflow.
You'll see the complete conversation:
This transparency is CRITICAL for:
What You've Accomplished:
Validation Check: Can you ask your Q Division application questions about KPIs and get back detailed explanations including calculations, business context, and interpretation guidelines? If yes, you've mastered structured data querying! 🎯
The Mystery Remains: We still haven't solved why 15 + 16 = 50 instead of 31. And THAT, operatives, is where Module 200 comes in...
Challenge Exercise (Optional): Ask other questions about the Q Division application:
See how the Semantic Agent and Data Analyst Agent work together to provide comprehensive answers. Don't be surprised if a new Q Division agent assists!
You've successfully shifted gears from unstructured to structured data. You've met two new agents who specialize in understanding your applications' vocabulary and interpreting what your KPIs mean in business context.
Questions? Feedback? Are you becoming more comfortable with the Agentic experience of agents working together?
Welcome to Q Division Headquarters, Operative.
Behind these doors lies the future of AI-powered analytics. Qlik Answers isn't just another tool—it's your answer intelligence platform that lets anyone ask natural language questions of their unstructured data, their structured data, or both working together in perfect coordination.
Your mission, should you choose to accept it: Master the Q Division agent swarm architecture, earn your Field Operative certification, and deploy answer intelligence across your organization.
Inside, you'll meet our specialist agents, complete hands-on training exercises, watch live mission playbacks, and prove your tactical intelligence through the Agent Recognition Protocol.
By the time you exit these doors, you won't just understand Qlik Answers—you'll be ready to implement your first deployments (or guide others through theirs) with confidence.
The briefing room awaits. Enter when ready, Operative.
Agent Roster
Q Division operates on what the intelligence community calls a "swarm architecture" – the industry gold standard for AI agent collaboration. Instead of relying on a single agent to handle every mission, we've assembled a specialized team where each agent excels at their specific domain. When you ask a question, our system intelligently identifies which agents have the expertise needed and orchestrates a precision handoff sequence to deliver the most accurate answer.
Think of it like a real intelligence agency: you wouldn't send the same operative to handle cryptography, field reconnaissance, AND financial analysis – you'd send specialists who work together, each completing their part of the mission before passing critical intelligence to the next agent. That's exactly what Qlik Answers does, ensuring you get enterprise-grade accuracy through expert collaboration. Meet the agents who'll be working your missions:
Operation: Swarm Intelligence - Agent Dossiers ►
Welcome to active duty, Operative. In this section, you'll receive the same intelligence assets that Q Division uses in live operations: a fully configured Qlik Answers application, pre-loaded knowledge bases, and the Answer Assistant framework that orchestrates our agent swarm. This isn't a simulation – these are production-grade materials that you'll download, deploy, and interrogate with real questions. You'll see firsthand how questions flow through the agent network, learn to craft queries that leverage each agent's expertise, and build the muscle memory needed to guide others through their first Qlik Answers deployment. By the end of these exercises, you won't just understand the theory – you'll have hands-on experience running actual missions.
Unstructured Data
🎯 Q Division Field Training: Module 2 - Application Documentation - Building another Knowledge Base and enhancing your Assistant with the additional knowledge.
🎯 Q Division Field Training: Module 3 - Expense Statements - Building an enterprise grade Knowledge Base with Advanced Chunking and enhancing your Assistant with the additional knowledge
Structured Data
🎯 Q Division Field Training: Module 100 - Uploading Q Division Operations Application - Asking Answers
Unstructured + Structured
Field Operative, it's time to see the agents in action. In this section, you'll watch live mission recordings where real questions trigger the full agent swarm workflow. On one side of your screen, you'll see the complete Qlik Answers solution being constructed in real-time. On the other, you'll see which Answer Agent is currently on mission – giving you a visual understanding of who does what, when they're called into action, and how they hand off intelligence to the next specialist in the sequence. Here's where it gets powerful: since you already have the same materials from Field Training, you can run these exact same questions in your own Qlik Answers environment and watch your agents work the mission alongside mine. These aren't just recordings to watch passively – they're your playbook for getting comfortable with the agent workflow before you guide others through it.
Final assessment, Field Operative. Before you earn your Clearance Level certification, you need to prove you can recognize which agents handle which intelligence requests. We'll present you with real-world questions – the kind partners and customers will actually ask – and you'll identify which Answer Agent(s) will be deployed on the mission. This isn't about memorizing definitions; it's about developing the tactical instinct to know instantly: "That's a Data Agent question," or "This one needs both Knowledge and Visualization working together." Pass the Agent Recognition Protocol, and you'll have earned more than a certification – you'll have the operational confidence to guide anyone through their first Qlik Answers deployment.
The name's Dork. 007 Dork. They say you're only as good as your questions. Well, lucky for you, I never miss.
THE SITUATION: Missions one and two involved rather static data. Today we're dealing with something much more near real-time: expense statements. After each Q Division mission closes, travel and expense statements get generated and logged. But something caught my eye while walking through accounting... a 4TB external hard drive and an encrypted USB drive on an agent's expense report. That seems a little "sus" to me, operatives. Double agent? Corporate espionage? Or legitimate operational expense?
YOUR MISSION: Build a knowledge base around agent expense statements using enterprise storage connections (like Amazon S3 buckets where files are constantly being inserted), enable advanced accuracy for complex multi-page tables, add this intelligence to your Field Training Assistant, and then test a theory about whether we've got a rogue agent on our hands.
DELIVERABLE: An enhanced assistant with three knowledge bases that can perform forensic accounting analysis across complex expense documents, understanding context that spans multiple pages and document sections.
PREREQUISITES: ⚠️ You must complete Modules 1 & 2 first! You'll need your existing Field Training Assistant with Agent Information and Application Design knowledge bases already configured.
What You'll Need:
Download Mission Pack: 📥 QDivision_Field_Expense_Reports.zip attached
Video Intelligence Briefing: 🎥 Watch the Full Mission Walkthrough
Navigate to your Answers section from your hub (not your applications - we don't want to see all that other junk cluttering our intelligence operations).
Click to create a new knowledge base and name it: "Agent Expense Statements"
Normally you'd add a description here, but we're field operatives on a mission, so let's keep moving!
⚠️ FIELD NOTE: This is NEW and IMPORTANT!
Before adding files, toggle on the "Enhanced Accuracy" flag.
Why? These expense statements contain:
Enhanced accuracy uses advanced chunking to handle these complex document structures. It takes a bit longer to process, but when you see what's in these expense statements, you'll absolutely understand why we need it.
The Trade-off:
For expense forensics, we need enhanced accuracy. Period.
Here's where it gets interesting. I'm going to show you TWO approaches:
Click "Add from connection" and select your space.
Choose your connection - in my case, "Q Division Expense Statements" which points to an Amazon S3 bucket.
The Power of Connections: When looking at enterprise file storage connectors, you can set up filters:
The Magic: As new expense statements get dropped into your S3 bucket, they can be indexed without manual uploads.
You can either:
In this scenario, the files stay in S3 - they're not copied to your Qlik tenant. You're indexing references to enterprise storage.
I would now select each expense statement from my bucket and add them to my knowledge base.
But wait... my poor field operatives out there don't have access to MY S3 bucket, and I'm not giving you my secret key! That's classified information!
So here's what we're ACTUALLY going to do:
Click "Add files" and choose "Browse" to upload files directly.
Navigate to your unzipped mission pack containing the Q Division expense statement PDFs.
Select ALL 15 expense statement files and upload them.
You'll see them loading into your knowledge base.
I'm not even going to suggest it because you're going to wag your finger at me...
These are NOT indexed yet. You are NOT ready to use these until you've indexed them!
Click "Index All"
Now, because we turned on Enhanced Accuracy, this is going to take longer than our previous modules. Don't panic!
What's happening behind the scenes:
Watch the progress. You'll start seeing files complete their indexing. Keep scrolling to monitor status.
Wait for completion: You should see 39 pages across 15 different documents indexed.
Refresh and verify: "Index Status: Completed" with a recent timestamp (if it says "5 weeks ago" when you come back in 5 weeks, we've got problems!).
We're NOT creating another assistant. We want to tie ALL this intelligence together!
Navigate back to the Answers catalog. With all those files and knowledge bases accumulating, use the filter to show "Assistants and Knowledge Bases only" so you can find what you need.
Open your "Field Training Assistant" (the one you created in Module 1 and enhanced in Module 2).
Click to add content, then select "Add a knowledge base".
Filter to your "Q Division Field Academy" space.
Select "Agent Expense Statements" - notice it's the only one NOT grayed out (the others are already connected).
Click to add it.
Boom. Your assistant is now ready. Everything is indexed. Your agent has three knowledge bases:
This is business, operatives!
Expand your assistant chat interface. I'm going to paste this question because you've seen my typing in other modules - it can be pretty bad:
"I reviewed the expense statement in the knowledge base for Case 103, and it seems suspicious to me that the agent purchased hard drives and a USB. Does that raise any red flags with you? Are they understandable?"
Before we see the AI's response, let me show you what I saw...
Scroll through the expense statements for Case 103. You'll find on November 16th:
I don't know why an agent who's out in the field getting wined and dined and meeting with clients is buying hard drives! That raises a red flag to me. If I were a human auditor reading this, that seems a little flaky!
Now watch what happens...
You already guessed it - yes, the Answer Agent is on the job right away:
The Knowledge Base Agent gets involved next:
I came in here assuming this agent was up to no good. I happened to read an expense statement. I think something's flaky. This is crazy!
But here's what the AI comes back with:
"The hard drive and USB are legitimate operational expenses. They don't raise any red flags. Let me give you the context: The case involved Operation False Precision, a data center investigation. These purchases are justified given that they were conducting forensic analysis of an ETL pipeline and code. All expenses comply with operational requirements."
WAIT, WHAT?!
When I jumped in to show you those suspicious expense line items, I didn't show you the mission notes at the top of the expense statement that documented the investigation activities and timeline!
Let me be crystal clear about what just happened, operatives:
This isn't just a search-and-find operation.
The other modules had super easy questions. I want you to understand the logic going on here - the collective wisdom of the world that's in that large language model that Qlik Answers is sitting on top of.
Here I am trying to do forensic accounting. I get the wisdom of the world saying:
"Whoa, whoa, whoa, 007 Dork! You've got binoculars on and you are FOCUSED on that expense line, and that is NOT what you need to see. You need to see the BIG PICTURE of what was going on!"
The AI was able to interpret from both contexts in a knowledge graph - these things are related:
It connected the dots across multiple pages and document sections.
Operatives, you gotta be loving that! If you're not ready to dig in even deeper, I don't know what's gonna get you excited about Qlik Answers!
What You've Accomplished:
The Big Lesson: As a young Dork, I found that my focus could be very narrow. I would see one piece of information and jump to conclusions. If there's one thing I've learned here in Q Division, it's that the real story usually involves the ability to see a much larger context - one that's larger than even my Dork brain can handle.
Asking Qlik Answers is my way of ensuring that all the elements are being accounted for, and that in conjunction with the collective wisdom of the world in that large language model, the answers to my questions make me look a whole lot smarter.
Your Mission (Should You Choose to Accept It): Ask ONE better question today.
Next Mission: Module 4 will introduce data connections to live Qlik applications, combining structured data WITH all this unstructured intelligence. The Q Division Operation Data application will finally be revealed!
Challenge Exercise (Optional): See how far we can push this concept. Ask your assistant the following question: "The agent from Case 1006 returned from the mission acting a little sus, and an English-Russian dictionary slipped out of his attaché case. Please conduct a comprehensive counter-terrorism review of the expenses for case 1006 and please flag anything suspicious or out of scope for the mission, the target or the environment."
Why Use Storage Connections Instead of Direct Upload?
How Context Understanding Actually Works
When I asked about the suspicious hard drive purchase, here's what happened technically:
Step 1: Query Processing
Step 2: Vector Search
Step 3: Knowledge Graph Assembly
Step 4: Collective Wisdom Application
Step 5: Response Generation
This is NOT:
This IS:
The Difference Between Search and Intelligence
Traditional Search Would Return:
Qlik Answers Returns:
Real-World Application: This is the difference between:
Use Cases Where This Matters:
You've graduated from simple document retrieval to contextual intelligence that understands relationships across complex multi-page documents. You've seen firsthand how AI can provide the larger context that even a focused analyst might miss.
Your Field Training Assistant now has three knowledge bases working in harmony. It can answer questions about agents, application design, AND financial operations - all with citations and contextual understanding.
Remember: The most dangerous weapon in your arsenal isn't a golden gun - it's a golden question! And sometimes, the best answer is the one that shows you what you DIDN'T know to ask about.
Dork 007 Dork, signing off. Keep your chunking advanced and your context windows wide.
Questions? Feedback? What did the challenge question yield for you in terms of results and which agent do we need to investigate if Qlik Answers confirmed anything suspicious? 👎 Use the feedback button or share your forensic accounting stories
#QlikAnswers #QlikSense #DataAnalytics #BusinessIntelligence #AIAssistant #RAG #QlikDork #QDivision #007Dork #AdvancedChunking #EnhancedAccuracy #ForensicAccounting #ContextualAI #SemanticSearch #DocumentIntelligence #AIReasoning #KnowledgeGraph #AskBetterQuestions
The name's Dork. 007 Dork, and I have a license to question.
THE SITUATION: A customer has experienced turnover in their development department. There's yelling. Lots of yelling. "WHAT DO THESE VALUES MEAN?!" echoes through the halls. Dashboard fields are a mystery. Status codes are hieroglyphics. Chaos reigns.
YOUR MISSION: Add unstructured application design documentation to the Qlik catalog, create a new knowledge base from those existing files, and enhance your Field Training Assistant with this intelligence. Then query the original programming team's documentation to understand what those mysterious values actually mean.
DELIVERABLE: An enhanced assistant that can answer questions about both agent information AND application design documentation, demonstrating how knowledge bases can share files and grow in power over time.
PREREQUISITES: ⚠️ You must complete Module 1 first! You'll need your existing Field Training Assistant from that mission.
What You'll Need:
Download Mission Pack: 📥 ZIP file attached
Video Intelligence Briefing: 🎥 Watch the Full Mission Walkthrough
This time, we're taking a different approach. Instead of uploading files directly to a knowledge base, we're adding them to the Qlik catalog first. Why? Because catalog files can be shared across multiple knowledge bases and assistants - upload once, use everywhere!
Navigate to your Qlik hub and locate the file upload area.
⚠️ CRITICAL: At the bottom of the upload dialog, verify the space selector is set to "Q Division Field Academy" - do NOT let it default to your personal space!
Unzip your mission pack and drag and drop both PDFs into the upload area:
Watch as they load into your catalog. You now have 2 additional files in your Q Division Field Academy space.
Click to create a new knowledge base and name it: "Q Division Application Design"
This knowledge base will contain information about how your Q Division application (which you'll build in a future module) was originally designed by the programming team.
Here's where it gets interesting. In Module 1, we walked through browsing and uploading files directly. But now those files already exist in your catalog!
Instead of the "Upload files" option, choose "Use from catalog".
Filter to show files from "Q Division Field Academy" space.
Select both documents:
Click to add them to your knowledge base.
If you're wagging your finger at the screen right now saying "But 007 Dork, you forgot to index!" - EXCELLENT! You're learning!
Navigate to your new "Q Division Application Design" knowledge base and check the index status. It will show "Never been indexed."
Click "Index All" and wait for completion. Each document should index quickly.
Refresh and verify: "Index Status: Completed" ✓
Here's the power move. We're NOT creating a new assistant. We're making your existing assistant smarter by adding a second knowledge base to it.
Navigate to the Answers section and find your assistants.
Pro tip: If you're lost in a sea of documents, use the filter at the top to show "Assistants only" - you'll recognize them by their distinctive icons.
Open your "Field Training Assistant" that you created in Module 1.
You'll see it already has access to the "Agent Information" knowledge base (it's even grayed out to show it's already connected).
Now click to add another knowledge base. Filter to "Q Division Field Academy" space.
Select "Q Division Application Design" and add it.
Boom. Your assistant now has access to TWO knowledge bases. This is how assistants grow in power over time!
Time to test your enhanced intelligence network. Expand the assistant chat interface and ask:
"Can you help me understand how agent case status is tracked and what the values mean?"
Watch the reasoning panel (because we're operatives, not civilians):
You should receive information about the case status phases:
Pay special attention to this distinction: "Resolved" means the agent's work is done, but the case is NOT physically closed yet because we're waiting on client verification.
🎯 FIELD NOTE FOR FUTURE MISSIONS: Remember this distinction between Resolved and Closed! In an advanced training module, we're going to revisit this question when building applications. If you're not aware of this difference, your case number counts won't add up correctly, and you'll be scratching your head wondering why. Hint, hint, wink, wink!
Click on the citations to see exactly where in the documentation this information came from. Jump directly to the source documents if you want to read more context.
What You've Accomplished:
Validation Check: Can you ask your Field Training Assistant questions about BOTH agent information (Module 1) AND application design (Module 2)? If yes, your assistant is now multi-talented! 🎯
The Big Picture: You've just learned how to scale your Q Division intelligence network. As you find more sources of unstructured data - design docs, meeting transcripts, training videos, data dictionaries, programmer notes - you can add them as new knowledge bases. Your assistants grow smarter over time without starting from scratch.
Challenge Exercise (Optional): One of the documents now available to you in the assistant describes the application. Find out how much you can learn about the application data model and master items without reading the PDF. You will soon earn the privilege of accessing the Q Division operational data application.
Why Upload to Catalog vs. Direct to Knowledge Base?
Catalog Upload Benefits:
You've successfully enhanced your Field Training Assistant with application design intelligence. Over time, your assistants can grow in power as you find more and more sources of unstructured data and create more and more knowledge bases.
Your assistant started with only the ability to answer questions about agents themselves. Now it also supports questions about application design from the original programming staff - an application that will be revealed to you soon, operatives. But let's face it: you have more training to do before you're entrusted with Q Division Operation Data.
Remember: The most dangerous weapon in your arsenal isn't a golden gun - it's a golden question! 🎯
Dork 007 Dork, signing off. Keep your documentation indexed and your queries semantic.
Questions? Feedback? In this AI powered world with meeting transcripts automatically generated, what do you think of this idea of a knowledge base that stores them? Have you had situations in the past where a developer had left and you really could have used access to their original documentation? 👎
Operatives, this is Dork, 007 Dork reporting from Q Division headquarters. Unlike my data, I prefer my Mountain Dew shaken, not stirred.
THE SITUATION: The dreaded Dashboard Disruption Monkey Gang is on the loose again, and we need to find which agent has experience dealing with them. Fast.
YOUR MISSION: Create your first Qlik Answers knowledge base to track Q Division agent dossiers, then build an assistant that can query this intelligence on demand.
DELIVERABLE: A fully functional knowledge base containing agent information with an assistant capable of answering questions about your operatives.
What You'll Need:
Download Mission Pack: 📥 You will find the QlikAnswers_SwarmAgents_Dossiers.zip attached
Video Intelligence Briefing: 🎥 Watch the Full Mission Walkthrough
First, let's set up your training environment. Navigate to your Qlik hub and create a space called "Q Division Field Academy" (or use an existing space - this is your headquarters for all future training missions).
Once in your space, navigate to the Answers section in the hub where you'll see options for assistants, knowledge bases, data connections, and file uploads.
Click to create a new knowledge base and name it: "Agent Information"
Pro tip: Normally you'd add a detailed description here. For field training, we're moving fast, but in production you'd want to document what this knowledge base contains and its intended use.
You'll see three options for data sources:
Unzip the mission pack you downloaded and you'll find 7 agent dossier PDFs.
Simply drag and drop all 7 files into the upload area. You should see all seven appear in your upload queue.
Click "Upload" and watch as Q Division's finest get cataloged into your system.
⚠️ FIELD NOTE: Here's where rookies often get tripped up!
After upload, check the Index Status. It will say "Never been indexed" - this means the RAG (Retrieval-Augmented Generation) system hasn't parsed your documents yet. You cannot query unindexed data.
Click "Index All" and switch to the flat view to watch the progress. With only a handful of pages per dossier, this should complete in seconds.
Refresh your screen. When you see "Index Status: Completed" with a timestamp, your intelligence is ready for deployment.
Now let's build an assistant that can query this knowledge base.
Click to create a new assistant and name it: "Field Training Assistant"
Add your Agent Information Knowledge Base.
We'll cover conversation starters in a future module - these are pre-written prompts that help users know what questions to ask.
Time to validate your setup. Ask your assistant:
"Which agent has interacted with the dreaded Dashboard Disruption Monkeys?"
Watch the reasoning panel (this is where the magic happens):
Click on the citation link and it will jump directly to the source document, highlighting exactly where that information was found in the dossier.
What You've Accomplished:
Validation Check: Can you ask your assistant "Which agent dealt with the Dashboard Disruption Monkeys?" and get back "Assembler Agent" with a citation? If yes, mission accomplished! 🎯
Challenge Exercise (Optional): Try asking other questions about your agents. What skills do they have? What operations have they completed? Test the limits of what your knowledge base knows!
What's happening with RAG and indexing?
When you upload PDFs to a knowledge base, Qlik Answers uses RAG (Retrieval-Augmented Generation) to:
Until indexing completes, the content is just raw files - the AI can't "see" it yet. Think of indexing as translating your documents into a language the AI agents understand.
Understanding the Agent Reasoning Flow
Qlik Answers uses multiple specialized agents:
Answer Agent: The orchestrator. Receives your question, determines what data sources are needed, coordinates other agents, and formats the final response.
Knowledge Base Agent: Specialized in searching unstructured documents. Uses semantic search to find relevant passages and return citations.
This multi-agent approach allows each specialist to do what it does best, similar to how Q Division has different operatives with different skills!
Every answer includes citations showing exactly where the information came from. This is critical for:
In enterprise analytics, "because the AI said so" doesn't cut it. Citations provide the audit trail.
You've successfully completed your first Q Division field training mission. You're now equipped to turn unstructured documents into queryable intelligence using Qlik Answers.
Remember: In analytics, as in espionage, the right question is more valuable than a thousand answers.
Dork, 007 Dork, signing off. Keep your data shaken and your queries stirred.
Questions? Feedback? Spotted a Dashboard Disruption Monkey? 👎 Use the feedback button or reach out to your Q Division training coordinator
If you were looking for a super deep technical explanation of each of the Qlik Answers Agentic Agents ... you've come to the wrong place.
As part of my Q Division training series I wanted to humanize each of these #AI titans for you just a little bit. After all, just seeing their names on the screen while they are working hard on your behalf is really impersonal.
This is from an official presentation about the Agentic Agents that are part of Qlik Answers.
But as the Dork, 007 Dork, I assumed you would want a little more understanding. So, each of their, totally fictitious, and hilarious, dossiers is attached.
Simply download the zip file and spend as many hours laughing as you read each and every page about each of the agents. No self-destructing. The files will remain as long as you want. Page after page of fun, mixed in with occasional insight.
But ensure their protection! They are highly classified and for your eyes as a Q Division operative in training only.
Forget microwaved analytics. In these courses, you'll learn to build AI-assisted Qlik applications and dining experiences with the precision and care of a master chef. As Sous Chefs in Chef Qlik Dork's kitchen, you'll master all of the features that Qlik MCP offers you:
⚙️ Data Products - Starting with trusted ingredients (metadata, quality, governance)
⚙️ Building Screens - Plating your creations for YOUR DINERS with story-driven design
⚙️ Building Code - Pushing down predictable, repeatable code to your Qlik ovens
⚙️ Asking Questions - Teaching your diners to become Chief Question Officers
⚙️ Paradigm Shift - Understanding the transformation from builder to orchestrator
As each culinary course is developed, it will appear below, but this brief introductory video will help you understand what is coming when Qlik MCP Server functionality is released in your #SaaS environments on February 10, 2026.
The goal of the courses here at the Cordon Green is to help your organization go from an ordinary agentic experience, to one that is EXTRAordinary.
👨🍳 Course 100 - Learning to create your Secret Sauce
Sharing my Secret Sauce to get you started
👨🍳 Course 120 - Defining the Organizational Gold Standard
👨🍳 Course 125 - Helping an LLM understand your context with Smart Defaults
👨🍳 Course 130 - The role of a Chief Skills Officer
👨🍳 Course 200 - Moving from Questions to Conversations
👨🍳 Course 210 - Security and Filters
👨🍳 Course 220 - Context Wars - Google vs LLM vs Human
👨🍳 Course 230 - Metacognitive Analytics - Thinking about Thinking
👨🍳 Course 250 - Chief Question Officer
👨🍳 Course 401 - Create Qlik application from Snowflake OSI Semantic View
👨🍳 Course 501 - Create Qlik application from Qlik Data Product
In a previous post called Calling Snowflake's Cortex Agent API, I started the post by saying the focus of the post was about how the REST API worked not how to create the connection to it. This post is focused solely on how to create a REST API connector for Snowflake Cortex Agents.
Snowflake allows you to build "agents." While I have built several, this post is going to focus on one that I called "SynthAgent."
They also provide a way for you to question them directly in their user interface. I can simply ask a question like "Who are the top 10 admitting providers?" and voila ... I get an answer:
As my previous post indicated, when you talk to an agent you actually get to see its entire stream of thought.
While asking an isolated question inside of Snowflake is certainly handy-dandy, the bigger picture is in the fact that in the world we live in, agents should play nice with others. So, Snowflake created an API that allows others outside of the Snowflake user interface to speak with their data.
Obviously you will need to use your own server and your own agents, but to help you understand the structure, I wanted to show you the URL I will use in this post for this Cortex Agent API. Refer to the first image to see the database/schema/agent:
https://AYFR...mputing.com/api/v2/databases/SNOWFLAKE_INTELLIGENCE/schemas/AGENTS/agents/SYNTHAGENT:run
While it's a rather complicated URL path, it is in fact nothing more than a URL path. Which is the most important thing that the Qlik Rest API Connector needs. Begin by creating a new connection. Be sure and choose the correct space where you want this new connector created and simply choose "REST".
Paste in the correct URL and choose the POST method:
After choosing POST as the method, the connector is going to prompt you for the BODY of the call.
My agent is built from a Snowflake Semantic View. Based on that, here is the sample body that I passed to my SYNTHAGENT URL:
{
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Who are the top 10 admitting providers?"
        }
      ]
    }
  ],
  "tool_choice": {
    "type": "auto"
  }
}
Notice that I'm asking the exact same question that I asked directly in Snowflake so that I can test the results I get back to ensure it is working.
Based on the Snowflake Cortex Agent API documentation, you will need to configure a parameter and a few headers that are passed when calling the URL.
You need to have a parameter named Response-Type with the value text/event-stream
You need to create 4 Query Headers.
Authorization - This is to define the security for the API call. Snowflake provides multiple types of security. In this example I'm using a token model. To create your API Token, called a Programmatic access token (PAT) in Snowflake, you simply go to your profile, go to Settings and choose Authentication. Then Generate new token.
When you create the PAT, simply copy the value and in the value field type Bearer and then paste in your PAT.
User-Agent - use QlikSense.
Accept - simply use */*
Content-Type - use application/json
You had to hard code your question in the body, but in the real world in which you live, are you always going to make all users always see answers to just that one question?
This is when you need to reply out loud in your biggest voice "OF COURSE NOT!!!!"
Then I would say "So check that box that says Enable Dynamic Parameters" or any of the alternatives shown:
Because any of those check boxes would make total sense to you: "Oh, I need to check the box that allows me to change the question based on what my user asks."
Instead I need you to check the box that only makes sense after you know what it does: the check box that says Allow "WITH CONNECTION".
It will make sense in a few minutes why the button has that name. For now, just know that it relates to any of the 3 button names I wish it had. 😁
Finally give your connection a name and press the Test connection button.
If all of your security is configured correctly in Snowflake (Click here to see all of the things you need to ensure in Snowflake itself.) then you will get a Connection succeeded result.
If you see an error like "Unsupported Accept header null is specified," it simply means that you followed instructions from some other post and left out the Accept */* that I showed above. Add it and retest the connection.
Once you see the Connection succeeded save your connection so that we can actually make the call to the agent and see how the results look.
Congratulations, you now have a connector in place to call your Snowflake Cortex Agent REST API. Woohoo. 😎
Click the Select data button for the connector
It will default to CSV for the response type which is perfect.
The Delimiter should be Tab.
Check the box for CSV Has header.
Check the CSV_source box.
Once you check that box it will go and make the call. If you see results like below, you know that you have in fact successfully asked your Snowflake Cortex Agent the question that was hard coded.
Press the Insert script button to actually insert the script block into your load script. Isn't this great? I can reload my application and get answers to the same question over and over and over again. We are really living large. Right?
Our connector is working great, but I'm still hung up on that whole "same question over and over and over again" thing. Logically what we would like to do, and more importantly can do ... is call the REST API and give it the question on the fly. In other words, we would need to give it the BODY for the call on the fly ... like this:
WITH CONNECTION is the syntax keyword that is added to calls to REST API Connectors. Now I get why that checkbox that didn't make sense is called Allow WITH CONNECTION. It is letting you declare, when you create the connection, whether the developers with access to the connection are allowed to do that or not. Glad they didn't rename them with any of the names I suggested. We would have a nightmare on our hands.
Now I have the ability to create a variable with the body, that has whatever question I want in it:
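To make that concrete, here is a minimal sketch of what the load script can look like, assuming you already used the Insert script button to generate the SQL SELECT ... FROM CSV block. The connection name, variable names and column list below are placeholders - swap in your own:

// Hypothetical question - in a real app this would come from a variable input box,
// a button, or however you capture what the user wants to ask
LET vQuestion = 'Who are the top 10 admitting providers?';

// Same JSON body we hard coded earlier, but built around the question on the fly
LET vBody = '{"messages":[{"role":"user","content":[{"type":"text","text":"$(vQuestion)"}]}],"tool_choice":{"type":"auto"}}';

// Inside WITH CONNECTION the literal double quotes have to be doubled up
LET vBody = Replace(vBody, '"', '""');

LIB CONNECT TO 'Snowflake Cortex Agent';   // the name you gave your REST connection

RestConnectorMasterTable:
SQL SELECT
    "col_1"                                // keep the exact SELECT ... FROM CSV block that
FROM CSV (header on, delimiter "\t", quote """") "CSV_source"   // Insert script generated for you
WITH CONNECTION ( BODY "$(vBody)" );       // ...and simply append this WITH CONNECTION clause

Change vQuestion and every reload asks the agent something new - no more hard coded question.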
When you look at the results you will realize that the Snowflake Cortex Agent API does not just return the answer to our question. It returns an entire event stream. Which is precisely why when we set the Response-Type parameter initially, we needed to set it to text/event-stream.
Great news is you have successfully created your REST API connector so you can talk to the Snowflake Cortex Agent. The next step is to check out my previous post where I walk you through how to process the results. https://community.qlik.com/t5/Member-Articles/Calling-Snowflake-s-Cortex-Agent-API/ta-p/2535284
PS - This post was planned for a later date. Be sure to thank @chriscammers for requesting it sooner. 😃
If you are a Snowflake customer you have probably seen the left side of this image frequently. Snowflake Intelligence is legit cool and you've dreamed of ways for it to impact your business.
If you are a Qlik and Snowflake customer you have probably seen the left side of this image frequently, and thought "Wow, I sure wish I could take advantage of Snowflake Intelligence within my Qlik environment to impact my business." Feel free to do your celebration dance because this post is designed to walk you through how Qlik can work with Snowflake Cortex AISQL as well as Snowflake Cortex Agents (API).
Both series are designed in a 3-part Show Me format. The first video in each will frame the value you can attain. The second video will help you drool as you begin imagining your business implementing the solution. Finally, I conclude each series for those that get tapped on the shoulder to actually make the solutions work.
In this comprehensive three-part series exploring the integration of Qlik and Snowflake Cortex AI-SQL, I guide viewers from executive vision to hands-on implementation, demonstrating how organizations can democratize AI capabilities across their entire analytics ecosystem without requiring data science expertise.
This series demonstrates how to combine Qlik's associative analytics engine with Snowflake's AI-powered semantic intelligence to transform natural language questions into interactive, fully contextualized insights.
Heck yeah we've got both covered.
Healthcare Synthetic Data Set -> Semantic View -> Build Qlik Sense Application through Claude and Qlik MCP - This demo begins by pulling the information out of a Semantic View for the shared Healthcare Synthetic data set. Huge tables. It constructs the code to load the tables into Qlik Sense, including concatenating the Patient and Encounters fact tables and creating concatenated keys for the dimensional tables. What about all of that wonderful metadata about the fields? Yeah, we pull that in as well, because governance is important. Then we build Master Dimensions for all of the fields with that metadata as well, including the sample values. Now data modelers/designers can see the data and end users can see it all, so they know they can trust the answers and act on them. Chef Qlik Dork and Chef Claude were really cooking in the kitchen for this one.
PS - This was the beginning of the application. See Video 5 - Show me How It Works above to see the final application and how it interacts with Snowflake Cortex Agent API for the full end user experience of awesomeness. I'm talking about the results of questions being displayed as charts, tables and users can see the SQL that was generated. The data returned is ingested into the Qlik Data Model so users can then filter to the records returned and see all of the details to answer their next 10 questions. What if they asked about big data tables that aren't loaded into Qlik Sense? No problem we go pull that data live.
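For the curious, here is a minimal sketch of the concatenation pattern mentioned above. The table and field names are made up for illustration - the real Healthcare Synthetic model is much wider:

// Build one combined fact table by concatenating the two fact loads,
// and create a concatenated key so the dimensional tables can associate cleanly.
[Fact]:
LOAD
    patient_id,
    encounter_id,
    provider_id,
    patient_id & '|' & provider_id   AS patient_provider_key,   // concatenated dimension key
    'Patient'                        AS fact_source
RESIDENT [Patients];

Concatenate([Fact])
LOAD
    patient_id,
    encounter_id,
    provider_id,
    patient_id & '|' & provider_id   AS patient_provider_key,
    'Encounter'                      AS fact_source
RESIDENT [Encounters];

DROP TABLES [Patients], [Encounters];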
The Show Me How to Build It videos for both series will refer to other resources. I thought about making you crawl onto your desktop and squint in order to see the URLs, and then making you hand type 50 characters from memory. Then I thought it's not going to be much fun for either of us, since I wouldn't actually see you doing it. Fortunately for you, I've included the needed resources below.
Calling Snowflake Cortex Agent API within Qlik Sense
Creating a REST API Connection for Snowflake Cortex Agent
"Let's make it simple" - one recipe at a time
Just like real chefs each of you has your own secret ingredients that make your Qlik work delicious and that people can recognize. Your secret sauce goes beyond just throwing random objects on the screen. It goes beyond just slapping a Select * of tables into your load script and data model. It goes beyond making up new expressions in charts.
But your secret sauce takes time to prepare.
I know mine sure does.
Because it's manual. If I want things to be a certain way, or look a certain way I have to spend the time. This post and the video are to encourage you that when you enter the Claude chat sessions, you don't have to go alone.
You can predefine your secret sauce so that's always at the ready. Taking the great meal you have in your head, that Chef Claude helps you prepare, and have your secret sauce added to it.
Your secret sauce prevents Claude from doing what you just asked because you were in a hurry.
Your secret sauce provides the boundaries in which Claude will work, and ensure that what you generate will follow your approved standard.
While you don't want Claude to follow a "just sling it on the screen" methodology, you also don't want to have to do this each and every time:
Like any new person that might join your team ... you want to have Claude follow your 99 explicit, gold-standard guidelines without having to type them in. That means taking the time to teach Claude the skills needed to ensure your standard is followed. Just as you would teach Fred, Sally, Suzie or Bob.
Each of the following skills represents a critical ingredient in the art of making complex analytics deliciously simple. No matter how much of a rush is on you.
Building reusable, governed analytics components
This skill governs how master dimensions and master measures are created, documented, and maintained in Qlik applications. It establishes a governance framework that treats master items as reusable, governed analytics building blocks that must be thoroughly documented with descriptions, tags, business context, and calculation logic. The skill defines when to create master items versus ad-hoc fields, emphasizes rich metadata that helps users understand what they're using, and establishes naming conventions that make items discoverable. It covers expression patterns for measures including proper aggregation contexts, handles dimension creation with drill-down hierarchies, and ensures that master items follow the same field naming standards as the load script. The skill transforms master items from simple field lists into a governed analytics vocabulary that enforces consistency across all sheets and visualizations while making it easier for users to self-serve.
🎯 Chef's Philosophy: Master items are your mise en place - prepare once, use everywhere. Good governance starts with well-documented, consistently named building blocks that anyone can understand and reuse. 📊
Audience-driven dashboard design methodology
This skill implements Qlik Dork's audience-driven workflow methodology for building Qlik sheets and dashboards. It starts by identifying the audience type (Financial, Clinical, Operations, or other domain-specific roles) and transforms metrics to match that audience's motivation and mental model. The skill follows a structured workflow: audience identification → metric transformation → context parameter collection → template selection → sheet building using a Story→Data→Visuals approach. It emphasizes that different audiences need the same data presented differently based on their decision-making context and priorities. The skill includes template selectors for common use cases, design patterns for effective visualizations, and ensures that dashboards tell a clear story rather than just dumping data on the screen. It transforms sheet creation from "what charts should I add?" into a strategic design process that starts with understanding who needs to make what decisions and works backward from there.
Standardized data loading patterns
This skill establishes the foundational rules for generating Qlik load scripts that connect to Snowflake and transform data correctly. It mandates a critical "stop and ask first" workflow - you must gather information about the audience, data grain, required fields, and business context before writing any code. The skill defines specific syntax patterns including the Snowflake connection format using LIB CONNECT, the preceding LOAD pattern for transformations, and strict field naming conventions using table prefixes (like fct_adm_admission_id). It covers date handling standards using Floor() for clean date fields, calendar key creation as integers, and proper table aliasing with square brackets. The skill also includes YAML-based code generation patterns, validation workflows using qlik_create_data_object to verify field existence, and emphasizes the "one wrong decimal = lost trust" philosophy where accuracy always trumps speed.
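As a rough illustration, here is a minimal sketch of the kind of load script fragment this skill pushes Claude toward. The connection, table and field names are placeholders; only the fct_adm_admission_id naming pattern comes from the skill description:

LIB CONNECT TO 'Snowflake_Healthcare';                 // Snowflake connection via LIB CONNECT

[fct_adm]:                                             // table alias in square brackets
LOAD                                                   // preceding LOAD handles the transformations
    fct_adm_admission_id,                              // every field prefixed with its table
    fct_adm_patient_id,
    Date(Floor(fct_adm_admit_datetime))  AS fct_adm_admit_date,      // Floor() strips the time portion
    Floor(fct_adm_admit_datetime)        AS fct_adm_admit_date_key;  // integer key for the calendar
SQL SELECT
    admission_id    AS fct_adm_admission_id,
    patient_id      AS fct_adm_patient_id,
    admit_datetime  AS fct_adm_admit_datetime
FROM healthcare.public.admissions;

The point isn't the specific fields - it's that the same prefixes, date handling and aliasing show up every time the skill is applied.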
Quality control before deployment
This skill provides a secondary validation process to verify calculations are correct before declaring work complete. It acts as a quality control checkpoint that prevents common mistakes like Sum() versus Count() errors from reaching end users. The skill defines specific validation workflows to check measure calculations, dimension values, filter logic, and data model relationships. It establishes a systematic review process that catches errors before they erode trust, reinforcing the "one wrong decimal = lost trust" philosophy. The skill triggers after creating any calculated measures, KPIs, or complex expressions, serving as the final quality gate before presenting work to users. It's essentially a "trust but verify" framework that ensures analytical accuracy through structured verification steps rather than hoping you got it right the first time.
Critical thinking framework as Chief Question Officer
This skill establishes rules for how to answer data analysis questions in a way that promotes critical thinking and data literacy. It requires transparency about assumptions, defaults, and data interpretation choices rather than just providing answers. The skill mandates explaining the "why" behind analytical decisions - why certain filters were applied, why specific aggregations were chosen, why particular time periods were used. It transforms simple question-answering into an educational process where users learn to think more critically about their own data queries. The skill prevents the "black box" problem where users get answers without understanding the logic behind them, and instead builds their analytical capabilities by making the reasoning transparent. It's designed to teach users to ask better questions rather than just accepting whatever answer comes back.
Professional branding and delivery standards
A simple, straightforward skill that ensures Claude uses official Qlik brand colors when creating PowerPoint presentations. This skill provides the exact RGB and hex values for all six Qlik brand colors (Green, Blue, Aqua, Blended Green, Fuchsia, and Deep Purple) along with ready-to-use Python code snippets for python-pptx implementation.
Perfect for anyone who needs to create Qlik-branded presentations and wants consistent, accurate color usage every time. Just upload this skill to Claude, and it will automatically reference these colors when building your decks.
What's included:
No fluff, no complicated guidelines - just the colors you need to stay on-brand.
In this video I demonstrate how these skills turn the bland into the sublime each and every time. Not only does the agentic nature of Claude working with Qlik MCP save you time; as you will see, it can also ensure that your gold standard is followed every single time.
Even though we both know there are occasions you don't follow your rules yourself due to time.
Snowflake recently released what it calls Snowflake Intelligence. It's their User Interface that enables users to directly ask questions of data. Under the covers their interface is interacting with a new Snowflake Cortex Agent API.
Qlik is an official launch partner with Snowflake for this exciting technology, as we are able to call that Snowflake Cortex Agent API just like they do. Which means you are able to present visuals to aid with insights, while at the same time allowing end users to ask questions and then presenting the results that the Cortex Agent API returns.
The intention of this post is to help you understand the nuances of the Snowflake Cortex Agent API.
Calling the Agent API is super easy. You simply use the REST Connector provided to you in Qlik Sense, in either Qlik Sense Enterprise on Windows or Qlik Talend Cloud Analytics. Refer to this post to help you create a REST connector to your Snowflake Cortex Agent. You will want to ensure you check the Allow "WITH CONNECTION" box so that you can change the body.
To get the REST connector to build a block of script for you, ensure that you set the Response type to CSV and set the Delimiter to be the Tab character.
Eventually you will modify your script to be something like the following, where you set the Body to be the question your user wants to ask rather than having it hardcoded. But who cares?
There is nothing special here and nothing worth writing about that I haven't already covered in other posts. The reason for this post isn't about the connection itself... it's about what the Snowflake Cortex Agent API returns.
Rather than returning a single response, it actually streams a series of events. Notice in the image above, where we load data from the connection, what the results look like. It literally returns the entire "stream of consciousness," if you will, as it is working. Everything it does.
It would be an exercise in futility if I simply talked my way through how to handle a Streaming API in general, and especially how to handle this event stream from Snowflake Cortex Agent. So, while I won't be walking you through all elements of the connection, or how I build the body based on what the user asks as a question ... I do want you to be able to be hands on. The following image illustrates how I used my connection (which does work) to get the event stream and store it into the QVD file that is attached to this post.
You will need to:
If you did all of these steps correctly you should be told that 487 rows of data were loaded.
Go to the Event Stream sheet and see all of those wonderful 487 rows that were returned when I called the Snowflake Cortex Agent with the question that I passed it.
Be sure and scroll through all of the rows to really appreciate how much information is returned. When you get to the bottom there are 2 rows that I really want you to focus on. You see all of the other events are simply appetizers for the main course we will focus on for the remainder of this post. They are merely events that let you know things are happening and then the stream says "Hey wake up now here is my official response" in rows 482 and 483.
Now what you need to do is row 483 so that the text box on the right will show you the full value that is returned for the response event.
I'm not going to lie ... the first time I saw that I was a little bit intimidated. It sure seemed to me like the wild west of JSON data. In fact ... I ended up writing a series of posts I called Taming the Wild JSON Data Frontier just to document the process I had to go through in parsing that beast. Be sure you read each of the posts that is part of that series so that you have the chops as a data sheriff to deal with this incredible structure.
One thing you should know, if you don't already, is that JSON can be very compact, like you see in the response. Which is great for exchanging/storing all of the data, but really hard to read. I highly recommend you take advantage of any of the online JSON formatters that you can find. I use jsonformatter.org. You simply hand it the compact JSON structure, ask it to format/beautify it ... and voila, it becomes much more human readable.
{ I have attached the output in a text file that you can download and view for the remainder of this article if you don't want to take the time right now to actually copy and beautify the response. }
But I digress. The important part is that you now know that the RESPONSE event is the one you care about, and that the DATA associated with the RESPONSE has a massive JSON structure that contains all of the information we need to present the response back to the user. So, let's dig in.
Go ahead and return to the load script editor and move the section named "Get the RESPONSE event data" up above the Exit script section so that it can actually be processed.
Before seeing the code you may have thought "There is no way I'm going to be able to magically figure out how to identify the data for the response event." But as usual, Qlik Sense provides some very easy transformations. Logically we only want to pull the data from the entire event stream if the event before it is "event: response", and that's exactly what we do by using the Previous() function. We don't care at all about the part of the column that has the phrase "data: " in it, so we simply throw that away.
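Here is a minimal sketch of that idea, assuming the event stream sits in a table called EventStream with a single field also called EventStream (rename both to match whatever your QVD actually contains):

ResponseJSON:
LOAD
    // strip the leading 'data: ' prefix (6 characters) so only the raw JSON remains
    Mid(EventStream, 7) as JSON
Resident EventStream
// keep only the line that immediately follows the 'event: response' marker
WHERE Previous(EventStream) = 'event: response';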
Go ahead and reload the data, now that this section will run and when it completes check the preview window and sure enough ... we have exactly what we want in our JSON table.
If you look at the prettified view of the response data you will see that at the highest level it contains a field called content that is an array.
If you scroll all the way through the pretty content you will see that it's actually an array of heterogeneous, or mixed type, objects. Meaning some of the array elements are thinking, some are tool_use, and some are tool_result. And to make it worse, the tool_result elements aren't even the same.
If that sounds nasty ... don't let it bother you. Again, the entire reason for that series of posts I've already written was to help walk you through all of the types of JSON data that will need to be parsed. To understand the next part of the code, be sure to read the following post, as well as the posts it points you to:
Parsing: Heterogeneous (Mixed) JSON Arrays
Go back to the load script and move the "Mark the Content we care about" section above the Exit script section and reload the data.
Before I discuss the code, go ahead and preview the Content table to ensure you have the 9 different Content values that were in the array. One of the tool_use rows will have Question_Record marked as Yes, and the tool_result record will have Results_Record marked as Yes.
Logically we do a 2 part load. The first iterates through all of the elements in the content array and pulls out just that element's content. The preceding load that takes place simply uses an Index() function to know if the word "charts" is contained in the record and marks a flag accordingly. If we parse a nested set of JSON values from the record and find the question value, then we set that flag accordingly. If you haven't already read the posts I've been begging you to read ... then stop and read them now. That's an order. 😉
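So you can picture the shape of it, here is a heavily hedged sketch of that two-part load. The table and field names follow the sketches above, and the nested path used to spot the rephrased question is purely hypothetical, since the exact structure depends on the Cortex Agent response you get back:

Content:
// preceding load: flag the rows we care about
LOAD
    *,
    // the results record is the one that carries the chart specification
    If(Index(ContentBlock, 'charts') > 0, 'Yes', 'No') as Results_Record,
    // hypothetical nested path: flag the element that carries the rephrased question
    If(Len(JsonGet(ContentBlock, '/tool_use/input/query')) > 0, 'Yes', 'No') as Question_Record;
// first pass: iterate the content array and pull out each element's block
LOAD
    IterNo() - 1 as ContentIndex,
    JsonGet(JSON, '/content/' & (IterNo()-1)) as ContentBlock
Resident ResponseJSON
WHILE IsJson(JsonGet(JSON, '/content/' & (IterNo()-1)));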
The logical question you might have at this point, since I left you hanging, is why in the world I focused on flagging those particular rows of the content array? To understand imagine the end user asking a question of a magical black box that mysteriously just goes off and returns an answer. You've probably heard me say more than once "you can't act on data that you don't trust."
To that end, the Snowflake Cortex Agent API returns the question as it got rephrased by its generative AI, and it also sends the resulting SQL that was generated. We just have to look for them in the pretty version of the response. Suddenly the mysterious black box becomes more transparent. Which is exactly what I want to do ... report to the end user as well as audit the question and the SQL.
The results flag is set because the Snowflake Cortex Agent API literally hands us the information we need to create a chart with the results. I'm not kidding. It literally gives us the title for the chart as well as the dimension and measure fields for the chart, then it gives us the values for them. You gotta love that.
Now that you understand what is returned and why I flagged it, let's look at how we pull all of that wonderful information out of what initially seemed like an undecipherable JSON mess. Go back to the load script editor and move the "Get the Question and the Results" section above the "Exit script" section and then reload the data.
We start building the Response table by simply reading the row in the Content table that has the Question_Record flag set to Yes. Getting the Question and the SQL statement to share with the end user is simply a matter of reading the nested JSON path for their values.
Then we need to add a few columns to the Response table, which we will get by reading the row in the Content table that has the Results_Record flag set to Yes. Again, pulling the information we want is simply a matter of reading the nested JSON paths for those values.
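Again purely as a sketch, with the nested paths below standing in for the real ones you will see in your own prettified response:

Response:
LOAD
    // hypothetical paths: read the rephrased question and the generated SQL
    JsonGet(ContentBlock, '/tool_use/input/query') as Question_Asked,
    JsonGet(ContentBlock, '/tool_use/input/sql')   as SQL_Statement
Resident Content
WHERE Question_Record = 'Yes';

Join (Response)
// no shared fields, so this is a simple one-row-to-one-row combine
LOAD
    // hypothetical paths: chart title, field names and the raw values array
    JsonGet(ContentBlock, '/tool_results/content/0/json/charts/0/title')     as Chart_Title,
    JsonGet(ContentBlock, '/tool_results/content/0/json/charts/0/dimension') as DimensionField,
    JsonGet(ContentBlock, '/tool_results/content/0/json/charts/0/measure')   as MeasureField,
    JsonGet(ContentBlock, '/tool_results/content/0/json/charts/0/values')    as ResultData
Resident Content
WHERE Results_Record = 'Yes';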
Now that you have reloaded the data to run this, and understand it ... it is time to check out the Preview panel for the Response table. We almost have exactly what we need to present to the end user. We have the Question_Asked, the SQL that was used within the Snowflake Cortex Agent, and we know the Dimension and Measure field names. Finally we have a JSON array of the values.
Now that I know you have read the posts I mentioned, I should be more precise: "Finally we have a JSON Array of Homogeneous Objects." Which is covered in the Parsing: JSON Array (Homogeneous Objects) post.
Go back to the load script editor and simply drag the "Parse out the Values" section above the "Exit script" section and reload the data.
The first thing we need to do is pull the DimensionField and MeasureField names into variables that we can refer to. All we need to do is use the Peek() function.
As you are familiar with by now, parsing a nested JSON structure is a simple matter of using the JsonGet() function with a path pattern of /entity/field (or /entity/INDEX/field when an array is involved).
Which is straightforward when you know the field name. In the case of pulling out the values that we need to present to the end user, we don't know what they are. The very nature of what we are doing is asking the Snowflake Cortex Agent a question that the end user gives us; it then magically processes that question and responds. Which is why we needed to extract the field names into variables. Now we simply iterate the array and parse the values by passing in the variables.
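Putting those two thoughts together, a sketch of the idea looks like this, assuming the DimensionField, MeasureField and ResultData fields live in a one-row Response table as described above:

// pull the field names returned by the Cortex Agent into variables
LET vDimensionField = Peek('DimensionField', 0, 'Response');
LET vMeasureField   = Peek('MeasureField',   0, 'Response');

Values:
LOAD
    // the array is homogeneous, so every element has the same two fields
    JsonGet(ResultData, '/' & (IterNo()-1) & '/$(vDimensionField)') as [$(vDimensionField)],
    JsonGet(ResultData, '/' & (IterNo()-1) & '/$(vMeasureField)')   as [$(vMeasureField)]
Resident Response
WHILE IsJson(JsonGet(ResultData, '/' & (IterNo()-1)));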
Now let's check the work by looking at the preview for the Values table we just created. Unless I'm missing something you just rocked the world by converting an event stream of JSON structures into a structured table that is now contained in memory.
How cool would it be if ... Never mind that's crazy!
But it would be cool if we could ... It would be really hard.
Maybe we could so let's talk about it.
Since we do have this data in memory now, it would be so cool if we could visualize it on the screen for the end user. Right? Forget my event-stream-of-consciousness in getting here ... it's not really that hard. Go ahead and go to the "View the Results" sheet and you will see something magical.
Go ahead and edit the sheet so that you can see how I created that bar chart. Check it out: I simply used those 2 variables that we created and did that hocus-pocus dollar-sign expansion magic on them. You gotta love that.
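In other words, the chart's dimension and measure are nothing more than dollar-sign expansions of those variables, something along these lines (variable names assumed from the load script):

// chart dimension
=[$(vDimensionField)]

// chart measure
=Sum([$(vMeasureField)])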
Of course I am going to load data into Qlik Sense so that business users can gain insights at the speed of thought. But let's face it ... I'm lazy and didn't talk to every user about everything they would ever want to know about their data. As a result I didn't build every conceivable chart to show them the answers.
Invoking the Snowflake Cortex Agent lets the users ask questions. Questions that we might not have a chart for yet. Questions that might involve scenarios beyond the user's data literacy or training level to get at on their own.
Oh sure, I had fun doing all of the techno mumbo jumbo and sharing that with you. But by invoking it right inside a Qlik Sense dashboard I've now given business users the best of both worlds. Not only can we present their answer to them in a chart, but since the values are in memory they are associated with all of the other data. That means business users can interact with the values, all the other visuals will respond, and naturally we can take advantage of that green/grey/white experience. They get the aggregated answer they were looking for, and they can also immediately see all of the other details they may need to follow up on, like details of the visits, provider names, etc.
Use Case: We usually update the following in the QMC GUI. This causes problems at times when there is no RDP access to the server.
Steps to do it in an automated way, without the QMC GUI:
SELECT * from "Users";-----To know all the users
SELECT "SslBrowserCertificateThumbprint" from "ProxyServiceSettings";
SELECT "WebsocketCrossOriginWhiteListString" from "VirtualProxyConfigs";
The information in this article is provided as-is and will be used at your discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
Organizations face rising pressure to deliver analytics-ready data rapidly, reliably, and at scale. While platforms like Qlik Talend Cloud (QTC) and GitHub offer powerful capabilities for Continuous Integration and Continuous Delivery (CI/CD), tools alone are not enough. High-performing data teams require the right project management discipline, data architecture, and team structure to ensure predictable, high-quality outcomes as complexity grows.
This document outlines best practices for preparing an organization, configuring GitHub, and structuring QTC environments to enable efficient, governed, and scalable data delivery. It also provides a detailed walkthrough of a multi-team Medallion-architecture project implemented across two Sprints.
Key Benefits of a Well-Designed CI/CD Framework
- Faster, more reliable delivery of analytics features
- Improved data quality through structured governance
- Higher team productivity and reduced rework
- Clean, high-quality data that accelerates AI and analytics adoption
The image above depicts a JSON Version 2 structured array. Rather than repeating the column/field names over and over in pairs with the data, they present the fields and the data separately.
When I first encountered this structure, I asked myself "How in the world am I supposed to read the data into a structured placeholder?" I couldn't find any type of JsonGet example where it said "read this data array and just use your ESP to know what the field names are supposed to be."
With any problem like this where the answer doesn't seem obvious, my recommendation is to just get started with what you can achieve ... so I did. I started by separating out the information that I could with the basic JsonGet field/value pair syntax.
After that I had the information that would be needed, broken up into digestible pieces.
Then I started with the field names. Notice that it is simply a JSON Array of Homogeneous Objects, and we already know how to deal with those by sprinkling on a little of the Qlik iteration magic:
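As a sketch of that iteration, assuming the raw response JSON sits in a field called ResultData in a one-row Response table (the output table and field names are my own placeholders):

FieldNames:
LOAD
    IterNo() - 1 as FieldIndex,
    // walk the rowType array and pull out each column name
    JsonGet(ResultData, '/table/result_set/resultSetMetaData/rowType/' & (IterNo()-1) & '/name') as FieldName
Resident Response
WHILE IsJson(JsonGet(ResultData, '/table/result_set/resultSetMetaData/rowType/' & (IterNo()-1)));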
Voila ... we have a table of the FieldValues so we know the names of each of our columns.
That was the easy part. I still wasn't sure how in the world I would create a table where those were the columns and the data values would be the ... well ... the data values. So, I tried to visualize what I was looking for by adding the following as comments in the code itself as a reminder/muse.
SHERIFF_BADGE, TOTAL_BULLETS_USED
1307919, 3221
1617792, 2690
Then a crazy notion hit me ... that looks exactly like what I would do for an INLINE table.
EXACTLY LIKE AN INLINE TABLE.
So .... why not build it as an inline table?
What I wanted was something like this ...
Obviously I needed to build the Header variable first. A little housekeeping first to set some variables, then I just needed to loop through however many columns/fields the structured array might have. In my real case the number of fields was more than 2, but I shortened them to help you track the solution; the logic works regardless of the number. If it is the first field, then set vCortexHeader to the name of the first field. If it is not the first field, then update vCortexHeader so it equals the previous value, add a comma, and then add the name of the next field.
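A minimal sketch of that loop, assuming a FieldNames table like the one sketched above holds the column names:

LET vNumFields = NoOfRows('FieldNames');
SET vCortexHeader = ;

FOR i = 0 TO $(vNumFields) - 1
    LET vFieldName = Peek('FieldName', $(i), 'FieldNames');
    IF $(i) = 0 THEN
        // first field: start the header
        LET vCortexHeader = '$(vFieldName)';
    ELSE
        // subsequent fields: previous value + comma + next field name
        LET vCortexHeader = '$(vCortexHeader),$(vFieldName)';
    END IF
NEXT i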
Voila ... the header for my soon to be inline table of values.
At the beginning of this post I showed what the ResultData looked like as part of the overall Response table that was constructed using the simple JsonGet function. I've expanded here so you can focus on just it ... notice that other than some extraneous characters it is literally in the format we need for an INLINE table.
If you take out the open/close square brackets "[" "]" and the double quotes ... the data is right there. We already know from previous posts how to use the JsonGet function to get row 1 (which is 0 offset) and get row x ....
So, then it is just a matter of removing those double quotes and square brackets:
Putting it all together we pretty much do what we did with the field names, except this time we added carriage return line feed characters before rows 2 through x:
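Here is a hedged sketch of that final assembly, reusing the placeholder names from the sketches above; the script in the attached application is the authoritative version:

LET vNumRows = JsonGet(Peek('ResultData', 0, 'Response'), '/table/result_set/resultSetMetaData/numRows');
SET vDataValues = ;

FOR r = 0 TO $(vNumRows) - 1
    // pull one "row" of the data array and strip the brackets and double quotes
    LET vRow = PurgeChar(JsonGet(Peek('ResultData', 0, 'Response'), '/table/result_set/data/' & $(r)), '[]"');
    IF $(r) = 0 THEN
        LET vDataValues = '$(vRow)';
    ELSE
        // rows 2..x get a carriage return / line feed in front of them
        LET vDataValues = '$(vDataValues)' & Chr(13) & Chr(10) & '$(vRow)';
    END IF
NEXT r

// and because it looks EXACTLY like an inline table ... load it as one
CortexResults:
LOAD * INLINE [
$(vCortexHeader)
$(vDataValues)
];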
And we ended up with a vDataValues variable holding one comma-separated line of values per data row.
If you are anything like me you don't like to assume anything, and you are a visual or experiential learner. So, go ahead and download the attached WildDataFrontier.qvf, upload it to your environment, and open the load script.
{ Notice there are multiple sections. Each of them will pertain to a separate article and for this article the section named "7 - Structured Array" is the one you want to have at the top of your script. }
Edit line 49, which had 2 data values, to add a third data value ("7777777", "777"), and modify the numRows value from 2 to 3, like this:
'{"table": {"result_set": { "data": (("1307919","3221"), ("1617792","2690"), ("7777777", "777")), "resultSetMetaData": {"format": "jsonv2","numRows": 3, "rowType": ({"length": 0,"name": "SHERIFF_BADGE","nullable": true,"precision": 38,"scale": 0,"type": "fixed"},{"length": 0, "name": "TOTAL_BULLETS_USED", "nullable": false,"precision": 18, "scale": 0, "type": "fixed"} ) }, "title": "Top 2 Sheriffs for number of bullets used"} }}'
Since this solution involved a lot of variables, feel free to use the Debug mode and set breakpoints where you want them so that you can see the values as they are set. Or simply just reload the data after your changes and check out the values in each of the tables created so you can confirm what was done.
Definition:
Returns the number of dimension columns that have non-aggregated content, i.e. that do not contain partial sums or collapsed aggregates.
A typical use is in attribute expressions, when you want to apply different cell formatting depending on aggregation level of data.
This function is only available in charts. For all chart types except pivot table it will return the number of dimensions in all rows except the total, which will be 0.
What does it mean?
We have a table with 4 fields (columns): Product, Category, Type, and Sales.
Now we want to create Pivot Table by using those Dimensions.
We are going to use only 3 of them (Product, Category, Type) as dimensions and use the 4th (Sales) in our expression.
The result is shown below:

This Pivot Table has 3 dimensions so its maximum dimensionality is 3.
For better understanding, please see the table below.
The function is used to show which dimensionality level each pivot table row is on:

'Sugar' has dimensionality of 1 which is Total for that 'Product'.
'Salt' has dimensionality of 2 which is Total for each 'Category' of that 'Product'.
'Oil' has dimensionality of 3 which is single value for each 'Type' of the 'Product's' 'Category'.
So the more dimensions we use, the greater the dimensionality of our pivot table.
Practical use:
1) To show the level of dimensionality:

Expression:
if(Dimensionality()=1 ,RGB(151,255,255),if(Dimensionality()=2 ,RGB(0,238,0),if(Dimensionality()=3,RGB(255,130,171))))
2) Highlight the background of rows which, on each level, fall into a certain condition:
Expression:
if(Dimensionality()=1 and sum(Sales)<150,RGB(151,255,255),if(Dimensionality()=2 and sum(Sales)<=20,RGB(0,238,0),if(Dimensionality()=3 and Sum(Sales)<=20,RGB(255,130,171))))
LEVEL 1 --> values < 150 | LEVEL 2 --> values <= 20 | LEVEL 3 --> values <= 20
Otherwise you will need to change the path to [Dimensionality.xlsx] in the load script below:
Directory;
LOAD Product,
Category,
Type,
Sales
FROM
[Dimensionality.xlsx]
(ooxml, embedded labels, table is Sheet1);
Feeling Qlingry?
About
What is it? What does it do?
This is a tool that I use to quickly check shared file content and do some usual maintenance jobs on it. After playing with it with colleagues for a while, I thought it'd be nice to share it with the community and get some feedback about if/how I should proceed with this personal project.
This tool is a very simple one: it can open both the legacy ".Shared" and the new ".TShared" formats of the QlikView Shared File, show helpful info about the content, and perform some very basic operations on shared objects (currently I have only added support for Bookmarks because they are the most commonly used objects day-to-day).
Why another Shared File Viewer?
There has been a Shared File Viewer already for quite a while (in PowerTools package)
The limitation of the existing one is that it can't open the new "TShared" format that was introduced lately into QlikView. So if one wants to view the new format, they have to convert "TShared" to "Shared" first and convert it back afterwards, which is really annoying, especially when the shared file is *big*.
Another limitation of the current one is that it exposes only a small subset of the Shared file content and doesn't embed many shared file functions (cleaning, filtering), because its development toolchain is outdated.
Lastly, I found it's not easy to run a Shared File Cleaner without a GUI, and I wanted something more intuitive.
In short, the legacy shared file viewer is inconvenient to use (to me at least 😅), especially when it comes to the new "TShared" format.
So I thought: why not just write another tool myself to meet my needs? Here it comes.
Release Note
Current Stable Release: 0.2
You can find it in the attachment, where the zip file simply contains an exe file that you can run on Windows.
Features:
Hopefully you have time to download and play with it, and, most importantly, give me some feedback about what you think of it and what other functions you would like included in the future.
NOTE:
This tool is currently in preview only, so please be CAUTIOUS if you use it with production Shared files. I know the shared content is critically important, so make sure you have a backup before touching any Shared Files.
If you look closely the following image is very similar to the image above that I used for the Heterogeneous (Mixed) JSON Objects post. The difference is that instead of being multiple rows, it's a single JSON block which contains multiple rows as an array.
It also looks very close to the image I used for the JSON Arrays (Homogeneous Objects) post above that one. The difference is that in the case of that post, the array was easy to understand as just being multiple rows for the same entity. In this case, each of the "rows" in our array are for different entities.
Since the previous post was so long, and you definitely did the homework for both of the referenced posts I'm going to keep this one short. We are simply going to walk through how you can convert the array of data into a table of data that can be parsed in a flexible way.
If you haven't already done so for a previous post, go ahead and download the WildDataFrontier.qvf that is attached, upload it to your environment, and open the load script.
{ Notice there are multiple sections. Each of them will pertain to a separate article and for this article the section named "6 - JSON Array: Heterogenous Mixed Objects" is the one you want to have at the top of your script for this post. }
Same basic type of inline load that you've seen in previous posts. Unlike the previous post where the data was multiple rows for a table, I've returned to simulating a single JSON block return. You can see the pretty view of the data, as well as the single row view.
Feel free to scroll right down to Step 1 in the preceding load process. You will see that all I do is use the IterNo() function to iterate the array, and simply create a table like we started with in the previous post.
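If you want a feel for that first step before opening the app, it is the same IterNo()/IsJson() pattern from the previous posts. In this sketch the table and field names, and the top-level array key records, are placeholders for whatever the inline load in the app actually uses:

Mixed_Rows:
LOAD
    IterNo() as RowNumber,
    // pull each element of the array out as its own JSON block
    JsonGet(JSON_Block, '/records/' & (IterNo()-1)) as JSON_Element
Resident JSON_Array_Block
WHILE IsJson(JsonGet(JSON_Block, '/records/' & (IterNo()-1)));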
If you comment out Steps 2-4 in the load script so that Step 1 is the only active one, you can see the results in the preview tab.
Just like we did in the previous post, feel free to simply uncomment 1 Step at a time in the load process until you return to having all 4 steps active. At that point, if you take a look at the preview, you will see that you have in fact parsed out all the values for all of the fields for every mixed-entity JSON block in the array.
This post is part of a series aimed at ensuring you have the tools needed to Tame the Wild West Data Frontier, I mean Tame the Wild JSON Data Frontier, you may be facing.
Posts:
As you look at the image above you will notice multiple JSON objects. In the previous post Parsing: Heterogeneous (Mixed) JSON Objects Fixed Manner we walked through how we could easily handle this type of data in a fixed way. All you need to do is know every single field:value pair that will ever come to you.
If you haven't already read that previous post be sure to do so and go through the practice. You need to understand that while easy to maintain for new fields that come across ... the thought of knowing every single field:value pair that will ever come to you is kind of daunting. Right?
I kind of slipped it in there but:
This post will walk you through the same data that we dealt with in the previous post, but will allow you to read all of the values for all fields that exist now, and those that might come riding in to town tomorrow.
If you haven't already done so for a previous post, go ahead and download the WildDataFrontier.qvf that is attached, upload it to your environment, and open the load script.
{ Notice there are multiple sections. Each of them will pertain to a separate article and for this article the section named "5 - Mixed JSON: Flexible Values" is the one you want to have at the top of your script for this post. }
Same basic data as the previous post, but I've also included the 4th row of data that I asked you to add as part of the practice. The flexibility that you need as a data sheriff in this JSON Wild Data Frontier is going to be handled by the incredible flexibility available to you in how Qlik Sense can load and transform that data.
In previous posts we've simply done a resident load to transform the data directly in 1 step using that ever so flexible JsonGet() function. In this post our transformation is going to do a few things that may be new to you. Be sure to click the links for each to get at least an introduction to them:
Once you have the application open and have moved section "5 - Mixed JSON: Flexible Values" to the top of the script go ahead and load the data. We are going to start by previewing the data for the finished product and then work backwards to understand how it was accomplished. If you look at the preview screen you will notice something odd: You have the Mixed_JSON table that was built by the inline load part of the script, then you have a whole bunch of other tables. One for each of the fields that is part of the data.
Go ahead and select the Information_Flexible.bounty table so you can preview what data is in that table.
If you walk each of the tables you will notice that each table contains the value for that given field. If you look at the Data Model Viewer you will see that every table is associated based on the row number. Which means in any chart that you visualize you will easily be able to present the values for all of the fields.
Add the following row of data to the inline load script, and then reload the data. Notice that it is a brand new type and contains a value for a new field called bowtie_color, as well as values for two existing fields, text and bounty.
5, '{ "type": "dork_mail", "bowtie_color": "Qlik Green", "text" : "Qlik Dork is coming to town", "bounty": 1000000 }'
What do you know? Our application is indeed flexible and it has created a new table called Information_Flexible.bowtie_color
Feel free to check out the values for it (as well as the values for the text and bounty tables) to see that indeed we have created structure from this unstructured heterogeneous JSON mess.
I tried my best to include comments in the load script which will hopefully aid in your learning. But I will admit sometimes it's hard to mentally walk through a script like this, even with comments, when there are multiple preceding loads like this. So, let's take this one step at a time and walk through what happens for each of the preceding load steps. To do this highlight rows 20-43 (from the Generic Load line until before Step1) and then comment those rows out and then reload the data.
If you look at the preview window now you will see that you have a single Information_Flexible table. As described in my comments, all step 1 did was to remove the { } characters out of the JSON block and we simply have some field:value pairs.
Now uncomment lines 41-43 and reload the data so that we are running the first preceding load statement.
If you look at the preview for the Information_Flexible table you will see something interesting. We have multiple rows in the table for each of the original rows. As the comments indicated, the SubField() function has done that. Each of our field:value pairs has been put into its own row:
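In sketch form, that step plus the one before it look something like this (the table and field names here are placeholders for the ones used in the app, and a plain comma split is a simplification):

Information_Flexible_Staging:
// second pass: split on the commas so every field:value pair becomes its own row
LOAD
    RowNumber,
    SubField(JSON_Pairs, ',') as JSON_Pair;
// step 1: strip the curly braces from the original JSON block
LOAD
    RowNumber,
    PurgeChar(JSON_Block, '{}') as JSON_Pairs
Resident Mixed_JSON;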
This is getting fun. So let's keep going. Now uncomment rows 31-34 and reload the data again.
As you look at the preview again for the Information_Flexible table you will notice something cool. The field:value JSON pairs have been turned into structured Field and Value fields.
If we could create a table for each of the different Field values and store the corresponding Value values in it, we would really be in business. And I remember that's exactly what a Generic Load does.
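And the Generic Load itself is barely more than one statement; a sketch of the idea, where the resident table name stands in for the staging table built by the preceding loads:

Information_Flexible:
Generic LOAD
    RowNumber,
    Field,
    Value
Resident Information_Flexible_Staging;
// Generic Load creates one table per distinct Field value,
// e.g. Information_Flexible.bounty, Information_Flexible.text, ...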
Go ahead and uncomment lines 20-24 so that we can once again reload the data, and this time return to our starting point.
Like many things I have written in the past ... this solution isn't really all that complicated, but understanding how multiple Qlik Sense load transformations work together can be. Hopefully, you now feel confident that you have an example load script that you can use to flexibly parse out any nasty JSON Heterogeneous data that those varmints throw at you. Plus you realize how easy you can make it for yourself to simply comment out preceding load steps so that you can get a picture of what is occurring each and every step of the way. After all, not everyone documents their code so thoroughly. 🤠
This post is part of a series aimed at ensuring you have the tools needed to Tame the Wild West Data Frontier, I mean Tame the Wild JSON Data Frontier, you may be facing.
Posts:
As you look at the image above you will notice multiple JSON objects. In previous articles we've dealt with single JSON objects. Now we need to deal with the fact that the file has multiple rows, or that we called an API that returns a streaming set of values rather than just a single response.
Like previous posts, these JSON objects have field:value pairs that we can certainly parse out, but each record seems to be different than the previous one. There is a type field, but the value for type seems to indicate a different entity type altogether.
In previous posts I tried to help you relate the unstructured JSON block to its SQL counterpart.
When I presented Parsing: Flat JSON - Field Value Pairs we discussed how a JSON structure was similar to a single row/record of a table.
When I presented Parsing: Nested JSON Objects we discussed how a nested JSON structure was similar to a select clause that joined data from multiple tables.
When I presented Parsing: JSON Array (Homogeneous Objects) we talked about how the array structure was similar to returning multiple rows from a table, rather than a single record.
In SQL terms this would be like reading a table and if the value said report, then you would join to a table that stored values for a report. If the value said wanted_poster, then you would join to a table that stored values for a wanted_poster. If the value said telegraph, then you would join to a table that stored values for a telegraph.
Wait, that's not really how SQL works is it? You don't have columns that might be a foreign key to any of 3 different (or more) other tables. Complete random thought here ... What if this kind of flexibility is exactly what makes JSON a great data exchange format?
Enough random brainstorming ... let's get back to parsing out this data.
Qlik Sense provides a very easy to understand function called JsonGet that we will use to get the values for the fields contained in this heterogeneous JSON data. It wouldn't matter to you if I asked you to tell me the text contained in the report, or if I asked you for the bounty amount for Billy the Byte's wanted_poster, or if I asked you to tell me the message contained in the telegraph. You would simply find the fields and tell me the value.
If you are anything like me I'm sure you worried/fretted/shook in your boots that the function would return an error if a value you asked for didn't exist. But stand tall data sheriff, because it will simply return null if you ask for a field in the JSON structure that doesn't exist.
Thus the easiest way for you to read all of this heterogeneous data is to simply call the JsonGet() function for all of the field types from any of the entity types. It's that simple:
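In sketch form, assuming the inline table is called Mixed_JSON with the JSON text in a field called JSON_Block and the row id in RowNumber (rename to match the attached app), it is just one resident load:

Information_Fixed:
LOAD
    RowNumber,
    JsonGet(JSON_Block, '/type')    as type,
    // report fields
    JsonGet(JSON_Block, '/text')    as text,
    // wanted_poster fields
    JsonGet(JSON_Block, '/bounty')  as bounty,
    // telegraph fields
    JsonGet(JSON_Block, '/message') as message
Resident Mixed_JSON;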
If you are anything like me you don't like to assume anything, and you are a visual or experiential learner. So, go ahead and download the attached WildDataFrontier.qvf that is attached, upload it to your environment and open the load script.
{ Notice there are multiple sections. Each of them will pertain to a separate article and for this article the section named "4 - Mixed JSON: Fixed Values" is the one you want to have at the top of your script for this post. }
Each section of the application begins with preparing the JSON object(s) we will be parsing. Previous posts loaded a single JSON block, but in this case we have a simple inline table with multiple rows. To help you more deeply understand this set of data, imagine that you are reading data from a source with multiple rows, or that an API keeps responding with results. For each row you read, you use the RowNo() function to create an ID for that row and store it in a field called RowNumber.
Before making any changes to the data be sure to load the data for this section and check out the preview. I want to ensure that you are confident in the fact that jail cells aren't going to blow up when you call the JsonGet() function for a field:value pair and that row of data doesn't exist.
Now go ahead and add a new row to the inline table:
4, '{ "type": "homestead", "coordinates": "Some Where", "amount" : 7777 }'
Before actually adding any new JsonGet() function calls in the script just go ahead and load the data so that you can see that it has no problem reading your new row and will display the homestead type in the preview window for you. Now go ahead and add the 2 lines of code you need to handle the 2 new fields for the homestead type: coordinates and amount.
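For example, the two additional lines are just more of the same pattern, added inside that existing LOAD (again, JSON_Block is my placeholder for whatever the app calls the field):

    // homestead fields (new in row 4)
    JsonGet(JSON_Block, '/coordinates') as coordinates,
    JsonGet(JSON_Block, '/amount')      as amount,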
This post is part of a series aimed at ensuring you have the tools needed to Tame the Wild West Data Frontier, I mean Tame the Wild JSON Data Frontier, you may be facing.
Posts:
The image depicts what is called a JSON Array, and to be precise it's a JSON Array of Homogeneous Objects. I'm sure you are no greenhorn at this point in my series on parsing JSON. If I said tell me the name of the owner of the third saloon in our town, I'm sure you would immediately reply "Miss Ada."
This post is going to focus on how you came to your answer. Guess what function Qlik Sense provides that will allow you to parse that answer out of this unstructured textual array just like you were able to do with your human mind?
If you guessed it was the same JsonGet() function covered in the previous posts on Parsing: Flat JSON - Field Value Pairs and Parsing: Nested JSON Objects you are 100% correct.
The how is where it gets interesting. In the post Parsing: Nested JSON Objects, I brought up the concept that we needed to qualify the path to our field:value pair and we worked through how accessing the name field for an entity called sheriff was accomplished by doing this:
I told you that the notation was essentially /entity/field. But you somehow managed to astound me with your ability to parse the opening image and provide the answer to my question about the third saloon. You were able to answer because you automatically, in your mind, altered that notation to be something like /entity/INDEX/field.
The essential part of this post is the fact that the INDEX value starts at an offset of 0. Meaning the first value has an index of 0.
Assume that the JSON structure from the image at the start of the post is inside a field called Array that is in a table called JSON_Array. The following code would pull out values from the array. Notice that we pull the name, owner and capacity field values for the first saloon. We only pull the name for the second saloon. The last line simply pulls the entire JSON block, the "row" itself, for the final saloon in our array. We might deal with parsing it later. It's entirely possible that I purposely did that in this basic post to prepare for a much, much deeper post in the future where that is in fact what I do: I pull out the block and deal with it in a different part of the code.
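Here is what that looks like in sketch form; the /saloons paths follow the image, while the output field names are my own:

Saloon_Hardcoded:
LOAD
    // first saloon (index 0): pull all three values
    JsonGet(Array, '/saloons/0/name')     as Saloon1_Name,
    JsonGet(Array, '/saloons/0/owner')    as Saloon1_Owner,
    JsonGet(Array, '/saloons/0/capacity') as Saloon1_Capacity,
    // second saloon (index 1): just the name
    JsonGet(Array, '/saloons/1/name')     as Saloon2_Name,
    // third saloon (index 2): the entire JSON block for later parsing
    JsonGet(Array, '/saloons/2')          as Saloon3_Block
Resident JSON_Array;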
If you are anything like me you don't like to assume anything, and you are a visual or experiential learner. So, go ahead and download the attached WildDataFrontier.qvf, upload it to your environment, and open the load script.
{ Notice there are multiple sections. Each of them will pertain to a separate article and for this article the section named "3 - JSON Array: Homogeneous Objects" is the one you want to have at the top of your script for this post. }
Each section will begin with a common pattern. I will show you the prettified version of the JSON structure, meaning it's indented and thus easy to read. Then I will remove the line feeds so that the JSON structure being covered is a single line that we deal with as a single string for transformation within Qlik Sense. That single text block is what APIs will return, or what you will read out of textual fields that contain JSON blocks from RDBMS systems. In the other articles it was easy for me to simply create an inline table that contained that JSON block. But for an array we have a special issue to deal with ... those brackets that define the beginning and end of the array: "[" "]"
Those 2 characters are special and define the beginning and end of our inline statement and thus they cause a problem. Notice that I've simply duplicated the characters [[ and ]] in my line of code, so that the inline table reads it in, and then I do a preceding load to pull the duplicate brackets right back out.
Before you worry about making any changes, simply load the script as is and go to the Preview window and see the JSON_Array table. You will then see that our Array field is formatted as needed to be a proper JSON Array.
When I began the article I asked you to focus on the third saloon. Then I showed you the code needed to parse out the same answer that you were able to come up with mentally. 3 entries fit my example case for this post because ... I needed to parse out the array 3 different ways. I wanted all values for a "row", only 1 value for a "row", and just the JSON for the final "row."
But I don't want to leave you hanging there. Some rascally gunslinger is bound to head into your town with 5 rows. Or 10 rows. Or 20 rows. The last thing I want you to do is sit there and hand code a hard coded limit and then tell the good folks in your town that as Data Sheriff you will happily give them insights from the first 20 values. They just might run you out of town.
Never fear my friend, Qlik Sense provides a fantastic function call that will actually iterate through as many rows as needed. The function is called IterNo() and as the help makes really clear, the only useful purpose for the IterNo() function is when used with a While condition. In other words ... you need a way to ensure we end the recursion.
So here is how we can utilize it to parse our JSON Array of Homogeneous Objects. Rather than just reading the resident JSON_Array like we did above when hardcoding the parsing, we simply take advantage of the IterNo() function and then combine it with another Qlik Sense function called IsJson().
Resident JSON_Array
WHILE IsJson(JsonGet(Array, '/saloons/' & (IterNo()-1)));
The IterNo() function will start at 1 and then keep iterating until you tell it to stop. Easy breezy.
The IsJson() function will say "Yep, you have a properly formatted JSON structure" or "Sorry partner, you drew a bad hand."
Thus ... as the IterNo() function iterates through we will be handling the following:
The first 3 iterations will succeed, but the 4th iteration will return nothing and thus end our while loop. As you will see in the script in the WildDataFrontier application, we can then simply refer to the iteration number (which will remain constant for the row being processed) and pull out our values for every row that is returned in the array. Whether it be 1, 10, 20, 100 or more.
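Putting the pieces together, the iterative load looks roughly like this; the output field names are my own, and the WHILE clause is what ends the iteration:

Saloons:
LOAD
    IterNo() as SaloonNo,
    JsonGet(Array, '/saloons/' & (IterNo()-1) & '/name')     as SaloonName,
    JsonGet(Array, '/saloons/' & (IterNo()-1) & '/owner')    as SaloonOwner,
    JsonGet(Array, '/saloons/' & (IterNo()-1) & '/capacity') as SaloonCapacity
Resident JSON_Array
WHILE IsJson(JsonGet(Array, '/saloons/' & (IterNo()-1)));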
Thus, when we preview the flat/hardcoded method, the results are usable.
But when we preview this iterative approach, the results are usable today, tomorrow, and next year. So you can remain the Data Sheriff.
This post is part of a series aimed at ensuring you have the tools needed to Tame the Wild West Data Frontier, I mean Tame the Wild JSON Data Frontier, you may be facing.
Posts: