Managing users and groups just got easier. With our new Custom Groups capability, you now have more flexibility and control to handle user and group relationships directly in Qlik Cloud, without needing to rely on an Identity Provider (IdP).
Video: Introducing ‘Custom Groups’ to Qlik Cloud Analytics
What’s New?
No more back and forth with external systems. Now, you can manage users and their group memberships directly in Qlik Cloud. Everything is in one place for easy access control.
Assign users to groups in just a few clicks. Control both administrative access and feature permissions without hassle.
Custom Groups adapts to your unique structure, making access management easier and more aligned with the way your organization works.
Use cases:
Manage group memberships within Qlik Cloud and simplify integrations with third-party apps, without depending on IdP-provided groups.
Gain full control over user and group assignments, without being tied to Active Directory or other IdP systems. Adapt faster to changing needs.
Key Benefits:
Custom Groups makes managing access in Qlik Cloud simple and efficient, no matter the size or type of your organization. It saves you time, reduces complexity, and gives you full control over how users and groups are managed.
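For teams that prefer to automate this, group management is also exposed through the Qlik Cloud REST API. Here is a minimal sketch that creates a custom group; the tenant URL and request body fields are placeholders/assumptions, so check the groups API reference on qlik.dev before relying on it:

// Minimal sketch (Node.js 18+): create a custom group via the Qlik Cloud REST API.
// Tenant URL, API key variable, and body fields are placeholders - verify on qlik.dev.
const TENANT = "https://your-tenant.us.qlikcloud.com";

const response = await fetch(`${TENANT}/api/v1/groups`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.QLIK_API_KEY}`,
  },
  body: JSON.stringify({ name: "Finance-Analysts" }),
});

const group = await response.json();
console.log("Created group:", group.id);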
Learn more here:
The File Connector for the Data Gateway provides a key capability: bridging on-premises file data to Qlik Cloud Analytics. This new connector helps on-premises analytics customers transition to cloud-based analytics by letting them easily access and leverage existing on-premises file data, especially QVDs, in Qlik Cloud Analytics. With familiar file access capabilities, the File Connector can also serve as a more robust replacement for the Qlik Data Transfer tool.
Customers can use the File Connector to access network drives and file systems via the Gateway server and can preview a file using read-only access to ensure data security. The File Connector can then load firewalled data files, of any currently supported file type, directly into Qlik Cloud.
The File Connector also utilizes predefined connection definitions for quick setup and supports wildcards when selecting files and folders.
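For example, once a File connection through the gateway is defined (the connection name "OnPremFiles" below is a placeholder for your own), a load script can read on-premises QVDs directly, wildcards included:

// Load all monthly sales QVDs from an on-premises share via the gateway.
// "OnPremFiles" stands in for your own File connection name.
Sales:
LOAD *
FROM [lib://OnPremFiles/finance/sales_*.qvd] (qvd);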
Learn more here: Qlik Help: File Connector | SaaS in 60
The Qlik Data Gateway - Direct Access allows Qlik Sense SaaS applications to securely access behind-the-firewall data over a strictly outbound, encrypted, and mutually authenticated connection.
We recently released Direct Access gateway 1.7.0 and 1.7.1. Version 1.7.0 introduced the File Connector mentioned above, and 1.7.1 adds a REST Connector integrated via the gateway. It has the same capabilities as the REST Connector within Qlik Cloud but also provides access to sources based on REST APIs residing on-premises (behind a firewall). We recommend that you use this REST Connector instead of the Qlik Data Transfer tool.
Qlik Answers knowledge bases now support Google Drive and OneDrive connections as data sources. You can find more information about creating knowledge bases here.
The Snowflake target connector for data replication and data pipelines now supports configuration of advanced (additional) ODBC and JDBC connection properties. This allows users to have fine-grained control over connection definitions beyond standard parameters, including adding properties such as Role, Secondary Role, and more.
You can find more information about these additional connection properties here.
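As a quick illustration (the exact property names come from the connector UI and the Snowflake driver, so treat these as placeholders), the additional properties are entered as name/value pairs along these lines:

Role=REPORTING_ROLE
Secondary Role=ALL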
Qlik Answers - This Qlik-native connector enables the creation of data sources in knowledge bases using existing data connections. It also allows users to interact with assistants, asking questions related to the data source and receiving answers based on the existing data. This blog discusses how to get started.
Updated Connectors
Edited December 5th: identified upgrades leading to complications with extensions
Edited December 6th: added workaround for extension complication
Edited December 10th: added CVEs (CVE-2024-55579 and CVE-2024-55580)
Edited December 12th, noon CET: added new patch versions and visualization and extension fix details; previous patches were removed from the download site
Hello Qlik Users,
New patches have been made available and have replaced the original six releases. They include the original security fixes (CVE-2024-55579 and CVE-2024-55580) as well as QB-30633 to resolve the extension and visualization defect.
If you continue to experience issues with extensions or visualizations, see QB-30633: Visualizations and Extensions not loading after applying patch.
Security issues in Qlik Sense Enterprise for Windows have been identified, and patches have been made available. Details can be found in Security Bulletin High Severity Security fixes for Qlik Sense Enterprise for Windows (CVE-2024-55579 and CVE-2024-55580).
Today, we have released six service releases across the latest versions of Qlik Sense to patch the reported issues. All versions of Qlik Sense Enterprise for Windows prior to and including these releases are impacted:
No workarounds can be provided. Customers should upgrade Qlik Sense Enterprise for Windows to a version containing fixes for these issues. November 2024 IR, released on the 26th of November, contains the fix as well.
This issue only impacts Qlik Sense Enterprise for Windows. Other Qlik products including Qlik Cloud and QlikView are NOT impacted.
All Qlik software can be downloaded from our official Qlik Download page (customer login required). Follow best practices when upgrading Qlik Sense.
The information in this post and Security Bulletin High Severity Security fixes for Qlik Sense Enterprise for Windows (CVE-2024-55579 and CVE-2024-55580) is disclosed in accordance with our published Security and Vulnerability Policy.
The Security Notice label is used to notify customers about security patches and upgrades that require a customer’s action. Please subscribe to the ‘Security Notice’ label to be notified of future updates.
Thank you for choosing Qlik,
Qlik Global Support
This type of question is common in all types of business intelligence. I say “type of question” since it appears in many different forms: Sometimes it concerns products, but it can just as well concern any dimension, e.g. customer, supplier, sales person, etc. Further, here the question was about turnover, but it can just as well be e.g. number of support cases, or number of defect deliveries, etc.
It is called Pareto analysis or ABC analysis and I have already written a blog post on this topic. However, in the previous post I only explained how to create a measure which showed the Pareto class. I never showed how to create a dimension based on a Pareto classification – simply because it wasn’t possible.
But now it is.
But first things first. The logic for a Pareto analysis is that you first sort the products according to their sales numbers, then accumulate the numbers, and finally calculate the accumulated measure as a percentage of the total. The products contributing to the first 80% are your best, your “A” products. The next 10% are your “B” products, and the last 10% are your “C” products. In the above graph, these classes are shown as colors on the bars.
The previous post shows how this can be done in a chart measure using the Above() function. However, if you use the same logic, but instead inside a sorted Aggr() function, you can achieve the same thing without relying on the chart sort order. The sorted Aggr() function is a fairly recent innovation, and you can read more about it here.
The sorting is needed to calculate the proper accumulated percentages, which will give you the Pareto classes. So if you want to classify your products, the new expression to use is
=Aggr(
If(Rangesum(Above(Sum({1} Sales)/Sum({1} total Sales),1,RowNo()))<0.8, 'A',
If(Rangesum(Above(Sum({1} Sales)/Sum({1} total Sales),1,RowNo()))<0.9, 'B',
'C')),
(Product,(=Sum({1} Sales),Desc))
)
The first parameter of the Aggr() – the nested If()-functions – is in principle the same as the measure in the previous post. Look there for an explanation.
The second parameter of the Aggr(), the inner dimension, contains the magic of the sorted Aggr():
(Product,(=Sum({1} Sales),Desc))
This structured parameter specifies that the field Product should be used as dimension, and its values should be sorted descending according to Sum({1} Sales). Note the equals sign. This is necessary if you want to sort by expression.
So the Products inside the Aggr() will be sorted descending, and for each Product the accumulated relative sales in percent will be calculated, which in turn is used to determine the Pareto classes.
The set analysis {1} is necessary if you want the classification to be independent of the made selection. Without it, the classification will change every time the selection changes. But perhaps a better alternative is to use {$<Product= >}. Then a selection in Product (or in the Pareto class itself) will not affect the classification, but all other selections will.
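With that alternative, the classification expression is the same as above, with {1} replaced by {$<Product= >}:

=Aggr(
If(Rangesum(Above(Sum({$<Product= >} Sales)/Sum({$<Product= >} total Sales),1,RowNo()))<0.8, 'A',
If(Rangesum(Above(Sum({$<Product= >} Sales)/Sum({$<Product= >} total Sales),1,RowNo()))<0.9, 'B',
'C')),
(Product,(=Sum({$<Product= >} Sales),Desc))
)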
The expression can be used either as dimension in a chart, or in a list box. Below I have used the Pareto class as first dimension in a pivot table.
If you use this expression in a list box, you can directly select the Pareto class you want to look at.
The other measures in the pivot table are the exclusive and inclusive accumulated relative sales, respectively. I.e. the lower and upper bounds of the product sales share:
Exclusive accumulated relative sales (lower bound):
=Min(Aggr(
Rangesum(Above(Sum({1} Sales)/Sum({1} total Sales),1,RowNo())),
(Product,(=Sum({1} Sales),Desc))
))
Inclusive accumulated relative sales (upper bound):
=Max(Aggr(
Rangesum(Above(Sum({1} Sales)/Sum({1} total Sales),0,RowNo())),
(Product,(=Sum({1} Sales),Desc))
))
Good luck in creating your Pareto dimension!
Further reading related to this topic:
We are thrilled to announce that the new Qlik Learning will launch on February 17, 2025.
What is coming?
What can you expect with the new Qlik Learning experience:
What do we recommend to get prepared?
While Qlik is excited about this transition, there are actions we recommend:
There will be downtime while we prepare the new Qlik Learning experience for you. The downtime window starts at 8:00 AM ET on February 14 and ends at 7:00 PM ET on February 16. During this time, access to the current platform will be unavailable.
Qlik Continuous Classroom users:
If you are in the middle of completing a course, we recommend you complete it ahead of the new Qlik Learning launch, so your completion data is transferred, and progress is not lost.
Any achievement or qualification badges for the 2019, 2020, and 2021 Business Analyst or Data Architect will not be migrated into the new Qlik Learning, so we recommend downloading and sharing these using your Badgr backpack; also see the Sharing Badges on Social Sites document.
Check out the Qlik Learning FAQ we’ve prepared for you.
Talend Academy users:
You will log into the new Qlik Learning with your Qlik account. Don’t have an account? Sign up for a Qlik Account ahead of the launch using the same email address you use on Talend Academy.
Check out the Qlik Learning FAQ we’ve prepared for you.
Reach out to Qlik Learning at education@qlik.com if you have any questions. We greatly appreciate your patience as we work to enrich your learning experience.
Stay tuned for exciting learning updates!
*Important note: While we are confident in the February 17, 2025, launch date, please note there is always a possibility of adjustments. We will keep you informed promptly should any changes occur.
We are thrilled to announce that the new Qlik Learning is now live and ready for you! It is a single, integrated learning platform designed to enhance your learning experience and help you get the most out of Qlik.
An Unlimited Qlik Learning subscription is designed to energize your learning experience, accelerate your success, and help you grow your skills and expertise throughout your career.
What can you expect with the new Qlik Learning experience?
How do you get started?
To get started, simply log in to Qlik Learning with your Qlik account (Don't have an account? Sign up) and complete the short Getting Started course. This will unlock the full range of opportunities available to you. Additionally, after finishing the course, you'll earn a digital badge that you can showcase within your network!
Check out the Qlik Learning FAQs we’ve prepared for you, and reach out to Qlik Learning at education@qlik.com if you have any additional questions.
We can’t wait to hear about your experiences and what you love most about the new Qlik Learning!
Starting March 11th, 2025, Slack will enforce changes in their APIs affecting file uploads. To accommodate these breaking changes, we have introduced new blocks in the Slack connector for Qlik Application Automation.
What blocks are affected?
The Send Binary File block will be deprecated. Instead, use the Upload File to Channel block to upload binary files. If you still want to send a base64 encoded string, use the Send Text Based File block and configure the encoding parameter to base64.
The Upload File To Channel block and Send Text Based File block need to be updated to a new version. To perform this update, replace existing blocks with new blocks by dragging the blocks from the block library.
Any automation using affected blocks needs to be updated.
See Breaking changes for file support in the Slack connector: new blocks introduced for steps and details.
Thank you for choosing Qlik,
Qlik Support
The Indian IT hiring landscape is at a pivotal juncture as it transitions from a year of decline toward a more hopeful future. The focus on specialised skills, particularly in AI and data science, combined with geographical shifts toward Tier 2 cities, indicates a transformation within the sector. While IT hiring in India in 2024 was marked by delayed onboarding and a decline in overall hiring activity, the outlook for 2025 appears promising, with expectations of recovery and growth fuelled by improving economic conditions and technological advancements.
Visualizations & Dashboards
Navigation Enhancements
In July 2024 we introduced major navigation enhancements across the platform, including both new features and improvements to existing ones. Combined, these components allow a more intuitive and fluid experience for everyone. The bulleted items below are the areas positively impacted by this release:
Our teams have been hard at work building robust, accessible assets and resources that cover this topic in depth. For more information, whether high-level or nitty-gritty, please check out the following:
Pivot Table Improvements
This is one of those times when we condone messing with a classic such as the Pivot Table, especially when you improve it with features just as classic, you know? Check out the new additions below!
Cyclic Dimensions Improvements (based on customer feedback!)
Straight Table - Enhancements
Improvements to Selection Bar
Combo Chart Updates
Tab Container Changes + Bundled Charts
Data Prep
Improved UX for Script Editing
New Functionality for Search & Replace
New Ability for Autocomplete Hints
*Important Notice*
1) Attention Android Mobile Users
If you are using an Android mobile device to access Qlik Sense through the mobile app, please do not upgrade to the November 2024 release just yet. The Android mobile client requires additional updates that weren’t ready in time for this release.
Important Clarification -> Android users can still access Qlik Sense via a mobile web browser without any issues. This limitation only affects the Qlik Sense mobile app on Android.
This update does not impact:
Only customers using Android mobile devices via the Qlik Sense mobile app are advised to delay upgrading until we release a patch to address this.
We apologize for any inconvenience this may cause and appreciate your understanding. Our team is working diligently to complete the necessary updates, and we will notify you as soon as the patch is available.
2) Add-on Upgrade Requirements
View Support Updates for details on add-ons that must be upgraded, if you upgrade to Qlik Sense Enterprise on Windows November 2024.
Thank you for your continued support! For questions or assistance, please reach out to our support team.
Qlik Answers transforms unstructured data into clear, AI-powered insights. Today, I'll show you how to integrate Qlik Answers directly into your web app using the newly released Knowledgebases API and Assistants API.
In this blog, we'll build a custom Football chat assistant from scratch powered by Qlik Answers.
We’ll leverage the Assistants API to power real-time Q&A while the knowledge base is already set up in Qlik Sense.
For those of you who prefer a ready-made solution, you can quickly embed the native Qlik Answers UI using qlik-embed:
<qlik-embed
ui="ai/assistant"
assistant-id="<assistant-id>"
></qlik-embed>
You can explore the ai/assistant parameters (and other UIs available in qlik-embed) on qlik.dev, or take a look at some of my previous blog posts here and here.
For full documentation on the Knowledgebases API and Assistants API, visit qlik.dev/apis/rest/assistants/ and qlik.dev/apis/rest/knowledgebases/.
Let’s dive in and see how you can take control of your Qlik Answers UI experience!
Before we start building our DIY solution, here’s a quick refresher:
Knowledgebases: Collections of individual data sources (like HTML, DOCX, TXT, PDFs) that power your Qlik Answers. (In our case, we built the KB in Qlik Sense!)
Assistants: The chat interface that interacts with users using retrieval-augmented generation (RAG). With generative AI in the mix, Qlik Answers delivers reliable, linked answers that help drive decision-making.
Step 1: Get your data ready
Since we already created our knowledge base directly in Qlik Sense, we skip the Knowledgebases API. If you’d like to build one from scratch, check out the knowledgebases API documentation.
Step 2: Configure your assistant
With your knowledge base set, you create your assistant using the Assistants API. This is where the magic happens: you can manage conversation starters, customize follow-ups, and more. Visit the assistants API docs on qlik.dev to learn more.
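If you want to script this step rather than use the UI, a minimal sketch against the Assistants API looks like the following; the request body fields are assumptions on my part, so verify the exact schema on qlik.dev/apis/rest/assistants/:

// Minimal sketch (Node.js 18+): create an assistant via the Assistants API.
// Tenant URL and body field names are placeholders/assumptions.
const TENANT = "https://your-tenant.us.qlikcloud.com";

const response = await fetch(`${TENANT}/api/v1/assistants`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.apiKey}`,
  },
  body: JSON.stringify({
    name: "Football Assistant",
    description: "Answers NFL questions from our knowledge base",
  }),
});

const assistant = await response.json();
console.log("Assistant id:", assistant.id); // used later as <assistant-id>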
Step 3: Build Your Custom UI
Now, let’s look at our custom chat UI code. We'll build a simple football-themed chat interface that lets users ask questions related to the NFL. The assistant’s answers stream seamlessly into the interface.
HTML:
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Football Assistant</title>
<link rel="stylesheet" href="styles.css" />
</head>
<body>
<div class="chat-container">
<div class="chat-header">
<h4>Let's talk Football</h4>
<span class="header-span">You ask, Qlik answers.</span>
</div>
<div class="chat-body" id="chat-body">
<div class="message assistant">
<div class="bubble">
<p>Hey there, champ! Ask me anything.</p>
</div>
</div>
</div>
<div class="chat-footer">
<input
type="text"
id="chat-input"
placeholder="Type your Football related question..."
/>
<button id="send-btn">Send</button>
</div>
</div>
<script src="scripts.js"></script>
</body>
</html>
Frontend JS:
document.addEventListener("DOMContentLoaded", () => {
const chatBody = document.getElementById("chat-body");
const chatInput = document.getElementById("chat-input");
const sendButton = document.getElementById("send-btn");
// Append a user message immediately
function appendUserMessage(message) {
const messageDiv = document.createElement("div");
messageDiv.classList.add("message", "user");
const bubbleDiv = document.createElement("div");
bubbleDiv.classList.add("bubble");
bubbleDiv.innerHTML = `<p>${message}</p>`;
messageDiv.appendChild(bubbleDiv);
chatBody.appendChild(messageDiv);
chatBody.scrollTop = chatBody.scrollHeight;
}
// Create an assistant bubble that we update with streaming text
function createAssistantBubble() {
const messageDiv = document.createElement("div");
messageDiv.classList.add("message", "assistant");
const bubbleDiv = document.createElement("div");
bubbleDiv.classList.add("bubble");
bubbleDiv.innerHTML = "<p></p>";
messageDiv.appendChild(bubbleDiv);
chatBody.appendChild(messageDiv);
chatBody.scrollTop = chatBody.scrollHeight;
return bubbleDiv.querySelector("p");
}
// Send the question to the backend and stream the answer
function sendQuestion() {
const question = chatInput.value.trim();
if (!question) return;
// Append the user's message
appendUserMessage(question);
chatInput.value = "";
// Create an assistant bubble for the answer
const assistantTextElement = createAssistantBubble();
// Open a connection to stream the answer
const eventSource = new EventSource(
`/stream-answers?question=${encodeURIComponent(question)}`
);
eventSource.onmessage = function (event) {
if (event.data === "[DONE]") {
eventSource.close();
} else {
assistantTextElement.innerHTML += event.data;
chatBody.scrollTop = chatBody.scrollHeight;
}
};
eventSource.onerror = function (event) {
console.error("EventSource error:", event);
eventSource.close();
assistantTextElement.innerHTML += " [Error receiving stream]";
};
}
sendButton.addEventListener("click", sendQuestion);
chatInput.addEventListener("keydown", (event) => {
if (event.key === "Enter") {
event.preventDefault();
sendQuestion();
}
});
});
Backend node.js script:
import express from "express";
import fetch from "node-fetch";
import path from "path";
import { fileURLToPath } from "url";
// Setup __dirname for ES modules
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
// Define port and initialize Express app
const PORT = process.env.PORT || 3000;
const app = express();
app.use(express.static("public"));
app.use(express.json());
// Serve the frontend
app.get("/", (req, res) => {
res.sendFile(path.join(__dirname, "public", "index.html"));
});
// Endpoint to stream Qlik Answers output
app.get("/stream-answers", async (req, res) => {
const question = req.query.question;
if (!question) {
res.status(400).send("No question provided");
return;
}
// Set headers for streaming response
res.writeHead(200, {
"Content-Type": "text/event-stream",
"Cache-Control": "no-cache",
Connection: "keep-alive",
});
const assistantId = "b82ae7a9-9911-4830-a4f3-f433e88496d2";
const baseUrl = "https://sense-demo.us.qlikcloud.com/api/v1/assistants/";
const bearerToken = process.env["apiKey"];
try {
// Create a new conversation thread
const createThreadUrl = `${baseUrl}${assistantId}/threads`;
const threadResponse = await fetch(createThreadUrl, {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${bearerToken}`,
},
body: JSON.stringify({
name: `Conversation for question: ${question}`,
}),
});
if (!threadResponse.ok) {
const errorData = await threadResponse.text();
res.write(`data: ${JSON.stringify({ error: errorData })}\n\n`);
res.end();
return;
}
const threadData = await threadResponse.json();
const threadId = threadData.id;
// Invoke the Qlik Answers streaming endpoint
const streamUrl = `${baseUrl}${assistantId}/threads/${threadId}/actions/stream`;
const invokeResponse = await fetch(streamUrl, {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${bearerToken}`,
},
body: JSON.stringify({
input: {
prompt: question,
promptType: "thread",
includeText: true,
},
}),
});
if (!invokeResponse.ok) {
const errorData = await invokeResponse.text();
res.write(`data: ${JSON.stringify({ error: errorData })}\n\n`);
res.end();
return;
}
// Process and stream the response text
const decoder = new TextDecoder();
for await (const chunk of invokeResponse.body) {
let textChunk = decoder.decode(chunk);
let parts = textChunk.split(/(?<=\})(?=\{)/);
for (const part of parts) {
let trimmedPart = part.trim();
if (!trimmedPart) continue;
try {
const parsed = JSON.parse(trimmedPart);
if (parsed.output && parsed.output.trim() !== "") {
res.write(`data: ${parsed.output}\n\n`);
}
} catch (e) {
if (trimmedPart && !trimmedPart.startsWith('{"sources"')) {
res.write(`data: ${trimmedPart}\n\n`);
}
}
}
}
res.write("data: [DONE]\n\n");
res.end();
} catch (error) {
res.write(`data: ${JSON.stringify({ error: error.message })}\n\n`);
res.end();
}
});
// Start the backend server
app.listen(PORT, () => {
console.log(`Backend running on port ${PORT}`);
});
Okay, that was a lot of code! Let’s break it down into bite-sized pieces so you can see exactly how our custom Qlik Answers chat interface works.
1. The HTML
Our index.html creates a custom chat UI. It sets up:
A chat footer with an input field and a send button for users to type their questions.
2. The Frontend JavaScript (scripts.js)
This script handles the user interaction:
Appending messages: When you type a question and hit send (or press Enter), your message is added to the chat window.
Creating chat bubbles: It creates separate message bubbles for you (the user) and the assistant.
Streaming the answer: It opens a connection to our backend so that as soon as the assistant’s response is ready, it streams into the assistant’s bubble. This gives you a live, real-time feel without any manual “typing” effect.
3. The Node.js Backend (index.js)
Our backend does the heavy lifting:
Creating a conversation thread: It uses the Assistants API to start a new thread for each question.
Invoking the streaming endpoint: It then sends your question to Qlik Answers and streams the response back.
Processing the stream: As chunks of text come in, the backend cleans them up—splitting any concatenated JSON and only sending the useful text to the frontend.
Closing the stream: Once the complete answer is sent, it signals the end so your chat bubble doesn’t wait indefinitely.
4. How It All Connects
When you send a question:
Your message is displayed immediately in your custom chat bubble.
The backend creates a thread and requests an answer from Qlik Answers.
The response is streamed back to your UI in real time, making it look like the assistant is typing out the answer as it arrives.
P.S. This is just a simple example to introduce you to the new Answers APIs and show you how to get started using them; you'll need to double-check limitations and adhere to best practices when using the APIs in a production environment.
You can find the full code here:
https://replit.com/@ouadielimouni/QA-Test-APIs#public/index.html
Happy coding - and, Go Birds 🦅!
I’m thrilled to write this installment of Qlik’s innovation blog because the new Qlik Talend Cloud features I’ve chosen to highlight are two of the capabilities I’ve been testing over the past few weeks. So, without any further ado, let's dive into these exciting new capabilities!
Since its inception, Qlik Talend Cloud pipelines have offered a straightforward design metaphor. Often, you’d create a pipeline for a single data source that continually landed, merged, and transformed data changes into a single target, such as a cloud data warehouse or lake. As time progressed, the ability to add multiple data sources to a pipeline was introduced, and dedicated replication tasks with multiple targets followed a short time later.
Qlik Talend Cloud Data Pipelines
However, many customers gave feedback that they’d like pipelines to be more modular, especially as projects became bigger and more complex. Modularity would not only increase component reusability, but also enable pipelines to be segregated by business domain. In addition, pipeline development would be more flexible while adhering to the best data-design practices.
Well, I’m happy to announce that “Cross Project Pipelines” are now generally available in all tenants. You can split complex pipelines consisting of multiple ingestion and transformation tasks into components that can be reused by other projects, providing greater design flexibility and simplified pipeline management. In addition, Cross Project Pipelines can be segregated by data domain to encourage Business Domain Data Product or Data Mesh design principles.
Cross Project Pipeline
At the end of 2024, we released an AI processor that allowed you to call native Databricks AI functions in a Transformation Flow without the need to hand-code SQL. Databricks AI functions are a set of built-in SQL functions that let you apply AI directly to your data within SQL queries. This means you can use powerful AI models for tasks like sentiment analysis, text generation, and more, all from your Qlik Talend Cloud pipelines. If you can’t remember that far back, then check out this Qlik community blog post, “Inject AI into your Databricks Qlik Talend Cloud Pipeline”.
While many of our Databricks customers were overjoyed, the Snowflake proponents felt left out, regularly commenting that Snowflake Cortex offered similar features. Those comments were frequently followed by the question, “When will Qlik’s AI processor support Snowflake too?” Once again, I’m happy to say we’ve listened, and the AI processor now supports Snowflake Cortex AI functions as well! The details of how to use Snowflake Cortex go beyond the scope of this blog post, but stay tuned: a detailed article and demo of this feature will be published shortly. Until then, look at the screenshot below to see the AI processor in action (a short SQL sketch follows it), and follow the link for more information about Snowflake Cortex LLM functions.
Transformation Flow and AI Processor
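To give a feel for what the processor generates on your behalf, Snowflake Cortex exposes SQL functions such as SNOWFLAKE.CORTEX.SENTIMENT and SNOWFLAKE.CORTEX.COMPLETE. A hand-coded equivalent of a sentiment step might look like this (the table and column names here are hypothetical):

-- Score the sentiment of each review directly in Snowflake.
-- product_reviews / review_text are placeholder names.
SELECT
  review_id,
  SNOWFLAKE.CORTEX.SENTIMENT(review_text) AS sentiment_score
FROM product_reviews;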
Well, there you have it: two great new features that expand the usefulness of Qlik Talend Cloud, and it doesn’t stop there. If you’re curious about what other innovations, enhancements, and improvements are coming to the Qlik platform in 2025, join our Qlik Insider Webinar - Roadmap Edition taking place on February 26th. Follow this link and register today!
Hi everyone,
Want to stay a step ahead of important Qlik support issues? Then sign up for our monthly webinar series where you can get first-hand insights from Qlik experts.
On Thursday, February 27, Qlik will host another Techspert Talks session, and this time we are looking at Advanced Qlik Sense System Monitoring.
But wait, what is it exactly?
Techspert Talks is a free webinar held on a monthly basis, where you can hear directly from Qlik Techsperts on topics that are relevant to Customers and Partners today.
In this session we will cover:
Choose the webinar time that's best for you.
The webinar is hosted using ON24 in English and will last 30 minutes plus time for Q&A.
Hope to see you there!!
Several years ago, I blogged about how creating a synthetic dimension using ValueList allowed us to color dimensions in a chart. ValueList is commonly used where there is no suitable dimension in the data model, so a synthetic one is created with ValueList. You can read more about ValueList in my previous blog post. In this blog post, I am going to share how I used ValueList to handle omitted dimension values in a chart.
I recently ran into a scenario when creating visualizations based on survey data. In the survey, the participant was asked for their age as well as their age group. The ages were grouped into the following buckets:
Once I loaded the data, I realized that not all age groups had participants, so my chart looked like the bar chart below: there was a bar and value only for the age groups the participants fell into.
While I could leave the chart like this, I wanted to show all the age group buckets so that it was evident that there were no participants (0%) in the other buckets. In this example, the four populated age groups were consecutive, so it did not look odd to leave the chart as is; but imagine if there were no participants in the 45-54 bucket. The chart would look odd with the gap between 44 and 55.
I explored various ways to handle this. One way was to add rows to the respective table for the missing age groups. This worked fine, but I was not a fan of adding rows to the survey table that were not related to a specific participant. The option I settled on was using ValueList to add the omitted age groups. While this option works well, it can lead to lengthy expressions for the measures. In this example, there were only seven age group buckets, so it was manageable, but if you have many dimension values it may not be the best option.
To update the bar chart using ValueList, I changed the dimension from
To
Then I changed the measure from
To
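The expression screenshots are not reproduced here, but the pattern looks like this, with hypothetical field names and bucket labels:

// Dimension: replace the data field with a synthetic ValueList dimension.
// Before: AgeGroup
// After:
=ValueList('18-24','25-34','35-44','45-54','55-64','65-74','75+')

// Measure: test the ValueList value and compute the share per bucket.
=If(ValueList('18-24','25-34','35-44','45-54','55-64','65-74','75+')='18-24',
    Count({<AgeGroup={'18-24'}>} ParticipantID) / Count(total ParticipantID),
 If(ValueList('18-24','25-34','35-44','45-54','55-64','65-74','75+')='25-34',
    Count({<AgeGroup={'25-34'}>} ParticipantID) / Count(total ParticipantID),
 // ...one If() branch per remaining bucket...
 0))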
Using ValueList in the dimension created a synthetic dimension with each age group option that was included in the survey. Now I will see all the age buckets in the chart even if there were no participants that fell in the age group bucket. Since I am using ValueList for the dimension, I need to update the measure to use it as well. This is where a single line measure can become a lengthier measure because I need to create a measure for every value in the synthetic dimension, thus the nested if statement above. The result looks like this:
There are no gaps in the age buckets, and we can see all the age bucket options that were presented in the survey. I prefer this chart over the first bar chart I shared because I have a better understanding of the survey responses presented to the participants as well as the response they provided. I would be interested in hearing how others have handled similar scenarios.
Thanks,
Jennell
Packaging: Qlik Talend Cloud brings together Qlik and Talend’s best-of-breed capabilities for Data Integration, Quality, and Governance in four simple, use-case-centric editions. These editions are designed to simplify the process of choosing the right solution and help customers focus on using the solution quickly to drive business results. With its broad range of best-in-class capabilities, Qlik Talend Cloud supports customer scenarios across every level of technical maturity: from ingesting data in batches from SaaS applications into a cloud warehouse, to developing sophisticated data products with robust governance, and everything in between. Depending on your business and technological needs, you can choose an edition today and smoothly transition to more advanced editions as needed over time without disruption.
Pricing Model: A key facet of Qlik Talend Cloud’s development was the introduction of a usage capacity-based pricing model for all capabilities within the Qlik Talend Cloud portfolio. This pricing model enables organizations to more tightly align their investment in Qlik to the value that they realize in the solution. Capacity bands have been defined to provide a specific level of usage capacity for each edition. Customers can start with an initial capacity commitment and as they ramp up the use of the solution, flexibly add more capacity bands to meet their business needs. There also is a structured pricing incentive for higher levels of capacity commitment to support deployments at scale.
Pricing Metrics: To help customers plan their capacity commitment, Qlik Talend Cloud uses two simple types of capacity bands: one for data movement and basic transformation, and another for more sophisticated data integration and quality needs. Let's look at them one by one.
Data Moved: This is a measure of total volume of data moved (in GB) in a given month.
Job Executions: This is a measure of the total number of times each job (Artifact ID) is executed in a given month. (This metric is relevant for advanced data integration and quality needs.)
Job Duration: This is a measure of the total time taken (in hours) for executions of all jobs in a given month. (This metric is relevant for advanced data integration and quality needs.)
Self-service usage Dashboards: In order to enable customers to analyze their usage, Qlik provides an intuitive and interactive self-service usage dashboard that provides granular insights into usage trends and underlying drivers. Customers frequently use this information for internal cost-allocation across different divisions or departments.
Estimating capacity needs: Please reach out to your Qlik account team, who can work with you to understand your workloads and use in-house capacity estimation tools to estimate your capacity needs for data movement as well as job executions and duration.
See below for a short video that summarizes the Qlik Talend Cloud packaging and pricing model. More details can be found on our pricing page here.
In today’s data-driven world, trust in data isn’t just important—it’s essential. Organizations depend on high-quality data to drive informed decisions, fuel innovation, and maintain a competitive edge. But data quality isn’t one-size-fits-all. In customer service, a missing address might be acceptable if the primary contact method is valid. However, an incomplete or incorrect address can lead to payment failures and operational inefficiencies in billing and invoicing.
To evaluate trust in data, organizations need a metric-driven, objective measure that can be tuned to meet their specific definitions of data quality. A flexible and transparent approach ensures organizations can adapt trust assessments to their unique operational data quality needs.
Qlik Trust Score in Qlik Talend Cloud evaluates a dataset’s trustworthiness by aggregating various data quality dimensions. It provides a holistic view to help organizations identify gaps and prioritize improvements, with a numeric score (ranging from 0 to 5) for a quick assessment of dataset reliability.
Here are the key dimensions used to evaluate dataset trustworthiness, along with examples:
Overview of Qlik Trust Score for the Shipping Route Dataset from Snowflake
By evaluating datasets across multiple dimensions, Qlik Trust Score provides organizations with a clear, actionable view of data quality and reliability. To optimize performance and flexibility, it supports two processing methods. Pushdown processing, available exclusively for Snowflake datasets, triggers quality computations directly within Snowflake, ensuring efficient in-warehouse processing without data movement. Pull-up processing, available for all datasets, performs the quality computations within Qlik Talend Cloud, allowing broader data quality assessments without relying on external processing resources.
Key Benefits of Qlik Trust Score
Tunable dimension weights to align with organizational specific data quality priorities
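Qlik does not spell out the aggregation formula in this post, but a natural reading of "tunable dimension weights" over per-dimension scores is a weighted mean, sketched here purely as an assumption:

\[ \text{Trust Score} = \frac{\sum_i w_i \, s_i}{\sum_i w_i}, \qquad s_i \in [0, 5] \]

where \(s_i\) is the score for quality dimension \(i\) (completeness, validity, and so on) and \(w_i\) is the weight your organization assigns to it.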
Conclusion
Qlik Trust Score is more than just a metric—it’s a powerful tool for building confidence and enhancing data trust. With customizable scoring, organizations can tailor data quality dimensions to align with their data priorities, focusing on the factors that matter most.
Available in Qlik Talend Cloud Enterprise Edition, Qlik Trust Score delivers robust, reliable data quality insights. For more details, visit the Qlik Trust Score documentation.
Hello Qlik Admins and Developers,
The next major Qlik Sense Enterprise on Windows release is scheduled for November 2024. The update will introduce changes that will have an impact on the following add-ons:
The changes affecting the add-ons are:
New versions of all affected add-ons were made available before or in November of 2024.
Please plan your upgrade accordingly to prevent interruptions:
If you upgrade to Qlik Sense Enterprise on Windows November 2024, all listed add-ons must be upgraded as well.
Thank you for choosing Qlik,
Qlik Support
Imagine a world where data isn’t just numbers but a powerful tool for innovation. That’s exactly what DHBW Mannheim, the largest university in Baden-Württemberg, is achieving with the Qlik Academic Program!
In its Digital Commerce Management course, students don’t just learn about data—they experience it firsthand. Teaching data without hands-on practice? That’s like learning to drive without ever hitting the road!
By integrating Qlik Sense, students gain real-world skills in data management, visualization, and analysis, preparing them for a data-driven workforce. They tackle real datasets, build dynamic dashboards, and explore how data drives decisions in retail and services.
Industries are hungry for data-literate graduates who can analyze trends, optimize strategies, and innovate. Thanks to the Qlik Academic Program, DHBW students gain valuable qualifications that give them a competitive edge.
With Qlik Solutions Architect Lukas Lohmann’s support, students dive into hands-on projects—from streaming service comparisons to market trend analysis—gaining confidence in their data skills.
📖 Read the full success story here:
👉 German
👉 English
🌟 Explore the Qlik Academic Program and how we’re transforming education: Qlik Academic Program
📩 Questions? Feel free to reach out at eliz.cayirli@qlik.com
Stay tuned for exciting updates!
As the needs for data management evolve rapidly and the demand for large-scale processing increases, Qlik takes a bold step forward with the release of its groundbreaking product: Dynamic Engine.
This processing engine, designed to integrate natively with Kubernetes (K8s), redefines the data processing architecture, offering a unified, scalable, and future-ready solution.
In this blog, we will explore the key features of Dynamic Engine and what sets it apart from existing processing solutions within the Qlik Talend Data Fabric.
Dynamic Engine introduces a unified processing platform that adapts seamlessly to any workload, whether deployed in on-premise, hybrid, or SaaS environments. This flexibility makes it the solution of choice for enterprises looking to migrate or manage their data pipelines across diverse infrastructure setups.
Here are the key advantages of Dynamic Engine:
Dynamic Engine enables businesses to orchestrate data integration tasks on customer-controlled infrastructure while benefiting from cloud-managed services. But what sets it apart from traditional engines like Talend’s Remote Engine?
Dynamic Engine simplifies the process of keeping your environment up to date with the latest versions. With a built-in version upgrade mechanism, users will receive notifications via the TMC whenever an update is available. The versioning system ensures consistency between the Dynamic Engine and its related Dynamic Engine Environments, which makes managing upgrades across different environments a straightforward process.
This mechanism allows for easier updates, ensuring that users can always benefit from new features, security patches, and performance improvements without the manual effort typically required in version management.
Smooth Migration from Remote Engine
For those currently using Qlik Talend’s Remote Engine, the transition to Dynamic Engine is made seamless through TMC’s promotion-based migration path. This migration is designed to be as frictionless as possible, leveraging existing APIs and known workflows.
With a few steps, users can promote their existing Remote Engine setups to Dynamic Engine configurations, preserving the familiarity of the existing environment while taking advantage of the added flexibility and cloud-native capabilities of Dynamic Engine.
Scalability via Run Profiles
Dynamic Engine’s ability to scale dynamically is one of its strongest features. Using TMC’s Run Profiles, organizations can define how their data tasks are distributed across resources.
This level of customization provides businesses with the flexibility to optimize their resources, improve performance, and reduce costs—all directly managed through TMC.
Compatibility with Leading Cloud Providers and On-Prem Infrastructure
Dynamic Engine is designed to work across various cloud and on-prem infrastructures, making it a versatile choice for enterprises. It is currently compatible with:
In the near future, compatibility will extend to Google GKE (Google Kubernetes Engine) and OpenShift, ensuring that Dynamic Engine can meet the needs of organizations across different platforms. This flexibility allows businesses to maintain a hybrid approach to cloud and on-prem infrastructure, aligning with their specific requirements.
Historically, Talend’s solutions relied on Remote Engines to execute jobs outside of the Cloud. These engines allowed enterprises to maintain control over their infrastructure while utilizing local processing capabilities. However, as scalability and flexibility demands grew, these engines faced some limitations. Dynamic Engine, on the other hand, positions itself as a modern and automated solution.
Remote Engine | Dynamic Engine
Limited Scalability: Remote Engines were constrained by the capacity of the machines they ran on. For large workloads, the infrastructure had to be manually adjusted constantly. | Automatic scalability powered by Kubernetes.
Fragmented Data Flows: Although highly performant, Remote Engines required specific configurations for each type of processing, leading to fragmented workflows. | More fluid orchestration of data flows, centrally controlled via the Talend Management Console (TMC).
Manual Environment Management: Each Remote Engine required a high level of manual management for scaling and resource optimization. | Optimized resource utilization through Kubernetes pods, which can be easily provisioned and managed dynamically.
Dynamic Engine is designed to be deployed seamlessly in various environments. Here’s a closer look at its operation:
Full documentation can be found here.
With Dynamic Engine, Qlik offers a solution that not only addresses today’s challenges of large-scale data processing but also sets the standard for future data management needs. Whether enterprises are looking to scale their processing capacity, unify their data workflows, or automate environment management, Dynamic Engine stands out as the solution of choice.
Together with Talend Data Fabric, Dynamic Engine creates a complete ecosystem that transforms how data is integrated, processed, and leveraged across the organization.
Static, read-only dashboards are a thing of the past compared to what's possible now in Qlik Cloud.
‘Write back’ solutions offer the ability to input data or update dimensions in source systems, such as databases or CRMs, all while staying within a Qlik Sense app.
The solution incorporates both Qlik Cloud and Application Automation to enable users to input data from a dashboard or application and run the appropriate data refresh across the source system as well as the analytics.
Example Use Cases:
This new feature is possible with all of the connectors located in Application Automation, including:
Below you can see a technical diagram based on using Application Automation for a write-back solution.
The ability to write back in Qlik Cloud is a game changer for customers who want to operationalize their existing Qlik Sense applications to enhance decision making right inside the app where the analytics live. This not only streamlines business processes across an ever-growing data landscape, but it also enables users to act in the moment. With Application Automation powering the write-back executions, customers can unlock more value across their data and analytics environment.
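As a minimal sketch of the handoff from an app to an automation (the URL, token header, and input field names below are placeholders; copy the real values from your automation's triggered run settings):

// Minimal sketch (Node.js 18+): call a Qlik Application Automation that uses a
// triggered start. All identifiers below are placeholders.
const AUTOMATION_URL =
  "https://your-tenant.us.qlikcloud.com/api/v1/automations/<automation-id>/actions/execute";

await fetch(AUTOMATION_URL, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-Execution-Token": "<execution-token>", // from the automation's run settings
  },
  body: JSON.stringify({
    inputs: { customerId: "C-1001", comment: "Approved by analyst" },
  }),
});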
To learn more, and for a more hands-on tutorial, please see the video here.