When organizations began using data lakes about a decade ago, many discovered a significant issue. Although the technology excelled at storing large volumes of raw data, it lacked the ability for business teams to access and consume the data easily. This often resulted in increased complexity, governance issues, and added management burdens instead of simplifying data access. Qlik recognized this challenge and acquired Podium Data to solve it.
Podium Data—later rebranded as Qlik Data Catalyst and more recently Qlik Catalog—pioneered the vision to solve the data lake challenge. Qlik Catalog allowed enterprises to transform the passive data lake into a self-service data resource that efficiently managed data processes, reduced data prep time, and delivered data faster to business users.
Today, that vision is both elevated and expanded within Qlik Talend Cloud®. Many of Podium Data’s original capabilities have been reimagined and are now offered in a cloud-native, AI-ready data management platform that enables organizations to move from raw data to trusted data faster than ever. Some of the Qlik Talend Cloud features are as follows:
In addition, advanced capabilities like field-level lineage, impact analysis, semantic typing, and the Qlik Trust Score help Qlik Talend Cloud broaden use cases significantly, transforming the cataloging of data assets into a full-spectrum data management platform for data discovery, trust, governance, and automation.
Therefore, Qlik is officially retiring the original Qlik Catalog as part of this evolution.
Effective April 24, 2025, Qlik Catalog will no longer be available for purchase. Existing Qlik Catalog subscriptions may be available to renew for a prorated subscription period with an end date prior to May 2026, at Qlik’s discretion. Additionally, support for Qlik Catalog will conclude on May 11, 2026.
This planned transition reflects Qlik’s commitment to simplifying and modernizing the data experience with a unified data management platform — Qlik Talend Cloud — built for the demands of today and for future data management use cases.
For any questions or support during this transition, please contact your Qlik representative.
Set analysis is a way to define an aggregation scope different from current selection. Think of it as a way to define a conditional aggregation. The condition – or filter – is written inside the aggregation function. For example, the following will sum the amounts pertaining to 2021:
Sum({<Year={2021}>} Amount)
This syntax, however, has a couple of drawbacks. First, it is not easy to combine a master measure with different set expressions, since the set expression is hard-coded inside the master measure. Second, if you have an expression with multiple aggregations, you need to write the same set expression in every aggregation function.
Therefore, we introduce an additional position for set expressions: They can now be written outside the aggregation function and will then affect all subsequent aggregations. This means that the below expression is allowed:
{<Year={2021}>} Sum(Amount) / Count(distinct Customer)
For master measures, this change will allow a very powerful re-usability: You can now add set expressions to tweak existing master measures:
{<Year={2021}>} [Master Measure]
The outer set expression will affect the entire expression, unless it is enclosed in round brackets. If so, the brackets define the lexical scope. For example, in the following expression, the set expression will only affect the aggregations inside the brackets - the Avg() call will not be affected.
( {<Year={2021}>} Sum(Amount) / Count(distinct Customer) ) - Avg(CustomerSales)
The set expression must be placed in the beginning of the lexical scope.
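As an illustration, assuming hypothetical Year and Amount fields, a year-over-year comparison can be written with two bracketed scopes, each opening with its own set expression:

( {<Year={2021}>} Sum(Amount) ) - ( {<Year={2020}>} Sum(Amount) )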
Aggregation functions that lack a set expression will inherit the context from the outside. In earlier versions, the context was always defined by the current selection. Now we have added the possibility of having the context defined by a set expression. So now, “context” means either the current selection or an outer set expression.
If an aggregation function already contains a set expression, this will be merged with the context. The same merging rules as today will apply:
Examples:
{<OuterSet>} Sum( {<InnerSet>} Field )
The OuterSet will be inherited into the InnerSet, since the inner set lacks a set identifier.
{<OuterSet>} Sum( {$<InnerSet>} Field )
The OuterSet will not be inherited into the InnerSet, since the inner set expression contains a set identifier.
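To make the merging concrete, here is a hedged example using hypothetical Year, Product, and Amount fields:

{<Year={2021}>} Sum({<Product={'Shoes'}>} Amount)

behaves like Sum({<Year={2021}, Product={'Shoes'}>} Amount), since the inner set lacks an identifier and inherits the outer Year modifier, whereas

{<Year={2021}>} Sum({$<Product={'Shoes'}>} Amount)

starts from the current selection inside Sum() and ignores the outer Year filter, because the $ identifier defines its own starting point.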
The set expression of the outer aggregation will never be inherited into the inner aggregation. But a set expression outside the outer aggregation will be inherited into both.
Examples:
Sum({<Set1>} Aggr(Count({<Set2>} Field )))
The Set1 will not be inherited into Set2.
{<OuterSet>} Sum({<Set1>} Aggr(Count({<Set2>} Field )))
The OuterSet will be inherited into both Set1 and Set2.
Nothing changes for existing set expressions – they will continue to work. But with this additional syntax we hope to simplify your work and your expressions and allow you to re-use your master measures more effectively.
This change affects all Qlik Sense editions from the August 2022 release. It will also be included in the next major QlikView release, planned for late spring 2023.
See more on
https://community.qlik.com/t5/Qlik-Design-Blog/A-Primer-on-Set-Analysis/ba-p/1468344
HIC
Custom CSS has been a popular workaround in Qlik Sense for years, helping developers tweak layouts, hide buttons, and get around styling limitations. But things are shifting. With the Multi-KPI object being deprecated and native styling options getting stronger with every release, it’s a good time to rethink how we approach custom styling in Qlik Sense moving forward.
In this post, we’ll break down:
Let’s dive in!
Why is custom CSS used in Qlik Sense?
In the past, Qlik’s built-in styling options were limited. That led to many developers using CSS to:
Most of this was made possible by either creating custom themes, building extensions, or using the Multi-KPI object as a helper to inject CSS code. But as powerful as these techniques were, they also came with downsides, like breakage after updates or difficulty governing app behavior at scale.
So, What’s Changing?
The biggest shift is the deprecation of the Multi-KPI object, which has served as a popular CSS injection tool. Here's what you need to know:
EOL of the Multi-KPI object is May 2026:
If you’ve been using the Multi-KPI as a styling workaround, it’s time to plan for alternatives.
Native Styling Has Come a Long Way
Before reaching for CSS, it's worth exploring what Qlik now offers natively. Many of the styling tweaks that once required CSS are now fully supported in the product UI.
Here’s a quick look at recent additions:
| Object | Native styling available now or coming in the next update |
| --- | --- |
| Straight Table | Background images, word wrap, mini charts, zebra striping, null styling, header toggle |
| Pivot Table | Indentation mode, expand/collapse, RTL support, cyclic dimensions |
| Text Object | Bullet lists, hover toggle, border control, support for up to 100 measures |
| Line Chart | Point and line annotations |
| Scatter Plot | Reference lines with slope, customizable outline color and width |
| Layout Container | Object resizing and custom tooltips |
| Navigation Menu | Sheet title expressions, left/right panel toggle, divider control |
And this list keeps growing. If you're building new apps or redesigning old ones, these built-in features will cover a huge percentage of use cases.
Many deprecated CSS tricks are now native. Check out the full Obsolete CSS Modifications post for examples and native replacements.
What About Themes?
Themes are not going anywhere. In fact, they remain the most robust and supported way to apply consistent styling across your app portfolio.
With custom themes, you can:
You can still include CSS files in themes, but remember:
If you're new to themes, Qlik.dev has a great guide to get started, or check out my previous blog post for some tips and tricks.
Still Need Custom CSS? Here’s What You Can Do
If your use case goes beyond what native styling or themes can handle—like hiding a specific button, or styling based on object IDs—you still have a few options:
What's Missing
A lot of Qlik users have voiced the same thing: "we still need an officially supported way to inject CSS at the sheet or app level."
Some have suggested:
Qlik has acknowledged this feedback and hinted that future solutions are being considered.
What You Should Do Today
That’s a wrap on this post. With more native styling features on the way, I’ll be keeping an eye out and will likely share a follow-up as things evolve. If you're in the middle of refactoring or exploring new approaches, stay tuned; there’s more to come.
Today, Qlik Talend Cloud (QTC) offers an end-to-end enterprise-grade solution that delivers rapid time to insight and agility for Snowflake users. Qlik’s solution for Snowflake users automates the ingestion, design, implementation, and updates of data warehouses and lakehouses while minimizing the manual, error-prone design processes of data modeling, ETL coding, and scripting.
As a result, customers can speed up their analytics and AI initiatives, achieve greater agility, and reduce risk — all while fully realizing the instant elasticity and cost advantages of Snowflake’s cloud data platform.
Now, as organizations continue to scale their data operations, modern architectures like Iceberg-based open lakehouses are emerging as the go-to solution for flexibility, performance, and cost efficiency. To support this evolution, Qlik Talend Cloud Pipelines introduces two new powerful capabilities designed to simplify and enhance the process of building open lakehouses with Snowflake: Lake landing for Snowflake and support for Snowflake-managed Iceberg tables.
A key challenge for customers in cloud data management is balancing rapid data ingestion with optimized compute resources in Snowflake. Qlik Talend Cloud’s new lake-landing ingestion feature for Snowflake addresses this by allowing users to land their data into a cloud-object store first, before consuming it in Snowflake. With this, customers can replicate data from diverse sources into a cloud storage of their choice (Amazon S3, Azure Data Lake Storage, or Google Cloud Storage) with low latency and high fidelity, instead of ingesting data directly into Snowflake’s storage layer. Ingestion into cloud storage is fully managed by Qlik and doesn’t require the use of Snowflake compute.
In addition, Qlik Talend Cloud allows you to configure the frequency at which Snowflake will pick up the data from the cloud storage: While you can replicate source data changes in real-time to a cloud object store, the Snowflake storage task can read and apply those changes at a slower pace (could be once every hour or once every 12 hours for example).
For ingestion use cases where low-latency replication into Snowflake is not a requirement, this reduces Snowflake warehouse uptime requirements and ultimately optimizes costs.
In addition to lake-landing ingestion, Qlik Talend Cloud Pipelines now supports Snowflake-managed Iceberg tables. This new feature allows Qlik Talend Cloud pipeline tasks (Storage, Transform, and Data Mart) to ingest and store data directly into Iceberg tables that use external cloud storage (S3, ADLS, or GCS). These externally stored Iceberg tables are fully managed by Snowflake, meaning they benefit from Snowflake performance optimizations and table lifecycle maintenance. Moreover, this new feature is fully integrated with Snowflake’s Open Iceberg Catalog (based on Apache Polaris) to ensure full interoperability with any Iceberg-compatible query engine.
The two capabilities described above can be used independently or in combination, offering greater flexibility in how data is ingested, stored, and queried.
Below is a diagram showing a simple implementation of both capabilities together.
It features a pipeline built in Qlik Talend Cloud, composed of three successive tasks (lake landing, storage, and transform) that take care of:
Here is a video showing how to create the example pipeline above:
With these new capabilities, Qlik Talend Cloud empowers data teams to build Iceberg-based open lakehouses with Snowflake in a more efficient, scalable, and cost-effective manner. Whether optimizing for low-latency ingestion or ensuring seamless interoperability, these enhancements bring significant advantages to modern data architectures. Some of the key benefits of these enhancements include:
Ready to take advantage of these new capabilities? Explore how Qlik Talend Cloud can help your organization build next-generation open lakehouses with Snowflake.
I am pleased to welcome Dr. Manikandan Sundaram as the Qlik Academic Program Educator Ambassador for 2025.
Manikandan is currently the Dean of the School of Computing and Head of the Data Science and Analytics Centre at Rathinam Technical Campus, Coimbatore, India.
He is an experienced professional with a demonstrated history of working in the education management industry. With over 12 years of experience in both industry and academia, he has held various senior roles, including Software Engineer, Technical Head, and Database Administrator. Manikandan earned his Master’s Degree in Information Technology Engineering from SRM University, Chennai, and later his PhD from Anna University, Chennai.
Throughout his career, Dr. Manikandan has received numerous awards for his contributions, including the SIFE-12 award from SIFE Organization and Coder 2k06. His expertise spans multiple fields such as Database Management Systems, Data Analytics, Data Science, Machine intelligence, Networks, and Ethical Hacking.
During his career as a Professor, Manikandan has introduced the Qlik Academic Program to his students and encouraged them to pursue qualifications and certifications under this program. He has organized datathons and hackathons in his capacity as a Professor.
We look forward to working with Manikandan this year and to hearing his insights on data analytics.
To know more about the Qlik Academic Educator Ambassador program, visit: https://www.qlik.com/us/company/academic-program/ambassadors
To know more about the Qlik Academic Program, visit: qlik.com/academicprogram
The OEM Dashboard is an application for Qlik Cloud designed for OEM partners to centrally monitor data across their customers’ tenants. It provides a single pane to review numerous dimensions and measures, compare trends, and quickly spot issues across many different areas—which would otherwise be a tedious and manual process.
This application includes data from the App Analyzer, Entitlement Analyzer, and the Reload Analyzer, all of which are other monitoring applications for Qlik Cloud that provide deep levels of detail on their respective areas. Together, a complete picture can be formed which is crucial to the successful management of an OEM environment.
For reference, here's a brief blog post and video which refers to this application and describes Qlik’s differentiating multi-tenant approach: https://www.qlik.com/blog/extending-the-power-of-qlik-sense-saas-for-oem-partners
Use Case:
While this application was built first and foremost for Qlik's OEM partners, it can also be used for direct customers that have multiple Qlik Cloud Tenants, e.g., global deployments or tiered deployments.
Items to note:
The links on this page include the OEM Dashboard application and configuration guide. Additionally, you'll find the Console Settings Collector application which is an optional data source for the OEM Dashboard described in the guide.
The applications and accompanying documentation are available via GitHub, linked below:
Any issues or enhancement requests should be opened on the Issues page within the app’s GitHub repository.
Thank you for choosing Qlik!
Qlik Platform Architects
Additional Resources:
Our other monitoring apps for Qlik Cloud can be found below.
Equipping the Next Generation of Analysts: Alexander Flaig on Teaching with Qlik
When Alexander Flaig set out to design his Business Analytics course in 2023, his goal was simple but ambitious: give students hands-on experience with tools they’d actually use in the real world. That’s why he made Qlik a cornerstone of the curriculum from day one.
Fast forward to today, and Qlik has become more than just a platform in the classroom—it’s a launchpad for careers.
From the Classroom to the Interview Room
The impact of integrating Qlik into coursework has been immediate and powerful. “Yes, some of my students have landed internships and jobs thanks to their Qlik knowledge,” Alexander shares. “One student was even interrupted during a job interview because they knew more about Qlik than expected.”
These moments underscore what happens when education stays aligned with industry trends: students graduate with skills that employers are actively seeking.
Certifications, Real Projects, and Industry Collaboration
Alexander’s course doesn’t stop at theory. Students earn certifications like Qlik’s Business Analyst and Data Architect credentials, which help them stand out in a competitive job market. The capstone project involves building interactive dashboards using real-world company data—last year, students collaborated with Drake Analytics, creating professional-grade visualizations that speak volumes about their capabilities.
“Check out one of the dashboards from last year’s class—it’s a testament to how powerful learning becomes when it's rooted in real-world application.”
Looking Ahead: 2025 and Beyond
For Alexander, 2025 is all about continuing momentum. “My goal is to keep offering certifications and to integrate more AI-driven analytics into the course,” he says. With the rapid evolution of AI and data tools, this forward-thinking approach ensures students stay at the cutting edge.
The Future of Higher Ed: Adapt or Fall Behind
The course has grown from 20 to 80 students in just a year—a clear sign that demand for practical, industry-relevant education is surging. “Many students say it’s one of the best courses they’ve ever taken,” Flaig adds. “Higher education is in the middle of a transformation. Institutions must adapt to this new reality—or risk becoming obsolete.”
Beyond the Classroom
When he’s not teaching, Alexander can be found researching and speaking about AI strategies and emerging technologies. He brings that same future-focused energy to his classroom, ensuring students not only understand the tools of today but are prepared for the challenges of tomorrow.
On Being a Qlik Educator Ambassador
As an Educator Ambassador, Alexander hopes to deepen his understanding of the Qlik ecosystem and expand its footprint in education. “I want to gain deeper insights into the Qlik toolbox and use my role to spread awareness about the benefits of Qlik in academic settings,” he says.
With his passion for teaching, commitment to real-world learning, and drive to innovate, Alexander Flaig is not just teaching analytics—he’s redefining what business education can be.
To learn more about the Qlik Academic Program and how to access free Qlik Sense software and training resources, visit qlik.com/academicprogram
We are thrilled to introduce Juana Zuntini as a Qlik Educator Ambassador for 2025! Juana’s journey with Qlik began in 2017, fueled by her passion for teaching and her desire to empower her students with real-world skills. She believes in her students’ potential and is dedicated to helping them succeed, not just in school but in life.
“I chose Qlik because it makes data come alive,” Juana shares. “I wanted my students to experience the power of data visualization and learn how to turn data into stories that matter.” Since then, she has watched her students grow in confidence, think critically, and become data storytellers.
Juana’s classroom is a place of exploration and creativity. She uses databases from the Qlik Learning Portal and all the platform’s resources to make learning hands-on and exciting. Her students don’t just study data, they experiment, build, and learn by doing, gaining skills that make them stand out in the job market.
Juana is proud of how Qlik has inspired her students. “Many of them decide to learn more about data analytics after graduation because they see the impact Qlik can make,” she says. She’s not just teaching skills; she’s sparking curiosity and inspiring a lifelong love of learning.
In 2025, Juana is excited to take her teaching even further. “We’re strengthening Qlik training, enhancing data visualization, and introducing Generative Artificial Intelligence (GAI) to support data analysis,” she explains. She wants her students to be ready for the future, equipped with the skills they need to lead and innovate.
But for Juana, teaching isn’t just about lessons and exams. It’s about building a community. Every year, she organizes a Data Visualization seminar using Qlik, open to students, researchers, and professionals. These events are filled with curiosity, conversations, and connections, and plenty of inspiration.
Juana is passionate about preparing her students for life beyond the classroom. “Companies need more than data analysts; they need leaders who can make decisions with confidence. My goal is to prepare my students to be those leaders.”
As a first-time Qlik Educator Ambassador, Juana is excited to connect with other educators, learn from experts, and bring the latest in Qlik and data education to her students.
Welcome to the Qlik family, Juana! Your passion for teaching and belief in your students are truly inspiring. We can’t wait to see how you’ll change lives, one data story at a time.
Join the Qlik Academic Program and kick-start your data journey with free access to Qlik Sense software, training, and certifications. Be part of a global community of future data leaders! Visit qlik.com/academicprogram to get started.
Qlik Cloud will undergo a scheduled system upgrade impacting Automations during April 2025 to improve the continued stability and performance of our platform. Reloads, reports, and other workloads referenced by automations may also be impacted during the maintenance window.
The table below lists all affected regions and their expected maintenance windows. Times are listed in Central European Summer Time (CEST) and in the local time relevant to the tenant.
| Region | Maintenance Window (CEST) | Local time |
| --- | --- | --- |
| Europe (Stockholm, Anonymous Access) | April 16, 3 PM – 4 PM CEST | April 16, 3 PM – 4 PM CEST |
| Middle East (UAE) | April 16, 4 PM – 5 PM CEST | April 16, 6 PM – 7 PM GST |
| Asia Pacific (Mumbai) | April 17, 3 PM – 4 PM CEST | April 17, 6:30 PM – 7:30 PM IST |
| Asia Pacific (Tokyo) | April 22, 7 PM – 8 PM CEST | April 23, 2 AM – 3 AM JST |
| Europe (London) | April 23, 5 AM – 6 AM CEST | April 23, 4 AM – 5 AM BST |
| US East (N. Virginia) | April 23, 6 AM – 7 AM CEST | April 23, 12 AM – 1 AM EDT |
| Asia Pacific (Singapore) | April 23, 7 PM – 8 PM CEST | April 24, 1 AM – 2 AM SGT |
| Asia Pacific (Sydney) | April 23, 8 PM – 9 PM CEST | April 24, 4 AM – 5 AM AEST |
| Europe (Frankfurt) | April 24, 5 AM – 6 AM CEST | April 24, 5 AM – 6 AM CEST |
| Europe (Ireland) | April 24, 6 AM – 7 AM CEST | April 24, 5 AM – 6 AM IST |
During the scheduled maintenance window, all automations features will be impacted. In addition to the specific impact on automations as specified below, this will also impact reloads, reports and other workloads referenced by automations.
We apologize for any inconvenience this may cause and appreciate your understanding as we work to improve the stability and performance of our platform.
To track updates during the scheduled maintenance, please visit the Qlik Cloud Status page. If you encounter unexpected complications during or after the maintenance, contact Qlik Support using live chat. We will be happy to assist you.
Thank you for choosing Qlik,
Qlik Support
The Qlik Enterprise Manager connector for Qlik Application Automation is being discontinued effective April 28, 2025.
If you are looking for an alternative to the Qlik Enterprise Manager connector, Qlik offers generic connectors that can be used in conjunction with the Qlik Enterprise Manager API.
For more information, see:
If you have any questions, do not hesitate to contact us through the Qlik Customer Portal.
Thank you for choosing Qlik,
Qlik Support
Cidados
Open data from the Government of Goiás
After a costly home remodel that included an addition requiring efficient cooling, I was faced with an expensive repair bill of $8,000 due to a leak in the evaporator and a newly discovered incompatible outdoor unit. Conflicting information from the contractor and the service company led me to conduct independent research, culminating in a call to the AC manufacturer’s technical support for accurate specifications.
Although the manufacturer provided voluminous documentation on installation practices, the information was convoluted. Frustrated, I was curious whether Qlik Answers could analyze these technical documents effectively.
Watch and see how I won with Qlik Answers!
This video serves not only as my own personal account but also as an educational guide on navigating information complexities and leveraging technological tools to gain critical insights for effective resolution and cost management.
Watch more examples:
Marcin is a Research Assistant at the Faculty of Economic Sciences and Management at Nicolaus Copernicus University in Toruń, Poland. With a strong academic background in economics, management, data analysis, and data science, his teaching philosophy centers on bridging theory with practice. Over the years, he has transformed the way data analytics is taught in his department—shifting away from traditional lectures and toward a more interactive, applied learning experience.
His journey with Qlik began with a desire to better engage students in the world of data visualization and analytics. Since then, Marcin has consistently leveraged Qlik Sense and the program’s extensive resources to bring data to life in the classroom. “Using Qlik allows my students to explore real datasets, work on hands-on projects, and develop critical thinking and analytical skills that employers are looking for,” Marcin explains.
Over the past year, he has enriched his curriculum with new methodologies, including group-based assignments, real-world datasets, and project-based learning modules. These practical components not only help students understand key analytical concepts but also offer them tangible experience with modern tools used in the workforce. He also integrates the Qlik Data Literacy Program and Qlik Sense Qualifications into his teaching, providing students with structured pathways to certify their skills. “The qualifications help students validate their knowledge and show potential employers that they’re ready to work with data from day one,” he adds.
The results speak for themselves. Many of Marcin’s students have successfully landed internships and jobs where their Qlik experience played a decisive role. “The feedback from students and recruiters alike confirms the value of these tools. They’re not just learning concepts; they’re gaining career-relevant experience,” he shares.
Looking ahead to 2025, Marcin is eager to further evolve his teaching by incorporating advanced analytics topics such as machine learning, streaming data, API integration, and more complex visualization techniques. “The world of analytics is moving fast, and it’s critical that education keeps up. My goal is to prepare students for what’s next, not just what’s now,” he says.
Outside the classroom, Marcin enjoys a well-balanced life in a community that values education and innovation. Whether he’s engaging in professional development, spending time with his family, or exploring new hobbies, he lives the spirit of curiosity and continuous learning.
As a returning Qlik Educator Ambassador, Marcin sees his role as a valuable opportunity to connect with like-minded educators and make a broader impact. “It’s incredibly rewarding to be part of a network that’s shaping the future of education and empowering students across the globe,” he notes.
We’re thrilled to have Marcin on board for another year and can’t wait to see the continued impact of his work on students, colleagues, and the wider education community.
To learn more about the Qlik Academic Program and how to access free Qlik Sense software and training resources, visit qlik.com/academicprogram
Identification of the employees contributing most to revenue. Detection of a significant drop in volume and revenue compared with the previous year. Understanding of which product categories are most profitable, with fuels standing out. The geographic analysis revealed a concentration of sales in certain regions of Brazil, helping to define more targeted strategies.
The application brought greater visibility into the operational and financial performance of the business, enabling faster and better-informed decision-making. It also helped identify underperforming areas and employees, allowing quicker corrective actions.
The application is used mainly by operations managers, forecourt and store supervisors, as well as the company's administrative team and executive board. It is accessed daily to track sales, performance by employee, and financial results. It is viewed both on computers in management offices and on dashboards projected onto TVs in the office, making it an essential tool for tracking targets and supporting quick decisions.
The app integrates data from different sources (ERP, sales, and inventory) and applies analyses of performance, trends, and year-over-year variations. This makes it possible to identify patterns, project revenue, and support the definition of marketing and operational strategies.
Multiple log-based replication issues may affect Qlik Replicate customers using SAP HANA DB 2.0 who are upgrading to the SAP HANA service packs SPS7 and SPS8.
SAP HANA DB 2.0 SPS7 (Service Pack 7):
RECOB-9379 and RECOB-9427 have been addressed by Qlik. An early build (Qlik Replicate 2024.11, SP03 Early Build) is available.
Download the early build from: https://files.qlik.com/url/wucx4x2nbyytwseu (password: pk2pfzup)
No other issues in Service Pack 7 are known.
SAP HANA DB 2.0 SPS8 (Service Pack 8):
Customers planning to upgrade to SPS7 or SPS8 should be aware of the risk, particularly given the changes to HANA logs that affect HANA log parsing in Qlik Replicate. We strongly advise postponing any upgrades to these versions until Qlik R&D has reviewed and certified these service packs.
Qlik has not received any reports of customers using trigger-based replication experiencing the same issues. However, if an upgrade is planned, we recommend thoroughly testing in lower environments before scheduling any production upgrades.
Thank you for choosing Qlik,
Qlik Support
1) Explore candidate performance & popular vote trends in depth — at national, state, and county levels.
2) Identify how key swing geographies influenced outcomes.
3) Examine the effects of third-party & independent candidates — and see how Daenerys Targaryen, Donald Duck, Frank Underwood, Harrison Ford, and others fared on real ballots.
Streamlines the exploration of complex election data, enabling faster insights and deeper analyses.
Political analysts, journalists, researchers, educators, and anyone interested in uncovering trends and patterns in U.S. presidential voting.
Built on data from MIT & Harvard, it uses Map, Decomp Tree with AI Splits, Circular Gauge, Bar Chart, Treemap, and KPI visualizations — unlocking insights at every level.
The following analytics features have been updated.
The following features have been added to the straight table.
You can now manually add points and lines to line charts.
A butterfly layout option has been added to bar charts.
The Multi-KPI object has been deprecated and new instances can no longer be created. Existing Multi-KPI charts remain usable, but the object is scheduled for complete removal in November 2025.
Direct Access gateway 1.7.2
You can now set a chunk recovery period threshold (in minutes). If the reload has not resumed by the time the recovery period threshold is reached, it fails with an appropriate message. This option is useful for reloads that might exceed the 3-hour limit after a long recovery.
The following data integration features have been updated.
Updates related to the Talend Data Catalog application
Updates related to the Talend Data Catalog bridges
For details, see "What's new and improvements."
There have been some data load editor improvements that I think are worth mentioning, so in this blog post I will cover the new features I have found most useful. The first, and my favorite, is the table preview. The second is the ability to do a limited load, loading a specified number of rows from each table. The third is the ability to view the script history, along with options to save, download, and restore previous versions. Let’s look at each of these in more detail.
When building an app, my preference is to use the data load editor to load my data. With table preview, I can view loaded data tables at the bottom of the data load editor after data has been loaded or previewed in an app.
This is my favorite new feature because nine times out of ten, I want to view the data I loaded to ensure it loaded as expected and to check that my logic is correct. Having the preview table right there in the data load editor saves me from having to go somewhere else, like the data model viewer or a sheet, to view the loaded data. I can use the preview table to check that the tables have the desired results. The ability to do this quick check saves me time.
As a developer, I can select the table to preview, and the data can be viewed as a table, as seen above, or as a list or grid, as seen in the images below. When previewing the data as a table, the preview table can be expanded to show more rows, columns in the table can be widened, and pagination lets me move around in the table. There is also an option to view the output of the load; this shows the same information you see in the load data window when the app is reloading.
List View
Grid View
The second feature in the data load editor I find useful is the preview data option. This provides an easy way for me to load some, but not all, of the data when reloading. In the screenshot below, the default of 100 rows is entered; this will load a maximum of 100 rows into each table, and the value can be edited if desired. By default, the option to use store commands is toggled off. When it is off, store commands in the script are ignored, preventing potentially incomplete data from being exported. This feature is helpful when I just want to profile the data and see what it looks like. It is also helpful when there is a lot of data to be loaded and I do not need all of it to check that the script is working as expected. Again, this is another time saver because I can limit the load, and thus the time it takes for the app to reload, in a single step. I find this helpful when I want to quickly test a change in the script but do not want to wait for the entire app to reload.
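To sketch why ignoring store commands matters, consider a hypothetical load script (the lib://SalesData connection and file names are made up for illustration). With the store option toggled off during a limited load, the STORE statement is skipped, so the full QVD on disk is not overwritten with a 100-row sample:

Orders:
LOAD
    OrderID,
    CustomerID,
    Amount
FROM [lib://SalesData/orders.csv]
(txt, utf8, embedded labels, delimiter is ',');

STORE Orders INTO [lib://SalesData/Orders.qvd] (qvd);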
The last data load editor feature I am going to cover is script history. This new capability allows me to create versions of the script, name and rename them, restore the script from a previous version, download the load script, or delete a version.
I have not used the history feature much, but I can see it being helpful when I want to name various versions of the script. Every time the script is edited, it is saved to the current version. At any time, I can save that current version and give it a meaningful name. Maybe I want to make some changes to the script but want a backup in case they do not work; this can now be done right in the data load editor. I also have an easy way to restore a previous version if necessary. Once a version is named, it can be renamed, restored, or deleted. All script versions can be downloaded as QVS files. One thing to note is that the history only saves scripts created in the data load editor.
Hopefully, you find these new data load editor features helpful. They are available now in your tenant. Just check out the data load editor in your app.
Thanks,
Jennell
Hello, Qlik Replicate admins,
Beginning in Q3 2025, Snowflake will mandate Multifactor Authentication (MFA). For detailed information and a timetable, see FAQ: Snowflake Will Block Single-Factor Password Authentication by November 2025.
Unless MFA has been set up, this change will impact Qlik Replicate connectivity to Snowflake.
To mitigate the impact, switch to Key Pair Authentication. Key Pair Authentication is available by default starting with Qlik Replicate 2024.05.
For more information, see Setting general connection parameters.
If an upgrade is currently not feasible, review How to setup Key Pair Authentication in Snowflake and How to configure this enhanced security mechanism in Qlik Replicate for a possible workaround to apply Key Pair Authentication.
If you have any questions, we're happy to assist. Reply to this blog post or take similar queries to the Qlik Replicate forum.
Thank you for choosing Qlik,
Qlik Support
The quality and trustworthiness of data significantly influence how and whether it is used, impacting various aspects of a business, including:
For example, a retail company found that its loyalty program failed to identify duplicate customer records, resulting in a 30% increase in marketing costs from duplicated outreach. This also led to customer frustration, as customers received conflicting information and promotions.
Maintaining high standards for data quality and trust is crucial for building a reliable and successful business. By ensuring this data is made available through a data marketplace, organizations can offer consumers a trusted, single version of the truth.
Accurate, high-quality data is essential for making informed decisions. When data is inaccurate or unreliable, it can lead to poor decision-making, damaging the credibility and success of both the data product** and the data marketplace.
For instance, a supply chain company suffered significant losses due to outdated and inconsistent inventory data. This resulted in accepting orders that couldn't be fulfilled on time, leading to contract breaches, financial penalties, and a loss of customer trust, ultimately costing the company millions in long-term contracts.
The value of data products in a marketplace is directly linked to their quality for a domain-centric use case. High-quality data products are more likely to deliver value and meet consumer needs, strengthening the marketplace’s reputation.
**A data product is like a cake, created from ingredients like raw data, algorithms, and analytics. The result is a fully "baked" product, ready to deliver value—whether as insights, dashboards, or predictions. Unlike raw data, it’s complete and easy to use.
Consumers need to trust that the data they are purchasing or accessing is accurate, reliable, and suitable for their needs. Without trust, consumption decreases and there is no incentive for the data producers to produce these data products, hindering overall data marketplace growth. Sellers need to ensure that their data products are of high quality to build a positive reputation. Consistent delivery of high-quality data helps establish credibility and fosters long-term relationships with buyers.
A best-in-class organization achieves consistent delivery of high-quality data products by combining clear objectives, strong data governance, and robust development processes. They define standards for data quality, adopt agile methodologies, and utilize scalable technology to ensure reliability and adaptability. Multidisciplinary teams collaborate to build products tailored to user needs, while continuous feedback and performance monitoring drive improvement. By fostering a culture of quality and investing in skilled teams and modern tools, they create data products that are accurate, user-friendly, and impactful.
As an example, an e-commerce platform allowed sellers to post product reviews without verifying their authenticity, leading to a loss of customer trust due to fabricated or exaggerated ratings. This resulted in declining sales, reputational damage, and increased regulatory scrutiny, forcing the company to make significant investments to overhaul its systems, policies, and practices.
A company would need to invest in review verification tools, identity checks, and moderation systems to ensure authenticity. It must implement stricter policies, comply with regulations, and educate users on ethical practices. Additional efforts include hiring moderators, rebuilding trust with PR campaigns, and using analytics to improve review quality. These investments aim to restore trust and reputation.
The following examples and use cases explore the business impact of data quality and trust.
As shown in the example above, poor quality and untrusted data can lead to regulatory scrutiny. Adherence to data quality standards and regulations (such as GDPR, CCPA) is necessary to ensure compliance. Non-compliance can lead to legal issues, fines, and damage to the marketplace's reputation.
For example, a pharmaceutical company in a highly regulated market faced severe consequences due to poor data management. Inaccurate and incomplete reporting led to hefty fines, delayed drug approvals, and temporary production halts, causing significant revenue loss and allowing competitors to gain an edge. The company also suffered reputational damage and was placed under stricter regulatory scrutiny, forcing costly investments in compliance and data governance systems.
Ensuring data quality often involves implementing robust data security measures to protect sensitive information and maintain data integrity.
Customers expect data products to meet certain quality standards. If data products are unreliable or inaccurate, customer satisfaction will decrease, leading to negative reviews and potential loss of business.
At a financial services firm, employees were dissatisfied with the data quality available through the new analytics platform, which was inconsistent and inaccurate. This led to increased manual work, reduced productivity, and low adoption of the platform, causing delays in decision-making, frustrated clients, and higher turnover rates. The company faced significant costs in recruiting and training new staff to replace those who left.
High-quality data products reduce the need for extensive support and troubleshooting, leading to better overall customer experiences.
High-quality data allows for more accurate matching between buyers and sellers, enhancing the efficiency of transactions in the marketplace.
As an example, an online marketplace for data products used advanced algorithms to accurately match buyers with relevant sellers, significantly improving efficiency. Buyers saved time by being directed to the most appropriate data products, while sellers increased transaction volume by reaching the right customers quickly. The matching system encouraged high-quality data offerings, reduced transaction costs, and facilitated better decision-making for buyers, ultimately leading to increased business outcomes for both parties.
Quality data is easier to integrate and use across different systems and platforms, increasing the utility and effectiveness of data products built from it.
A high-quality and trusted data marketplace generates competitive advantages for data product consumers by providing access to reliable, accurate, and up-to-date data that enhances decision-making, innovation, and operational efficiency.
As an example, a retail company that invested in high-quality, trusted customer data gained a significant competitive advantage by offering personalized shopping experiences. By leveraging accurate insights into customer preferences and behaviors, the company was able to tailor promotions, recommend products, and optimize inventory in real-time, leading to increased customer satisfaction and loyalty. By feeding this high-quality, curated data into an AI/ML model, the business was able to closely predict trends, respond faster to dynamic market demands, and outpace competitors who were using less reliable data, ultimately driving higher sales and market share.
Reliable data enables innovation and the development of new data products and services, driving growth and advancement.
Ensuring data quality supports the scalability of the marketplace by maintaining consistency and reliability as the volume of transactions and participants grows.
As an example, in a large multinational corporation, different departments acted as both buyers and sellers of data: the marketing team sold customer insights to the product development team, while finance shared budget and performance data with both marketing and operations. This internal data marketplace improved collaboration, ensured efficient data sharing, and enabled faster, data-driven decision-making, leading to optimized resources and a more agile business model.
A focus on quality helps to build a strong, trustworthy brand, which is essential for attracting new users and expanding the marketplace.
Data quality and trust are foundational to the success of any data product marketplace. By ensuring high-quality, reliable data, businesses can enhance decision-making, maintain compliance, improve customer satisfaction, and gain a competitive edge. As the marketplace scales, maintaining data integrity ensures continued growth and long-term success.