
Analytics & AI

Forums for Qlik Analytic solutions. Ask questions, join discussions, find solutions, and access documentation and resources.

Data Integration & Quality

Forums for Qlik Data Integration solutions. Ask questions, join discussions, find solutions, and access documentation and resources.

Explore Qlik Gallery

Qlik Gallery is meant to encourage Qlikkies everywhere to share their progress – from a first Qlik app to a favorite Qlik app and everything in between.

Support

Chat with us, search Knowledge, open a Qlik or Talend Case, read the latest Updates Blog, find Release Notes, and learn about our Programs.

Events

Learn about upcoming Qlik related events, webinars and local meetups.

Groups

Join a Group that is right for you and get more out of your collaborations. Some groups are closed. Closed Groups require approval to view and participate.

Qlik Community

Get started on Qlik Community, find How-To documents, and join general non-product related discussions.

Blogs

This space offers a variety of blogs, all written by Qlik employees, covering both product and non-product topics.

Qlik Resources

Direct links to other resources within the Qlik ecosystem. We suggest you bookmark this page.

Qlik Academic Program

Qlik gives qualified university students, educators, and researchers free Qlik software and resources to prepare students for the data-driven workplace.

Community Sitemap

Here you will find a list of all the Qlik Community forums.

Recent Blog Posts

  • Support Updates

    Qlik Automate: Execution tokens will become header parameters on February 1st, 2026

    Hello Qlik Users,

    As announced previously (see Qlik Automate execution token changes), execution tokens will become header parameters on February 1st, 2026.

    When triggering a triggered automation through the trigger URL (see the endpoint below), the execution token must be sent as a header parameter. Currently, it is possible to send the execution token as a query parameter. Starting February 1st, 2026, sending execution tokens as header parameters will be enforced.

    api/v1/automations/{id}/actions/execute
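For illustration, here is how the trigger call changes, sketched in Python. The header name X-Execution-Token, the tenant URL, and the ids are placeholder assumptions; check your automation's trigger URL for the exact values.

```python
from urllib.request import Request

TENANT = "https://your-tenant.us.qlikcloud.com"  # placeholder tenant URL
AUTOMATION_ID = "your-automation-id"             # placeholder automation id
TOKEN = "your-execution-token"                   # placeholder execution token

url = f"{TENANT}/api/v1/automations/{AUTOMATION_ID}/actions/execute"

# Deprecated after February 1st, 2026: execution token as a query parameter
legacy_request = Request(f"{url}?X-Execution-Token={TOKEN}", method="POST")

# Required going forward: execution token as a header parameter
new_request = Request(url, method="POST",
                      headers={"X-Execution-Token": TOKEN})
```

Only the location of the token changes; the endpoint itself stays the same.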

     

    Don't hesitate to reach out if you have any questions or address our experts directly in the Qlik Automate forum.

     

    Thank you for choosing Qlik,
    Qlik Support

  • Support Updates

    Upcoming Maintenance for Talend Cloud and Talend Management Console: March, April, and May

    Update March 4th, 2026: added link to How to get Talend Management Console task schedules and pause and resume during a maintenance window using the API article
    Updated April 24th, 2026: added impact on APIs (all down) and additional clarification on why tasks must be stopped and the impact on remote engines
    Updated May 7th, 2026: added additional information on how to address Remote Engine impact
    Updated May 12th, 2026: the anticipated impact for the remaining maintenance window has increased from 30 minutes to 90 minutes

    Talend Cloud and Talend Management Console will undergo scheduled maintenance in March, April, and May. This infrastructure modernization is a key step in unifying the Talend ecosystem with Qlik.

    The alignment paves the way for a more seamless experience across both platforms. Over the coming months, you will gain access to integrated features that bridge data integration and analytics, enabling unified governance and a streamlined management experience across your entire data lifecycle.

    The maintenance windows will occur per region, during off-peak hours, and are expected to have a maximum of 90 minutes of effective downtime. 

     

    What is the expected impact?

    A full outage of Talend Cloud and Talend Management Console for a duration of up to 90 minutes within a preplanned 4-hour window.

    The following applications will not be accessible: 

    • Talend Management Console (TMC)
    • Talend Data Stewardship (TDS)
    • Talend Data Preparation (TDP)
    • Talend Data Inventory (TDC)
    • Talend Pipeline Designer (TPD)
    • Talend API Designer and Tester
    • Talend Studio
    • Talend Cloud Engines

    All APIs for Talend Cloud will be unavailable during the outage.

     

    In detail:

    • Cloud engines will not be available, and executions running on Cloud Engines will be terminated.
    • Talend Studio users may be disconnected from their session, and it will not be possible to open a new Talend Studio session except in local mode.
    • Executions that are already in progress during the outage will terminate correctly except on cloud engines, but all tasks or plans scheduled to start during those periods will be skipped.
    • Skipped executions will not be tagged as failed, since they were never started. For this reason, check the execution status of your tasks and plans to ensure that all important ones are not skipped, or start them manually if necessary. See What do I need to do to prepare? further down in this blog post.
    • Static IP addresses for Cloud Engines corresponding to Disaster Recovery regions will change during maintenance. See What follow-up actions are required? further down in this blog post.

     

    What do I need to do to prepare?

     

    What follow-up actions are required?

    • After the maintenance window, check and monitor the execution status of tasks and plans, as well as the status of your Remote Engines.
      In some instances, Remote Engines might require a restart if marked as unavailable in the Talend Management Console or if tasks cannot be executed as expected.

      If restarting the Remote Engine does not resolve the issue, follow the pairing instructions in Pairing Remote Engines using a dedicated web service to reset the key and re-pair the Remote Engine.

      If your Remote Engine Gen2 is unavailable or cannot execute tasks, then:

      1. Upgrade your Remote Engine Gen2 to the latest 2026-04 release: Updating the Remote Engine Gen2 when installed using the execution script
      2. And re-establish the pairing: Re-establish the pairing of Remote Engine Gen2

    • If you use a predefined static IP on Cloud Engine, you will need to allow the new Disaster Recovery Region's IP addresses, which will have changed at this point. While this does not immediately affect production, it will impact any potential Disaster Recovery process.

      After the maintenance window, check your static IPs (Disaster Recovery) as documented in Using predefined static IP addresses for execution containers and update your firewalls accordingly.

      No change is required for the active region's IP addresses. They will be migrated and will continue to work as they do today, ensuring no production interruption.

     

    When will the maintenance take place?

    Each region will undergo maintenance for 4 hours during off-peak hours, with a maximum of 90 minutes of effective downtime.

     

    Talend Cloud - AWS - Asia Pacific (Sydney): au.cloud.talend.com
    Maintenance start: Wednesday 25 March 2026, 22:00 AEDT (UTC: 25/03/26 - 11:00)
    Maintenance end: Thursday 26 March 2026, 02:00 AEDT (UTC: 25/03/26 - 15:00)

    Talend Cloud - AWS - Asia Pacific (Tokyo): ap.cloud.talend.com
    Maintenance start: Monday 20 April 2026, 22:00 JST (UTC: 20/04/26 - 13:00)
    Maintenance end: Tuesday 21 April 2026, 02:00 JST (UTC: 20/04/26 - 17:00)

    Talend Cloud - AWS - US East (N. Virginia): us.cloud.talend.com
    Maintenance start: Monday 27 April 2026, 02:00 EDT (UTC: 27/04/26 - 06:00)
    Maintenance end: Monday 27 April 2026, 06:00 EDT (UTC: 27/04/26 - 10:00)

    Talend Cloud - AWS - Europe (Frankfurt): eu.cloud.talend.com
    Maintenance start: Tuesday 26 May 2026, 21:00 CEST (UTC: 26/05/26 - 19:00)
    Maintenance end: Wednesday 27 May 2026, 01:00 CEST (UTC: 26/05/26 - 23:00)

    To identify which region applies to your tenant, see Accessing Talend Cloud applications.

    To track further updates during the scheduled Qlik Cloud Maintenance, please visit our Qlik Cloud Status page. This blog post will be updated with additional information where necessary. 

     

    Thank you for choosing Qlik,
    Qlik Support

  • Japan

    [User case-study sessions from BookLive, Gakken, and Nissin Foods confirmed!] AI Reality Tour Tokyo 2026

    AI Reality Tour Tokyo 2026 will be held on Wednesday, June 10.

    User case-study sessions from BookLive, Gakken, and Nissin Foods have been confirmed!

    The event features keynotes from Qlik experts, advanced case studies from Qlik users, the latest product information from Qlik's technical teams, and the newest solutions and exhibition booths from Qlik partner companies. It introduces cutting-edge solutions that close the gap between the value AI promises and today's reality, helping you realize, accelerate, and adapt AI.

    Full session details and registration are available here.

    •  Qlik's AI vision and strategy: practical methods for driving successful enterprise-wide AI adoption
    •  The latest in agentic AI: new possibilities opened up by innovations you can use today
    •  Advanced customer case studies: companies using Qlik to achieve business transformation
    •  New connections: networking with Qlik experts and industry peers for fresh insights

    Registration closes at 17:00 on Tuesday, June 2. Please register early.

    Event overview

    Date and time: Wednesday, June 10, 2026, 13:00 - 18:30 (reception opens at 12:00); networking reception 18:30 - 19:30
    Venue: Ariake Central Tower Hall & Conference, 3F/4F Ariake Central Tower, 3-7-18 Ariake, Koto-ku, Tokyo
    Admission: free
    Inquiries: please contact Marketingjp@qlik.com

    Register now
  • Explore Qlik Gallery

    Decomposition Tree with AI Splits

    AnyChart. Users drill into any metric across any dimensions, in any order, and find what drives the numbers.

    🔗 >> EXPLORE THIS APP LIVE OR DOWNLOAD .QVF <<

    🔗 >> SEE MORE DEMO APPS <<

  • Qlik Academic Program

    New Centers of Excellence in India

    This April, we inaugurated two new Centers of Excellence (CoE) under the Qlik Academic Program in Bangalore, the Silicon Valley of India. The new CoEs mark a new beginning for training and skilling students in Qlik technologies, along with other activities such as datathons.

    Reva University is one of the leading universities in the state of Karnataka and is ranked among the top universities in the region. Its School of Computer Science and Engineering took the lead in this initiative and established the CoE. ICT Academy, a strategic partner of the Qlik Academic Program, made the connection with Reva and ensured that arrangements met the CoE's requirements.


     

    The second CoE was established at Sai Vidya Institute of Technology (SVIT), a well-known engineering institution from which many students have earned their degrees. The Department of Computer Science Engineering has been coordinating the establishment of the CoE, and SVIT has planned many initiatives to take this engagement forward.


     

    The earlier CoEs at VJIT Hyderabad, Anurag University Hyderabad, and Kristu Jayanti University Bangalore continue to function successfully. Many students have been trained and certified through them, and they have hosted various events, including datathons.

    We hope to establish more CoEs this year and create a physical space for students to get trained under the Qlik Academic Program.

    To learn more about the academic program, please visit: qlik.com/academicprogram 

  • Product Innovation

    May the Data Flow: Qlik Replicate's May 2026 Technical Preview is Live

    It's May — and just like a certain galaxy far, far away, things are heating up. The Qlik Replicate May 2026 Technical Preview has landed, and it's ready for you to put through its paces.

    For those of you who live and breathe data replication, this is your moment to get ahead of the curve before general availability. The Technical Preview is available to download now.


     [Select Product Category: Qlik Data Integration, Product: Qlik Replicate, Release Number: Technical Preview]

     

    Now let's get into what's new.

    What's in the May 2026 Technical Preview?

    This release brings a couple of notable additions worth paying attention to:

    • IMS (R&D Monitored) and Fabric Mirror – improvements are now available, actively tracked by our R&D team as we refine the experience.
    • SAP Sybase ASE – Preview: making its debut in this Technical Preview, we are pleased to give you an early look at the improvements we've made for SAP Sybase ASE support.

     

    As always with a Technical Preview, the clue is in the name — this is your opportunity to explore, test, and feed back before the full release. Think of it as the dress rehearsal, not opening night.

    Before You Upgrade — A Quick But Important Note

    Please take a few minutes to review the known issues before proceeding with any upgrade. No one enjoys a surprise mid-pipeline, even in test environments.

     

    The full documentation and release notes for both Qlik Replicate and Qlik Enterprise Manager are available here:

    The docs are your friend here — treat them as such.

     

    Get Involved

    Technical Previews are only as good as the people who test them. If you hit something unexpected or spot something worth improving, drop your feedback in the comments below or raise it through the Community. Your input directly shapes what ships.

    May the data flow — and may your upgrades go smoothly. Happy testing.

  • Support Updates

    Qlik Talend Administration Center - Security Patches Available

    The following two Qlik Talend Administration Center security issues have been identified and subsequently resolved. Patches are already available.

     

    URL access control vulnerability (CVE-2026-pending)

    A broken access control issue has been identified in Qlik Talend Administration Center, which allows a user with View permission to modify the Qlik Talend Studio update URL.

    Affected Software 

    • All versions of Qlik Talend Administration Center before Patch_20251121_QTAC-1471_R2025-11_v1-8.0.1.

    See Security fix for Qlik Talend Administration Center URL access control vulnerability (CVE-2026-pending) for details. 

     

    Cross-site scripting vulnerability (CVE-2026-pending)

    A stored cross-site scripting security issue in the Qlik Talend Administration Center has been identified.

    Affected Software

    • All versions of Qlik Talend Administration Center before Patch_20260123_QTAC-1883 (cumulative patch)_R2026-01_v1-8.0.1 are affected.

    See Security fix for Qlik Talend Administration Center cross-site scripting vulnerability (CVE-2026-pending) for details.

     

    Recommendation

    Upgrade at the earliest opportunity. The following table lists the patch versions addressing the vulnerabilities.

    Always update to the latest version. Before you upgrade, check if a more recent release is available.

    Product: Qlik Talend Administration Center, URL access control vulnerability
    Patch: QTAC-1471
    Release date: November 21, 2025

    Product: Qlik Talend Administration Center, cross-site scripting vulnerability
    Patch: QTAC-1883
    Release date: January 23, 2026

     

    Thank you for choosing Qlik,
    Qlik Support

  • Support Updates

    Qlik Automate: Automation ownership changes for Analytics Admins, May 2026

    Qlik introduced a change in how automation permissions are handled for the Analytics Admin role.

    When was the change introduced?

    The change is already live as of the 11th of May, 2026.

     

    What does that mean for me?

    Analytics Admins can now claim ownership of another user's automation. After claiming ownership, they can make necessary changes to it and enable the automation. However, they can no longer transfer ownership to another user.

     

    How do I claim ownership of an automation?

    As an Analytics admin, to claim ownership of an automation:

    1. Navigate to the Automations section in the Administration Console
    2. Locate the automation you want to claim ownership of, and click the Actions menu (...)
    3. Choose Claim ownership


     

    This behavior change only applies to Analytics Admins. Tenant admins can still transfer ownership to any user with the appropriate access rights in the tenant.

     

    If you have any questions, we're happy to assist. Reply to this blog post or take your queries to our Support Chat.

     

    Thank you for choosing Qlik,
    Qlik Support

  • Explore Qlik Gallery

    Predicting London ULEZ effects on emissions

    C40 Cities. Use historic emissions data and the expansion of London's Ultra Low Emission Zone program to predict its effect on emissions.
  • Product Innovation

    From Raw Data to AI-Ready: Accelerating GenAI and Agentic Initiatives with Qlik Talend Cloud

    Most enterprise AI projects don’t fail because the model is wrong. They fail because the data isn’t ready. Data engineering leaders are now being asked to support a new wave of generative and agentic workloads that demand fresher data, broader source coverage, tighter governance, and richer context than traditional BI ever required — and to deliver it without growing the team.

    Qlik Talend Cloud Data Integration was built to close that gap. It provides a single, governed pipeline from operational sources to an open lakehouse — and on to the vector indexes, feature stores, and APIs that your AI systems actually consume. Combined with Qlik Open Lakehouse on Apache Iceberg, it turns your AI inputs into reusable AI data products: named, versioned, governed assets that any RAG application or agent can consume off the shelf.

    This post walks through the reference architecture, the pipeline that produces those data products, and a worked example that takes raw CRM and product data all the way to a working RAG copilot and an agentic workflow — both running off the same Iceberg foundation.

    Why data is the bottleneck for enterprise AI

    GenAI and agentic systems are not fundamentally different consumers of data, but they are far more demanding ones. A model is only as accurate, current, and trustworthy as the context it retrieves at inference time. For data engineering leaders, that translates into six hard requirements:

    • Freshness — Embeddings and agent context become stale quickly. Real-time CDC matters more than nightly batch.
    • Breadth — Useful AI requires content from CRMs, ticketing systems, document stores, ERPs, and operational databases — often dozens of sources per use case.
    • Quality — Bad data doesn’t just produce bad answers. It produces confidently wrong answers, which are worse.
    • Governance — PII, masking rules, lineage, and access controls must travel with the data into vector stores and tool calls, not stop at the warehouse boundary.
    • Openness — Locking AI-ready data into a proprietary store creates rework every time the model, framework, or query engine changes.
    • Reuse — Hand-rolling a new pipeline for every AI use case is how programs stall. The same curated data should serve a RAG copilot today and an agent tomorrow.

    Meeting all six at once with one-off pipelines is what kills enterprise AI velocity. The path forward is consolidation: one governed integration platform feeding one open lakehouse, with the Gold zone publishing reusable AI data products that any model, agent, or analyst can consume. Build once, govern once, serve many.

    Qlik Talend Cloud + Iceberg: a reference architecture

    The architecture has four layers: sources, integration, an open Iceberg lakehouse with medallion zones, and an AI serving layer. Qlik Talend Cloud handles change data capture, transformation, quality, and catalog metadata across the entire flow. The Gold zone is where curated outputs are published as named AI data products.

     

    Two design choices make this architecture work for AI specifically.

    First, the integration layer is real-time by default — log-based CDC keeps Bronze and Silver tables current without batch windows.

    Second, Gold is treated as a publishing surface, not a staging area. Each Gold data product is named, versioned, governed, and discoverable in the catalog. RAG and agents become two interfaces over the same products: built once, governed once, consumed many times.

    Figure 1. Reference architecture: Qlik Talend Cloud + open Iceberg lakehouse, serving RAG, agentic, and analytics workloads from the same governed Gold layer.

    The pipeline: from raw data to AI use

    The pipeline that operates on the architecture above runs in six stages — automated end-to-end, with quality and lineage enforced at every step. Each stage produces a more refined and trusted asset. Bronze preserves raw, append-only CDC for replay and audit. Silver applies data quality rules, deduplication, masking, and Type-2 history. Gold publishes AI data products: a document product (chunk-friendly text + metadata) for RAG, and a state product (curated entity, feature, and policy data) for agents. Both are versioned and registered, so consumers — vector indexers, semantic APIs, BI engines — read the same governed truth.

    Figure 2. The six-stage pipeline. Because every stage writes to Iceberg, downstream consumers — vector indexers, semantic APIs, BI engines — read the same governed truth.

    Worked example: from CRM tickets to a customer-support agent

    Picture a data engineering team chartered with delivering an AI-powered customer-support assistant. The use case has both a RAG side (deflecting common questions with vetted answers) and an agentic side (the assistant can look up customer status, open tickets, and trigger actions). The raw inputs are typical:

    • Salesforce — accounts, contacts, cases, case comments.
    • ServiceNow — incident records and resolution notes.
    • Confluence and SharePoint — a few thousand product KB articles.
    • Postgres operational DB — subscription and entitlement state.

    The pipeline at work

    1. Ingest. Qlik Talend Cloud uses log-based CDC to stream changes from Salesforce, ServiceNow, and Postgres in real time. KB articles are pulled on a connector schedule with content-hash detection so only changed docs flow through.
    2. Land in Bronze. Every change is written to append-only Iceberg tables in cloud object storage, partitioned by source and ingestion date. The raw audit trail is preserved for replay.
    3. Standardize in Silver. Push-down ELT cleanses text, masks PII (customer email, phone), conforms keys and status codes, and applies Type-2 history to entity tables (customer, case, entitlement, interaction). Trust scores are written alongside each table.
    4. Publish two Gold data products. rag_documents — KB articles + anonymized resolution notes from closed tickets, pre-joined and metadata-tagged for retrieval. agent_state — a fused customer_360 view, current entitlement state, and a small policy_rules table that defines what actions agents are allowed to take. Both are versioned, lineage-tracked, and registered in the catalog.
    5. Vectorize. rag_documents is chunked and embedded into a managed vector index with metadata filters for product, language, and access tier. The job is incremental — only new and changed rows of the data product trigger re-embedding.
    6. Serve and audit. agent_state is exposed via a thin Semantic API and parameterized SQL endpoints, ready for agent tool calls. Every agent action is written back to an audit_log Iceberg table — inputs, decision, tool call, outcome — so the same lakehouse that grounds the agent also makes its behavior explainable.
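As a small illustration of the change detection in step 1, content-hash filtering can be sketched like this (the function and hash store are hypothetical, not product APIs):

```python
import hashlib

def changed_docs(docs, seen_hashes):
    """Yield ids of documents whose content hash differs from the last run,
    updating the hash store so unchanged docs are skipped next time."""
    for doc_id, text in docs.items():
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if seen_hashes.get(doc_id) != digest:
            seen_hashes[doc_id] = digest
            yield doc_id

store = {}
first_run = list(changed_docs({"kb-1": "v1", "kb-2": "v1"}, store))   # both flow through
second_run = list(changed_docs({"kb-1": "v1", "kb-2": "v2"}, store))  # only the changed doc
```

The same idea is what keeps step 5 incremental: only rows whose content changed trigger re-embedding.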

    Powering RAG

    When a customer asks “Why was my last bill higher than usual?”, the copilot retrieves the top-k chunks from the rag_documents data product, filtered by the customer’s product entitlement — with a structured lookup against agent_state for the customer’s current invoice context. Because the underlying data products are continuously refreshed by Qlik Talend Cloud, the copilot cites guidance that reflects the current pricing schedule, not last month’s. Every retrieved chunk carries its lineage, so answers can be traced back to a specific source row in Salesforce or a specific KB article version.
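The metadata-filtered top-k retrieval can be sketched with a toy in-memory index; a real deployment would call the managed vector index's query API, and the field names here are illustrative:

```python
def top_k(query_vec, index, k, entitlement):
    """Keep only chunks the customer's entitlement allows, score them by dot
    product against the query vector, and return the k best matches."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    allowed = [doc for doc in index if doc["meta"]["product"] == entitlement]
    return sorted(allowed, key=lambda d: dot(query_vec, d["vec"]), reverse=True)[:k]

index = [
    {"id": "kb-17", "vec": [1.0, 0.0], "meta": {"product": "billing"}},
    {"id": "kb-42", "vec": [0.9, 0.1], "meta": {"product": "billing"}},
    {"id": "kb-99", "vec": [1.0, 0.0], "meta": {"product": "analytics"}},  # filtered out
]
hits = top_k([1.0, 0.0], index, k=1, entitlement="billing")
```

The entitlement filter runs before scoring, so a customer can never retrieve a chunk outside their access tier, no matter how similar it is.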

    Powering agentic workflows

    For agentic flows, the assistant plans and executes multi-step tasks against the same agent_state product: confirm identity, check entitlement, open a case in Salesforce via a write-back tool, and escalate to a human agent if confidence drops below a threshold defined in policy_rules. Every step is recorded in the audit_log table for explainability. The agent’s tools are backed by exactly the same data products the RAG side uses — which means a behavior change in the data, like a new product or pricing tier, propagates to both surfaces immediately, with no parallel pipelines and no copy-paste schemas. RAG and agents really are two interfaces over one set of products.
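The confidence-based escalation can be sketched as a simple policy check; the rule name and threshold below are illustrative stand-ins for a policy_rules row:

```python
POLICY_RULES = {"escalate_below_confidence": 0.7}  # illustrative policy_rules entry

def next_step(confidence, planned_action):
    """Escalate to a human agent when confidence drops below the policy
    threshold; otherwise proceed with the planned tool call."""
    if confidence < POLICY_RULES["escalate_below_confidence"]:
        return "escalate_to_human"
    return planned_action
```

Because the threshold lives in data rather than code, tightening the policy is a row update that both the agent and the audit trail pick up immediately.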

    From pipeline to production: your next move

    The fastest enterprise AI programs aren’t the ones with the cleverest prompts or the largest models. They’re the ones treating AI data products as the unit of delivery. Qlik Talend Cloud and Qlik Open Lakehouse give your team three things at once: real-time movement of broad source data, governed transformation into named and versioned data products, and an open Iceberg foundation that any model, framework, or agent can plug into. Build once, govern once, serve both RAG and agents from the same products.

    A 10–15 day starting sprint for data engineering leaders:

    • Pick one use case with two surfaces. Choose a domain where you need both a RAG copilot and a constrained agent (one or two write actions). Working backward from both surfaces forces the right data product shape.
    • Stand up CDC into Iceberg. Wire two or three high-value sources into Bronze via Qlik Talend Cloud, build the Silver entity layer, and publish two Gold data products: one for retrieval, one for action.
    • Measure freshness, trust, and reuse. Track event-to-context latency (freshness), quality-rule pass rate (trust), and how many AI surfaces consume the same Gold products (reuse). These three numbers tell you whether the pattern is ready to scale to the next domain — as configuration, not reinvention.

    Talk to your Qlik team. Ask about the AI-ready data solution templates — pre-built pipeline patterns for the most common GenAI and agentic use cases, including the customer-service pattern walked through above.

  • Product Innovation

    Unlocking Qlik Open Lakehouse Access from Talend Studio


    Native Qlik Open Lakehouse interoperability for Talend Studio

    With the March release, Talend Studio introduces native support for querying Qlik Open Lakehouse datasets through Amazon Athena — available in both Standard Data Integration jobs and Spark-based Big Data workflows.

    This means developers can now connect to Qlik Open Lakehouse data, execute SQL queries, and integrate the results downstream in the Talend job, without manual JDBC configuration or custom setup.

    Connecting Talend Studio to Qlik Open Lakehouse

    Talend Studio now connects natively to Qlik Open Lakehouse through Amazon Athena — a SQL query engine that runs directly on top of cloud storage, enabling access to Iceberg-managed data without data movement or duplication. Developers can:

    • Access Qlik Open Lakehouse data with an out-of-the-box configuration, with no manual JDBC setup required
    • Execute SQL queries directly within Talend jobs (Standard and Big Data)
    • Integrate Qlik Open Lakehouse data into existing Talend jobs without disrupting current workflows

    Reliable by Design

    Connecting to Qlik Open Lakehouse from Talend Studio is straightforward by design. The integration ships with dedicated Athena configuration and input components, eliminating manual setup. Runtime validation, improved error handling, and secure credential management ensure the connection remains stable and trustworthy in production environments.

    How Data is Organized in Qlik Open Lakehouse

    In Qlik Open Lakehouse, data is ingested incrementally and accumulated in Apache Iceberg tables. A logical abstraction layer — implemented as Trino views — resolves those changes into a consolidated latest-state representation, which different engines can query without handling change consolidation logic directly.

    This model supports two complementary data patterns:

    • Current-state access (SCD Type 1): query the latest-state view through Athena for operational and integration use cases
    • Full history access (SCD Type 2): query the underlying Iceberg tables directly for time-aware and audit analysis

    Both patterns are available across Standard Data Integration and Big Data jobs in Talend Studio, enabling teams to work with Qlik Open Lakehouse data in the way that best suits their use case.
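Under those two patterns, the SQL a Talend job submits through Athena might look like the following; the schema and table names are illustrative assumptions, not documented Qlik names:

```python
# Current state (SCD Type 1): query the consolidated latest-state view.
latest_state_sql = """
SELECT customer_id, status, updated_at
FROM lakehouse.customers_latest   -- view resolving changes to the latest state
WHERE status = 'active'
"""

# Full history (SCD Type 2): query the underlying Iceberg table directly.
history_sql = """
SELECT customer_id, status, valid_from, valid_to
FROM lakehouse.customers_changes  -- raw Iceberg table with every change row
ORDER BY customer_id, valid_from
"""
```

Operational jobs typically read the view; audit and time-aware analysis read the change table.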

     


    Looking Ahead

    This integration enables Talend Studio users to access Qlik Open Lakehouse data without changing their existing workflows — while aligning with modern, open-format architectures that support multiple query engines.

    Athena is the first fully supported access path in this model, with a roadmap to extend support to additional engines over time. For organizations moving away from traditional data warehouses or adopting multi-engine strategies, this represents a concrete step toward a more flexible data architecture.

     

  • Image Not found
    blog

    Support Updates

    Watch! Q&A with Qlik: Qlik Cloud Migration

    Don't miss our latest Q&A with Qlik! Pull up a chair and chat with our panel of experts to help you get the most out of your Qlik experience.

    WATCH HERE

    QnARecording.png
  • blog

    Japan

    Breaking Through the "Wall" of SAP S/4HANA Migration: Dramatically Transforming Test Data Management

    In this column, we unpack next-generation SAP data management approaches, based on the theme of "smart testing and rapid migration" presented by Qlik's Miguel Antunes at the recent TechEd 2025.

    1. The Test Data "Triple Burden" That Stalls Migration Projects

    Migrating to S/4HANA is not just a software upgrade; it involves an overhaul of the data model itself. Testing in a near-production environment is therefore essential, but project teams frequently run into the following challenges.

    2. "Agile" Data Management with Qlik Gold Client

    These challenges are addressed by Qlik Gold Client, an SAP-certified test data management platform. Instead of the brute-force "copy everything" approach, it switches to slicing and synchronizing only the data you need.

    Key features:

    3. Concrete Benefits for S/4HANA Migration

    Adopting Qlik Gold Client dramatically improves the ROI of a migration project.

    Summary: Toward 2027

    Will your migration to SAP S/4HANA be mere drudgery, or an opportunity for business transformation? The key lies in how you handle your data.

    Qlik Gold Client also supports RISE with SAP and S/4HANA Cloud (private edition) and is available on the SAP Store. Before you get bogged down in massive data volumes, why not consider making your data management smarter?

    Learn more: Qlik Gold Client page

    Key points of this solution
  • blog

    Design

    The New Write Table

    The write table was introduced to Qlik Cloud Analytics last month, so in this blog post I will review how it works and how it can be added to an app. The write table looks like the straight table, but editable columns can be added to it to update or add data. The updated or added data is visible to other users of the app, provided they have the correct permissions. Read more on write table permissions here. Something else to note: if you are using a touch-screen device, you will have to disable touch-screen mode for the write table to work. Looking at the write table for the first time, I found it intuitive and easy to use. Let's create a write table with some editable columns to see how easy it is.

    The write table object can be added to a sheet like any other visualization. Once it is added, columns can be added the same way dimensions and measures are added to a straight table. Below is a small write table with course information including the course ID, course name, instructor and location.

    write table.png

    To add an editable column from the properties panel, click on the plus sign (+) and select Editable column.

    editable.png

    The new editable column will be added. In the properties for the column, the title can be modified, and from the Show content drop-down, either Manual user input or Single selection can be chosen. Manual user input will create a free-form column that the user can type into. The Single selection option will allow me to create a drop-down list of options that the user can choose from.

    single selection.png

    I will change the title to Course Level, and for Show content I will select Single selection and add three list items by typing each list item and then clicking on the plus sign to add it to the list. The list items will be displayed in the drop-down in the order they are added, but they can be rearranged by hovering over a list item and dragging it to the desired position. List items can also be deleted by hovering over one and clicking the delete icon that appears to the left.

    list items.png

    When you come out of edit mode, the message below will appear for the editable column prompting you to define a set of primary keys.

    define.png

    Once you click Define, you will see the pop-up below where you can select the column(s) that will be used for the unique primary key. This is necessary to save and map the data entered in the editable column to the data model. I will select the CourseID column as the primary key.

    message.png

     Once this is done, I will see the Course Level column with the drop-down of list-items I added.

    dropdown.png

    Let’s add one more editable column that takes manual user input and name it Notes.

    notes.png

    As I add data or update the editable columns, the cells will be flagged orange to indicate that my edits have not been saved. Once I save the table, they will be flagged green and any new values entered are visible to other users. A cell will be blue if another user is currently making changes to the row, thus locking it. Changes are saved for 90 days in a change store (temporary storage location) provided by Qlik. After 90 days, the data will be deleted. It is also important to note that if an editable column is deleted, the data will be lost. This is also the case if the primary key used for the editable column is removed.

    save.png

    It is possible to retrieve the changes from a change store via the change-stores API or an automation. Using the REST connection and the change-store API, the changes made in a write table can be retrieved and stored in a QVD (if needed for more than 90 days) or added to the data model for use in other analytics. Qlik Automate can also be used to retrieve data from the change-store using the List Current Changes From Change Store block or the List Change Store History block. From there the data can be stored permanently in an external system for later use or used in the automation for another process. Qlik Help offers steps for retrieving data from a change-store.
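    As a rough Python sketch of such a retrieval, the request below is assembled with the standard library. The tenant hostname, endpoint path, change store ID, and query parameter are assumptions made for illustration rather than the documented change-stores API shape; consult Qlik Help for the real endpoints.

```python
import urllib.request
from urllib.parse import urlencode

# All identifiers below are placeholders/assumptions for illustration.
tenant = "https://your-tenant.us.qlikcloud.com"   # placeholder hostname
change_store_id = "abc123"                        # hypothetical ID
params = urlencode({"limit": 100})                # assumed paging parameter

req = urllib.request.Request(
    f"{tenant}/api/v1/change-stores/{change_store_id}/changes?{params}",
    headers={"Authorization": "Bearer <api-key>"},  # placeholder API key
)

# urllib.request.urlopen(req) would return pages of changes that could
# then be written to a QVD or forwarded to an external system.
print(req.full_url)
```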

    The write table can make it easy for users to add updates, feedback and important information that may not be available in the data model. Not only can this be done quickly, but it can be immediately visible to other colleagues. Learn more about the write table in the Product Innovation blog along with links to videos and write table FAQs.

    Thanks,

    Jennell

  • blog

    Design

    Do More with Qlik

    Today I want to introduce you to a gem that you may be missing out on. It is the Do More with Qlik community forum, led by @Michael_Tarallo . This forum is made up of concise videos that cover everything from Qlik capabilities to innovative ways to solve business challenges. It is for users of all levels, from beginners to seasoned Qlik users, and it spans a wide range of topics. Check out this introductory video to learn more, and bookmark the forum. You do not want to miss out on this!

    Thanks,

    Jennell

  • blog

    Qlik Academic Program

    Beyond Dashboards: Preparing Students for the Future of AI-Powered Analytics

    As artificial intelligence continues to transform industries, universities are increasingly exploring how to prepare students for this shift. Modern analytics is no longer only about looking at what happened in the past. It is about identifying patterns, predicting outcomes, automating processes, and helping people make faster and more informed decisions.

    One major change is the growing use of conversational analytics. Instead of manually navigating dashboards and filtering reports, users can increasingly ask questions in natural language and receive contextual insights based on trusted data. This makes analytics more accessible to a wider range of users and helps students engage with data in a more intuitive and interactive way.

    Another important development is predictive analytics. Rather than only analyzing historical information, students can now learn how to forecast trends, identify anomalies, and anticipate future outcomes using AI-supported tools and techniques. These skills are becoming increasingly valuable across industries such as finance, healthcare, marketing, operations, manufacturing, and supply chain management.

    At the same time, the rise of AI is also highlighting the importance of trusted and governed data. AI systems are only as effective as the quality and context of the data behind them. As Qlik highlights in its recent Agentic AI presentation, successful AI depends not only on AI capability itself, but also on trusted data, analytical context, and governance.

    This shift creates a valuable opportunity for educators to modernize analytics education and expose students to the technologies and workflows increasingly used in industry. Instead of treating analytics as a static reporting exercise, universities can introduce students to conversational analytics, predictive thinking, AI-assisted insights, and intelligent decision-making.

    Through the Qlik Academic Program, accredited university educators and students receive free access to Qlik Sense Cloud, including the full capabilities of a Qlik Sense tenant. This allows students to gain hands-on experience with interactive dashboards, data exploration, AI-powered analytics features, automation, and predictive analytics tools in a real-world environment.

    The program also provides free access to Qlik Learning, where educators and students can follow structured learning pathways and complete product qualifications to strengthen their analytics and data literacy skills. In addition, educators receive ready-to-use teaching resources including lesson plans, presentations, exercises, and classroom materials that can help integrate analytics into existing courses more easily.

    Importantly, these resources are designed not only for technical programs, but also for business, marketing, operations, finance, and other non-technical disciplines where data literacy is becoming increasingly essential.

    As AI continues to reshape the workplace, helping students understand how to work with data, analytics, and AI together will become more important than ever. Universities now have an opportunity to move beyond teaching dashboard creation alone and instead prepare students to become confident, curious, and data-driven decision makers in an increasingly AI-powered world.

    If you are interested in learning more about the Qlik Academic Program, feel free to contact us at academicprogram@qlik.com. More information about the program, including how to apply, can be found at qlik.com/academicprogram.

  • blog

    Support Updates

    Upcoming removal of GET method for /api/v1/apps in Qlik Cloud Analytics, May 202...

    Update, 6th of May 2026: This is now deployed in all regions.

     

    Previously, the /api/v1/apps endpoint could be used to list all apps on a tenant. This method has always been unsupported and undocumented, and will be removed in the first week of May 2026.

     

    How will this affect me?

    If you are currently using /api/v1/apps, switch to GET with /api/v1/items instead.

    This can be further filtered by choosing a resource type (such as /items?noActions=true&resourceType=app, /items?noActions=true&resourceType=script, or similar).
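    A minimal Python sketch of the replacement call, with the tenant hostname as a placeholder:

```python
from urllib.parse import urlencode

# Build the GET /api/v1/items URL that replaces the removed
# GET /api/v1/apps listing; the resourceType filter narrows it to apps.
tenant = "https://your-tenant.us.qlikcloud.com"  # placeholder hostname
query = urlencode({"noActions": "true", "resourceType": "app"})
url = f"{tenant}/api/v1/items?{query}"
print(url)  # .../api/v1/items?noActions=true&resourceType=app
```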

    What about similar requests?

    • GET with /api/v1/apps/APPID for a specific app will keep on working
    • POST with /api/v1/apps will also keep on working

     

    For more information, see:

     

    If you have any questions, we're happy to assist. Reply to this blog post or take your queries to our Support Chat.

     

    Thank you for choosing Qlik,
    Qlik Support

  • blog

    Design

    Finance Report with Waterfall Chart

    Several years ago, I wrote a blog post on how to create a profit and loss statement in QlikView. @Patric_Nordstrom has built upon this method to create a financial statement in Qlik Analytics with a straight table and a waterfall chart using inline SVG. In this blog post, I will review how he did it.

    Here is an example of the financial statement structure.

    structure.png

    There are plain rows, such as gross sales and sales returns, where the amount is the sum of the transactions made against the accounts. There are subtotals, such as net sales and gross margin, which are a sum of the previous plain rows. And there are partial sums, such as total cost of sales, which sum a subset of the previous plain rows, but not all of them.

    Patric identified two functions, RangeSum() and Above(), that are suited to calculating the subtotals and partial sums in a table. The RangeSum function sums a range of values, and the Above function evaluates an expression at a row above the current row within a column segment in a table. The Above function can be used with two optional parameters, offset and count, to further identify the rows to be used in the expression.
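    To make the offset and count mechanics concrete, here is a small Python emulation of the RangeSum(Above(Sum(AC), offset, count)) pattern; the row values are invented for the example.

```python
def range_sum_above(values, current_row, offset, count):
    """Emulate RangeSum(Above(expr, offset, count)): sum `count` values
    starting `offset` rows above the row at index `current_row`."""
    start = current_row - offset - count + 1
    window = values[max(start, 0):current_row - offset + 1]
    return sum(window)

# Plain rows: gross sales, sales returns, other income (invented values).
ac = [100, -5, 60]

# A subtotal row directly below them (row index 3) sums the three rows
# immediately above it, i.e. offset=1 and count=3.
subtotal = range_sum_above(ac + [0], 3, 1, 3)
print(subtotal)  # 155
```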

    The layout table below is used as a template for the financial statement.

    excel3.png

    • The RowStyle column is used for styling the rows, making text bold or underlined. Currently, we cannot use tab or blank tags, but hopefully in the future this will become available in the straight table.
    • The RowTitle is the account category that is to be used in the financial statement.
    • RowType is used as input to calculate the offset and count for the Above function.
    • AC is the actual amount.

    The AC column is included here in the layout file for demo purposes but could be calculated from the accounts and transactions in the data model as well.

    In the script, the layout table was loaded, and additional fields were created to support the waterfall chart, specifically offset and count fields to be used with the above function.

    script2.png

    Here is a view of the layout table with the new fields that were created in the script.

    complete layout table.png

    After the layout table is loaded and the new fields are created, some master measures can be created to be used in the inline SVG expression. Here are the 3 master measures Patric created:

    mBar is the bar length with an offset that is always 0.

    mBar.png

    mStart is the starting position of the bar in the waterfall chart and for subtotals, this is always 0.

    mStart.png

    mMax is the max bar length which is used to scale the bars in the waterfall chart.

    mMax.png

    Now the straight table can be created. The RowNr field is added for sorting purposes. The RowTitle and AC fields are added to show the account groupings in the financial statement along with their values. The inline SVG expression below, used for the waterfall chart, is the last column added to the straight table. It is made up of 3 parts:

    1. A plain line with an offset
    2. A thin gray line where x=0
    3. A label

    inline svg.png

    • The if statement on line 1 will determine if a bar is displayed. Bars will only be visible if the RowStyle is not blank.
    • Line 2 has the viewBox settings and sets the 0 for the x-axis.
    • On line 3 is the light gray line where x=0 and it is displayed on all non-blank rows of the financial statement.
    • Lines 4 and 5, in the yellow box, are the plain line with the offset, scaled using the mMax measure to control the length of the line.
    • Line 6 handles the bar color, light green for positive values and red for negative values.
    • On line 7, the if statement is used to set the stroke width of the bar. A thin line is used if the RowType is retrosum and a wider line is used for all other bars.
    • In the red box, on lines 10 through 13, the label text is set and placed either to the left or right of the bar depending on the value. Positive values are placed to the right of the bar and negative values are placed to the left of the bar.
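    As a stripped-down sketch of the same idea (not Patric's exact expression), the Python function below assembles an inline SVG string with the three parts described above: a gray zero line, a bar scaled and colored by sign, and a label anchored to the left or right of the bar. All dimensions and colors are illustrative.

```python
def waterfall_bar(value, start, max_len, width=250, height=20):
    """Build a minimal inline-SVG bar: a gray zero line, a colored
    bar scaled by max_len, and a label placed beside the bar."""
    scale = (width / 2) / max_len   # fit bars into half the width
    x0 = width / 2                  # x position of the zero line
    x1 = x0 + start * scale
    x2 = x0 + (start + value) * scale
    color = "lightgreen" if value >= 0 else "red"
    # Positive labels go right of the bar, negative labels left of it.
    label_x = max(x1, x2) + 4 if value >= 0 else min(x1, x2) - 4
    anchor = "start" if value >= 0 else "end"
    y = height / 2
    return (
        f'<svg viewBox="0 0 {width} {height}">'
        f'<line x1="{x0}" y1="0" x2="{x0}" y2="{height}" stroke="gray" stroke-width="1"/>'
        f'<line x1="{x1}" y1="{y}" x2="{x2}" y2="{y}" stroke="{color}" stroke-width="10"/>'
        f'<text x="{label_x}" y="{y + 4}" text-anchor="{anchor}" font-size="10">{value}</text>'
        f"</svg>"
    )

svg = waterfall_bar(value=-40, start=155, max_len=200)
print(svg)  # a red bar with its label anchored to the left
```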

    The result of the financial statement looks like this:

    final.png

    To add the text styling (bold and underline) from the layout table, the RowStyle field was added to the text style expression in the RowTitle and AC columns.

    rowstyle.png

    Indentation is added by using the Repeat function in the RowTitle column. It repeats a non-breaking space 6 times if there is a tab tag in the RowStyle field; otherwise, no indentation is applied.

    indent.png

    If the RowStyle is not blank, a bar is displayed for the waterfall chart, and the sum value for the actual amount (mBar in this case) is displayed.

    blank.png

    The chart column representation is set to Image in the properties of the straight table.

    image.png

    While this method looks complex, it is a simple and clean solution for adding a waterfall chart to a financial statement using straight table features and inline SVG. Using the layout table and inline SVG provides room for customization so that the financial report meets the needs and requirements of the user or customer.

    Thanks,

    Jennell

  • blog

    Explore Qlik Gallery

    Color Palette

    Color Palette (Nitin). This dashboard defines a set of predefined colors used to help maintain consistency and clarity across the dashboard, it...
  • blog

    Explore Qlik Gallery

    Knowledge Nuggets 2026(Q1)

    Knowledge Nuggets - 2026(Q1) (Insight Consulting). A collection of Qlik and Data & AI Literacy related videos and articles. Discoveries Contri...