Qlik offers a wide range of channels to assist you in troubleshooting, answering frequently asked questions, and getting in touch with our technical experts. In this article, we guide you through all available avenues to secure your best possible experience.
For details on our terms and conditions, review the Qlik Support Policy.
Index:
We're happy to help! Here's a breakdown of resources for each type of need.
Support
Reactively fixes technical issues as well as answers narrowly defined specific questions. Handles administrative issues to keep the product up to date and functioning.

Professional Services (*)
Proactively accelerates projects, reduces risk, and achieves optimal configurations. Delivers expert help for training, planning, implementation, and performance improvement.
(*) Reach out to your Account Manager or Customer Success Manager.
Your first line of support: https://community.qlik.com/
Looking for content? Type your question into our global search bar:
Leverage the enhanced and continuously updated Knowledge Base to find solutions to your questions and best practice guides. Bookmark this page for quick access!
Subscribe to maximize your Qlik experience!
The Support Updates Blog
The Support Updates blog delivers important and useful Qlik Support information about end-of-product support, new service releases, and general support topics.
The Qlik Design Blog
The Design blog is all about Qlik products and solutions, such as scripting, data modelling, visual design, extensions, best practices, and more!
The Product Innovation Blog
By reading the Product Innovation blog, you will learn about what's new across all of the products in our growing Qlik product portfolio.
Q&A with Qlik
Live sessions with Qlik Experts in which we focus on your questions.
Techspert Talks
Techspert Talks is a free monthly webinar series that facilitates knowledge sharing.
Technical Adoption Workshops
Our in-depth, hands-on workshops allow new Qlik Cloud Admins to build alongside Qlik Experts.
Qlik Fix
Qlik Fix is a series of short videos with helpful solutions for Qlik customers and partners.
Suggest an idea, and influence the next generation of Qlik features!
Search & Submit Ideas
Ideation Guidelines
Get the full value of the community.
Register a Qlik ID:
Incidents are supported through our Chat, by clicking Chat Now on any Support Page across Qlik Community.
To raise a new issue, all you need to do is chat with us. With this, we can:
Log in to manage and track your active cases in the Case Portal. (click)
Please note: the easiest way to create a new case is via our chat (see above). The chat will log your case through a series of guided intake questions.
When creating a case, you will be prompted to enter problem type and issue level. Definitions shared below:
Select Account Related for issues with your account, licenses, downloads, or payment.
Select Product Related for technical issues with Qlik products and platforms.
If your issue is account related, you will be asked to select a Priority level:
Select Medium/Low if the system is accessible, but there are some functional limitations that are not critical to daily operations.
Select High if there are significant impacts on normal work or performance.
Select Urgent if there are major impacts on business-critical work or performance.
If your issue is product related, you will be asked to select a Severity level:
Severity 1: Qlik production software is down or not available, but not because of scheduled maintenance and/or upgrades.
Severity 2: Major functionality is not working in accordance with the technical specifications in documentation or significant performance degradation is experienced so that critical business operations cannot be performed.
Severity 3: Any error that is not a Severity 1 or Severity 2 issue. For more information, visit our Qlik Support Policy.
If you require a support case escalation, you have two options:
When other Support Channels are down for maintenance, please contact us via phone for high severity production-down concerns.
A collection of useful links.
Qlik Cloud Status Page
Keep up to date with Qlik Cloud's status.
Support Policy
Review our Service Level Agreements and License Agreements.
Live Chat and Case Portal
Your one stop to contact us.
Does a Qlik Sense pivot table have a hard limit on its dimensions and measures?
Qlik Sense pivot tables have a limit of 1000 measures and 1000 dimensions.
Approaching this limit is not recommended. Managing measures and dimensions of this volume will become difficult and impractical.
When you need to integrate auth0 JWT Bearer Token authentication with the Talend tRestRequest component, you can use a JWT Bearer Token with Keystore Type: Java Keystore (*.jks) to achieve this.
Follow the similar steps described in Obtaining a JWT from Microsoft Entra ID | Qlik Help.
-----BEGIN CERTIFICATE-----
MGLqj98VNLoXaFfpJCBpgB4JaKs
-----END CERTIFICATE-----
keytool -import -keystore talend-esb.jks -storepass changeit -alias talend-esb -file talend-esb.cer -noprompt
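To confirm that the certificate was imported under the expected alias, you can list the keystore contents before configuring the component (an optional check, assuming the same alias and password as above):
keytool -list -keystore talend-esb.jks -storepass changeit -alias talend-esb
Once the import is verified, configure the tRestRequest component as follows: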
Security: JWT Bearer Token
Keystore File: /path_to/talend-esb.jks
Keystore Password : changeit
Keystore Alias : talend-esb
Audience: "https://dev-xxxx.us.auth0.com/api/v2/"
Integration fails with the following error:
tap - CRITICAL 'search_prefix'
tap - Traceback (most recent call last):
tap - File "/code/tap-env/bin/tap-s3-csv", line 10, in <module>
tap - sys.exit(main())
tap - ^^^^^^
tap - File "/code/tap-env/lib/python3.12/site-packages/singer/utils.py", line 235, in wrapped
tap - return fnc(*args, **kwargs)
tap - ^^^^^^^^^^^^^^^^^^^^
tap - File "/code/tap-env/lib/python3.12/site-packages/tap_s3_csv/__init__.py", line 81, in main
tap - config['tables'] = validate_table_config(config)
tap - ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
tap - File "/code/tap-env/lib/python3.12/site-packages/tap_s3_csv/__init__.py", line 63, in validate_table_config
tap - table_config.pop('search_prefix')
tap - KeyError: 'search_prefix'
main - INFO Tap exited abnormally with status 1
main - INFO No tunnel subprocess to tear down
main - INFO Exit status is: Discovery failed with code 1 and error message: "'search_prefix'".
Improving the integration to gracefully handle the missing key when an update to the connection/config occurs is currently on the roadmap. The R&D team is working on this behavior and a minor version upgrade is expected down the line; however, there is currently no ETA.
If you encounter this error with your AWS S3 CSV integration, please reach out to Qlik Support for further assistance.
The issue occurs due to missing keys in the configuration. Specifically, an update to the connection settings removed or modified the search_prefix, resulting in the key being absent from the config expected by the integration.
The following IBM mainframe error occurs with the ARC CDC Solutions endpoint when the wrong ARC installation files are used:
Daemon (ATTDAEMN):
09.30.44 STC06957 ASTB1001E CDC library provided in STEPLIB but ATYLIB DD card missing - no CDC
09.30.44 STC06957 ASTB1012E ERROR=ABEND in effect
Sp15-620271-ARC-mvs.zip is confirmed to be the full patch while ARC_620271_mvs.zip is a partial patch.
Use ARC installation files that have SP prefixes in the file name.
ARC installation files without the SP prefixes are partial installation files that may not contain all the components that you need.
A measure value is suddenly no longer displayed correctly and shows null instead. It displayed the proper value until the Qlik Sense November 2024 release or earlier.
The issue is observed when:
QCB-33031 will be fixed in Qlik Cloud at the first opportunity. Qlik Sense Enterprise on Windows will have the fix introduced in the next major release in 2026.
Apply an aggregation or the Only() function to the affected measure.
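For example, a measure defined as a bare field can be wrapped in an aggregation (the field name below is a hypothetical placeholder):
// Measure defined as a bare field (affected by the engine change):
MyField
// The same measure wrapped in an aggregation instead:
Only(MyField)   // or Sum(MyField), depending on the intended result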
For Qlik Sense Enterprise on Windows, add the parameter UseTableEvaluatorWhenApplicable=0 to the Qlik Sense engine settings.ini file. The change must be done on all engines, if more than one is in use.
The setting instructs the engine to use the previous method of calculating measures without aggregation. It is slower, but the behavior remains as it was in Qlik Sense November 2024 and earlier.
See How to modify Qlik Sense Engine's Settings.ini for reference.
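For reference, a minimal sketch of the Settings.ini change (the parameter typically goes under the [Settings 7] section; follow the article above for the exact procedure and restart the Qlik Sense Engine Service afterwards):
[Settings 7]
UseTableEvaluatorWhenApplicable=0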
The difference in behavior is due to an internal change in the Qlik Sense Engine introduced to improve performance.
Previously, when a measure was defined directly as a field without an aggregation, the Engine automatically applied the Only() function. This function returns a value if the field contains only a single distinct (non-null) value.
However, appending the Only() function leads to slow performance. To optimize performance, the Engine now uses a more efficient calculation method. As a side effect, when a field contains one valid value and one null value, the result is returned as null instead of the expected value.
This issue has been identified as QCB-33031, and will be fixed to display the proper value instead of null in this scenario.
QCB-33031
You may encounter an issue in which your Stitch integrations remain paused after your account has been renewed or reactivated following a 40-day inactivity pause (see Qlik Stitch Documentation), or your account displays the error "You are over your current plan destination limit of 1. To continue loading data, remove any excess destinations or upgrade."
Even though your account has been modified, the Stitch account itself has not fully updated and is preventing account activity.
To resolve this, please reach out to the Qlik Support team. They will review your account to determine whether it can be unpaused or whether you will need to discuss the status of your contract with your account management team.
The Stitch account is not capturing the changes to your billing and contract details.
Google Ads extractions fail with:
tap - CRITICAL 504 Deadline Exceeded
tap - grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
tap - status = StatusCode.DEADLINE_EXCEEDED
tap - details = "Deadline Exceeded"
tap - debug_error_string = "UNKNOWN:Error received from peer {grpc_status:4, grpc_message:"Deadline Exceeded"}"
tap - >
main - INFO Exit status is: Discovery succeeded. Tap failed with code 1 and error message: "504 Deadline Exceeded". Target succeeded.
This error is often transient and requires no action. If the error is persistent, review your Tables to Replicate and consider de-selecting unneeded tables or columns. If the issue remains, please reach out to Qlik Support to discuss your use case and further review the integration settings.
The error "504 Deadline Exceeded" indicates that the extraction timed out due to a lack of response from the Google Ads API. By default, Stitch allows up to 15 minutes for a response before terminating the request.
Possible reasons the Google Ads API may exceed this threshold include:
Stitch Support frequently receives questions regarding the invocation of the PUBLIC role with Snowflake. When setting up a database user following Create a Stitch database and database user (Qlik Stitch Documentation), users will notice that the Stitch user executes GRANT statements on the PUBLIC role. This behavior can raise questions about role-based access and security implications within Snowflake.
Manually adjust permissions in Snowflake as needed. If you would prefer that Stitch offer a more streamlined approach, please submit a feature request. Refer to New Process for Submitting a Feature Request for All Talend Customers and Partners on how to submit a feature request.
By default, Stitch grants the PUBLIC role access to schemas and objects it creates in Snowflake. This behavior often raises questions from users who are concerned about broad access permissions.
The reason Stitch does this is because it cannot assume which specific roles or users in your organization should have access to the data. Granting access to the PUBLIC role ensures that Stitch can write data successfully without making assumptions about your internal role structure.
This default behavior is not a requirement from Snowflake itself, but rather a design decision by Stitch to simplify initial setup and avoid permission-related sync failures.
If this approach does not align with your organization’s security policies, you may manually revoke access from the PUBLIC role after the initial sync. However, this step must be repeated each time a new integration runs or a new schema is created, which may not be scalable.
Snowflake supports granular permission control via the REVOKE command, allowing you to adjust access as needed:
🔗 REVOKE <privileges> … FROM ROLE | docs.snowflake.com
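As an illustration, revoking the default PUBLIC grants on a Stitch-created schema might look like the following (the database and schema names are placeholders; adjust them to your environment):
REVOKE ALL PRIVILEGES ON SCHEMA STITCH_DB.STITCH_SCHEMA FROM ROLE PUBLIC;
REVOKE SELECT ON ALL TABLES IN SCHEMA STITCH_DB.STITCH_SCHEMA FROM ROLE PUBLIC;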
While this manual revocation process works, it requires ongoing attention. If tighter access control is a priority and manual intervention isn’t feasible, you may want to consider alternative destinations or workflows.
Qlik Sense Enterprise Client-Managed offers a range of Monitoring Applications that come pre-installed with the product.
Qlik Cloud offers the Data Capacity Reporting App for customers on a capacity subscription, and additionally customers can opt to leverage the Qlik Cloud Monitoring apps.
This article provides information on available apps for each platform.
The Data Capacity Reporting App is a Qlik Sense application built for Qlik Cloud, which helps you to monitor the capacity consumption for your license at both a consolidated and a detailed level. It is available for deployment via the administration activity center in a tenant with a capacity subscription.
The Data Capacity Reporting App is a fully supported app distributed within the product. For more information, see Qlik Help.
The Access Evaluator is a Qlik Sense application built for Qlik Cloud, which helps you to analyze user roles, access, and permissions across a tenant.
The app provides:
For more information, see Qlik Cloud Access Evaluator.
The Answers Analyzer provides a comprehensive Qlik Sense dashboard to analyze Qlik Answers metadata across a Qlik Cloud tenant.
It provides the ability to:
For more information, see Qlik Cloud Answers Analyzer.
The App Analyzer is a Qlik Sense application built for Qlik Cloud, which helps you to analyze and monitor Qlik Sense applications in your tenant.
The app provides:
For more information, see Qlik Cloud App Analyzer.
The Automation Analyzer is a Qlik Sense application built for Qlik Cloud, which helps you to analyze and monitor Qlik Application Automation runs in your tenant.
Some of the benefits of this application are as follows:
For more information, see Qlik Cloud Automation Analyzer.
The Entitlement Analyzer is a Qlik Sense application built for Qlik Cloud, which provides Entitlement usage overview for your Qlik Cloud tenant for user-based subscriptions.
The app provides:
For more information, see The Entitlement Analyzer.
The Reload Analyzer is a Qlik Sense application built for Qlik Cloud, which provides an overview of data refreshes for your Qlik Cloud tenant.
The app provides:
For more information, see Qlik Cloud Reload Analyzer.
The Report Analyzer provides a comprehensive dashboard to analyze metered report metadata across a Qlik Cloud tenant.
The app provides:
For more information, see Qlik Cloud Report Analyzer.
Do you want to automate the installation, upgrade, and management of your Qlik Cloud Monitoring apps? With the Qlik Cloud Monitoring Apps Workflow, made possible through Qlik's Application Automation, you can:
For more information and usage instructions, see Qlik Cloud Monitoring Apps Workflow Guide.
The OEM Dashboard is a Qlik Sense application for Qlik Cloud designed for OEM partners to centrally monitor usage data across their customers’ tenants. It provides a single pane to review numerous dimensions and measures, compare trends, and quickly spot issues across many different areas.
Although this dashboard is designed for OEMs, it can also be used by partners and customers who manage more than one tenant in Qlik Cloud.
For more information and to download the app and usage instructions, see Qlik Cloud OEM Dashboard & Console Settings Collector.
With the exception of the Data Capacity Reporting App, all Qlik Cloud monitoring applications are provided as-is and are not supported by Qlik. Over time, the APIs and metrics used by the apps may change, so it is advised to monitor each repository for updates and to update the apps promptly when new versions are available.
If you have issues while using these apps, support is provided on a best-efforts basis by contributors to the repositories on GitHub.
The Operations Monitor loads service logs to populate charts covering performance history of hardware utilization, active users, app sessions, results of reload tasks, and errors and warnings. It also tracks changes made in the QMC that affect the Operations Monitor.
The License Monitor loads service logs to populate charts and tables covering token allocation, usage of login and user passes, and errors and warnings.
For a more detailed description of the sheets and visualizations in both apps, visit the stories About the License Monitor and About the Operations Monitor, available from the app overview page under Stories.
Basic information can be found here:
The License Monitor
The Operations Monitor
Both apps come pre-installed with Qlik Sense.
If a direct download is required: Sense License Monitor | Sense Operations Monitor. Note that Support can only be provided for Apps pre-installed with your latest version of Qlik Sense Enterprise on Windows.
The App Metadata Analyzer app provides a dashboard to analyze Qlik Sense application metadata across your Qlik Sense Enterprise deployment. It gives you a holistic view of all your Qlik Sense apps, including granular level detail of an app's data model and its resource utilization.
Basic information can be found here:
App Metadata Analyzer (help.qlik.com)
For more details and best practices, see:
App Metadata Analyzer (Admin Playbook)
The app comes pre-installed with Qlik Sense.
Looking to discuss the Monitoring Applications? Here we share key versions of the Sense Monitor Apps and the latest QV Governance Dashboard as well as discuss best practices, post video tutorials, and ask questions.
LogAnalysis App: The Qlik Sense app for troubleshooting Qlik Sense Enterprise on Windows logs
Sessions Monitor, Reloads-Monitor, Log-Monitor
Connectors Log Analyzer
All Other Apps are provided as-is and no ongoing support will be provided by Qlik Support.
MySQL source tables with invisible columns crash a Qlik Replicate task on start. The following error is logged:
[INFRASTRUCTURE ]E: Process crashed with signal 11, backtrace: !{/opt/attunity/replicate/lib/at_base.so!4db3bd,/opt/attunity/replicate/lib/at_base.so!3a3938,/opt/attunity/replicate/lib/at_base.so!505053,/usr/lib64/libc.so.6!4e5b0,/opt/attunity/replicate/lib/libarepmysql.so!2b931,/opt/attunity/replicate/lib/libarepmysql.so!2e094,/opt/attunity/replicate/lib/libarepmysql.so!2e3c9,/opt/attunity/replicate/lib/libarepbase.so!4cecbb,/opt/attunity/replicate/lib/libarepbase.so!4d814e,/opt/attunity/replicate/lib/libarepbase.so!4681e2,/opt/attunity/replicate/lib/libarepbase.so!55fd36,/opt/attunity/replicate/lib/libarepbase.so!560108,/opt/attunity/replicate/lib/libarepbase.so!56b0ee,/opt/attunity/replicate/lib/libarepbase.so!5a1bca,/opt/attunity/replicate/lib/libarepbase.so!77d496,/usr/lib64/libpthread.so.0!81ca,/usr/lib64/libc.so.6!398d3,}! [1000100] (at_system_posix.c:575)
This issue is caused by defect RECOB-10177. Future patches for Qlik Replicate will incorporate the handling of invisible columns for MySQL tables.
2025.5 SP03 and newer versions.
Qlik Replicate could not handle invisible columns for the MySQL source.
2025.5 SP03
The Qlik Sense Repository Service sets up and logs performance counters on startup. Unfortunately, if the base counters are corrupt, the startup cannot proceed.
The error in the logs:
fatal exception during startup Cannot load Counter Name data because an invalid index '' was read from the registry.
at System.Diagnostics.PerformanceCounterLib.GetStringTable(Boolean isHelp)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
[...]
at System.Threading.ThreadHelper.ThreadStart()
3facc7ff-8275-4847-b3b7-338d32c4d1c5
This is the Repository Service checking a performance counter that does not exist.
Qlik Sense April 2019 and later log information, including instructions to resolve the issue. See the related Release Notes for ID QLIK-92800:
"Failed to initialize usage of Windows Performance Counters. Make sure that performance counters are enabled or try rebuilding them with "lodctr /R"
A workaround is to repair the Windows Performance Counters:
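The counters are rebuilt with the lodctr utility from an elevated command prompt; a minimal sketch, covering both the 64-bit and 32-bit counter stores as described in the Microsoft article referenced below:
cd C:\Windows\System32
lodctr /R
cd C:\Windows\SysWOW64
lodctr /R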
Pitfalls:
If the command returns:
Error: Unable to rebuild performance counter setting from system backup store, error code is 5
then your prompt was not elevated. Elevate the command prompt with administrator permissions.
Ensure that the counters are not disabled in the registry
The counters may be disabled via registry settings. Please check the following registry locations to ensure that the counters have not been disabled.
HKLM\System\CurrentControlSet\Services\%servicename%\Performance
%servicename% represents any service with a performance counter. For example: PerfDisk, PerfOS, etc.
There may be registry keys for "DisablePerformanceCounters" in any of these locations. As per the following TechNet article, this value should be set to 0. If the value is anything other than 0, the counter may be disabled.
Disable Performance Counters
http://technet.microsoft.com/en-us/library/cc784382.aspx
ref. https://support.microsoft.com/en-us/help/2554336/how-to-manually-rebuild-performance-counters-for-windows-server-2008-6
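A quick way to check a specific service from an elevated command prompt (PerfOS is used here only as an example service name):
reg query "HKLM\System\CurrentControlSet\Services\PerfOS\Performance" /v DisablePerformanceCounters
If the value is missing or set to 0, the counters for that service are not disabled.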
Once done, return to Option 1
After a successful upgrade to Qlik Sense Enterprise on Windows November 2024 patch 8, changing the MS SQL Server connection's password from the Data Load Editor (DLE) generates the following error:
Bad request 400
This error occurs when clicking 'SAVE connection', even after the connection has been successfully tested.
Changing the MS SQL Server connection's password from the Qlik Management Console (QMC) (in the Data connections view) works as expected.
This is a known defect (QCB-32467 | SUPPORT-4457) in Qlik Sense November 2024 and Qlik Sense May 2025.
Upgrade to:
QCB-32467 | SUPPORT-4457
Your company might need to migrate its users from an old Active Directory domain to a new one. Sometimes usernames will also be renamed.
In some cases, it won't be possible to use the QMC to perform the migration of a document permission, due to users having the same name in the old and new domain.
If documents are being distributed using the QlikView Publisher functionality, then the DistributionDetail.xml can be edited to have the new and old domain and user names replaced.
Prior to doing this, ensure that a QVPR backup exists.
RecipientName="domain1\user1"
RecipientName="domain2\user2"
This article describes the procedure for when QlikView Server is migrated to a new domain. In this scenario, the existing QlikView Server that will be moved to a new domain is a single server QlikView Server installation and has a static IP address.
What you need to take into account are permissions (Service Account, User access to files) and the name of the machine in case that changes as well. License assignments such as User CALs and Document CALs will need to be redone, as those will reference the previous domain name.
Changing the hostname of the QlikView Server requires a change of the references to the hostname for each service. See Migrate and restore your backup in the QlikView upgrade and migration section on our Help for details.
CALs will not automatically refer to the new domain\ prefix. You will need to manually re-assign them.
Refer to the Power Tools for QlikView and the User Management.
NOTE: The CALs will not be available for 7 days; no exceptions. Plan the migration for an appropriate date range. The only possible alternative to avoid the quarantine is to completely clear the license and then, after reapplying it, reassign all the CALs.
The QlikView Administrator will have to edit the domain\ prefix for all available objects.
The QlikView Shared File Cleanup tool can be used to change ownership of objects. See How to change Server Object Owner in QlikView using the inbuilt Cleanup Tool for details.
See How to migrate Active Directory Users in QlikView for details.
When replicating data from a MySQL integration, users may encounter the following extraction error:
Fatal Error Occurred - Streaming result set com.mysql.cj.protocol.a.result.ResultsetRowsStreaming@xxxx is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
2025-09-30 20:30:00,000Z tap - INFO [main] tap-hp-mysql.sync-strategies.common - Querying: SELECT `pk_col`, `col1`, `col2`, `col3` FROM `schema`.`table` WHERE ((`pk_col` > ? OR `pk_col` IS NULL)) AND ((`pk_col` <= ?)) ORDER BY `pk_col` (<last PK value checked>, <max PK value>)
2025-09-30 20:32:00,000Z tap - FATAL [main] tap-hp-mysql.main - Fatal Error Occurred - Streaming result set com.mysql.cj.protocol.a.result.ResultsetRowsStreaming@XXXX is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
SELECT `pk_col`, `col1`, `col2`, `col3`
FROM `schema`.`table` WHERE (`pk_col` IS NULL OR `pk_col` > [last PK value checked]) AND `pk_col` <= [max PK value] ORDER BY `pk_col`;
SHOW FULL PROCESSLIST;

SELECT ID, USER, HOST, DB, COMMAND, TIME, STATE, INFO
FROM information_schema.PROCESSLIST
WHERE STATE = 'Sending data';
If you are unable to alleviate the error following the above, please reach out to Qlik Support.
This error occurs when Stitch has an active server-side streaming ResultSet on a MySQL connection and tries to execute another statement on that same connection before the stream is fully consumed and closed. MySQL’s JDBC driver allows only one active statement per connection while a streaming result is open.
Potential Contributors
When using an Amazon S3 as a target in a Qlik Replicate task, the Full Load data are written to CSV, TEXT, or JSON files (depending on the endpoint settings). The Full Load Files are named using incremental counters e.g. LOAD00000001.csv, LOAD00000002.csv. This is the default behavior.
In some scenarios, you may want to use the table name as the file name rather than LOAD########.
This article describes how to rename the output files from LOAD######## to <schemaName>_<tableName>__######## format while Qlik Replicate running on a Windows platform.
In this article, we focus on cloud target endpoints (ADLS, S3, etc.). The example uses Amazon S3 as the remote cloud storage.
This customization is provided as is. Qlik Support cannot provide continued support for the solution. For assistance, reach out to Professional Services.
@Echo on
REM Point the AWS CLI at the credentials file generated in step 3 above
setx AWS_SHARED_CREDENTIALS_FILE C:\Users\demo\.aws\credentials
REM %1 = ${FILENAME}: extract the file name without path or extension (e.g. LOAD00000001)
for %%a in (%1) do set "fn=%%~na"
echo %fn%
REM Keep the 8-digit counter from LOAD######## (characters 5-12)
set sn=%fn:~4,8%
echo %sn%
REM Rename the object to <schema>_<table>__<counter>.csv; %2 = ${TABLE_OWNER}, %3 = ${TABLE_NAME}
aws s3 mv s3://%1 s3://qmi-bucket-1234567868c4deded132f4ca/APAC_Test/%2.%3/%2_%3__%sn%.csv
where C:\Users\demo\.aws\credentials is the credentials file generated in step 3 above. The values in this sample are obfuscated.
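For reference, the credentials file follows the standard AWS CLI shared-credentials format (the values below are the same obfuscated placeholders used in this example):
[default]
aws_access_key_id = DEMO~~~~~~~~~~~~UXEM
aws_secret_access_key = demo~~~~~~~~~~~~ciYW7pugMTv/0DemoSQtfw1m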
General
Bucket name : qmi-bucket-1234567868c4deded132f4ca
Bucket region : US East (N. Virginia)
Access options : Key pair
Access key : DEMO~~~~~~~~~~~~UXEM
Secret key : demo~~~~~~~~~~~~ciYW7pugMTv/0DemoSQtfw1m
Target folder : /APAC_Test
Advanced
Under Post Upload Processing, choose "Run command after upload"
Command name : myrename_S3.bat
Working directory: leave blank
Parameters : ${FILENAME} ${TABLE_OWNER} ${TABLE_NAME}
7. Start or reload the Full Load ONLY task and verify the file output.
C:\Users\demo>>aws s3 ls s3://qmi-bucket-1234567868c4deded132f4ca/APAC_Test --recursive --human-readable --summarize
2023-08-14 11:20:36 0 Bytes APAC_Test/
2023-08-15 08:10:24 0 Bytes APAC_Test/SCOTT.KIT/
2023-08-15 08:10:28 9 Bytes APAC_Test/SCOTT.KIT/SCOTT_KIT__00000001.csv
2023-08-15 08:10:24 0 Bytes APAC_Test/SCOTT.KIT500K/
2023-08-15 08:10:34 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000001.csv
2023-08-15 08:10:44 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000002.csv
2023-08-15 08:10:54 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000003.csv
2023-08-15 08:11:05 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000004.csv
2023-08-15 08:11:15 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000005.csv
2023-08-15 08:11:24 2.7 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000006.csv
Total Objects: 10
Total Size: 22.7 MiB
Qlik Replicate
Amazon S3 target
Qlik Replicate and File target: How to rename output files LOAD######## to table name format on Wind...
Qlik Replicate and File target: How to rename output files LOAD######## to table name format on Linu...
QVS files (read more) cannot be uploaded to Managed Spaces in Qlik Cloud.
.qvs (QlikView Script) files cannot be directly uploaded to a managed space in Qlik Cloud. QlikView Script files are intended as reusable load script blocks and are not considered application files (such as .qvf and .qvw).
To use a .qvs file, copy the script's contents into an app's load script editor or use an $(Include=...) statement to reference the file, which needs to be stored elsewhere and made accessible to the app.
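For example, a script block stored as a data file can be pulled into the app's load script like this (the connection and file names below are hypothetical):
// Reference a .qvs file stored in a data connection the app can access
$(Include=lib://MyDataFiles/shared_load_logic.qvs);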
This capability has been rolled out across regions over time:
With the introduction of shared automations, it is now possible to create, run, and manage automations in shared spaces.
Limit the execution of an automation to specific users.
Every automation has an owner. When an automation runs, it will always run using the automation connections configured by the owner. Any Qlik connectors that are used will use the owner's Qlik account. This guarantees that the execution happens as the owner intended it to happen.
The user who created the run, along with the automation's owner at run time, are both logged in the automation run history.
There are five options for running an automation:
Collaborate on an automation through duplication.
Automations are used to orchestrate various tasks; from Qlik use cases like reload task chaining, app versioning, or tenant management, to action-oriented use cases like updating opportunities in your CRM, managing supply chain operations, or managing warehouse inventories.
To prevent users from editing these live automations, we're putting forward a collaborate-through-duplication approach. This makes it impossible for non-owners to make changes to an automation that could negatively impact operations.
When a user duplicates an existing automation, they will become the owner of the duplicate. This means the new owner's Qlik account will be used for any Qlik connectors, so they must have sufficient permissions to access the resources used by the automation. They will also need permissions to use the automation connections required in any third-party blocks.
Automations can be duplicated through the context menu:
As it is not possible to display a preview of the automation blocks before duplication, please use the automation's description to provide a clear summary of the purpose of the automation:
The Automations Activity Centers have been expanded with information about the space in which an automation lives. The Run page now also tracks which user created a run.
Note: Triggered automation runs will be displayed as if the owner created them.
The Automations view in Administration Center now includes the Space field and filter.
The Runs view in Administration Center now includes the Executed by and Space at runtime fields and filters.
The Automations view in Automations Activity Center now includes Space field and filter.
Note: Users can configure which columns are displayed here.
The Runs view in the Automations Activity Center now includes the Space at runtime, Executed by, and Owner fields and filters.
In this view, you can see all runs from automations you own as well as runs executed by other users. You can also see runs of other users' automations where you are the executor.
To see the full details of an automation run, go to Run History through the automation's context menu. This is also accessible to non-owners with sufficient permissions in the space.
The run history view will show the automation's runs across users, and the user who created the run is indicated by the Executed by field.
The Metrics tab in the Automations Activity Center has been deprecated in favor of the Automations Usage app, which gives a more detailed view of automation consumption.
Qlik is aware of some industry concerns around the use of the NPM library fast-glob. To address these concerns, Qlik is taking steps to remove this library from the Qlik Sense for Windows product. The removal is expected to be complete as of the November 2025 release.
A replication fails with the following:
[TARGET_APPLY ]I: ORA-03135: connection lost contact Process ID: 19637 Session ID: 1905 Serial number: 3972 [1022307] (oracle_endpoint_load.c:862)
[TARGET_APPLY ]I: Failed to truncate net changes table [1022307] (oracle_endpoint_bulk.c:1162)
[TARGET_APPLY ]I: Error executing command [1022307] (streamcomponent.c:1987)
[TASK_MANAGER ]I: Stream component failed at subtask 0, component st_0_PCA UAT DW Target [1022307] (subtask.c:1474)
[TARGET_APPLY ]I: Target component st_0_PCA UAT DW Target was detached because of recoverable error. Will try to reattach (subtask.c:1589)
[TARGET_APPLY ]E: Failed executing truncate table statement: TRUNCATE TABLE "PAYOR_DW"."attrep_changesBF9CC327_0000402" [1020403] (oracle_endpoint_load.c:856)
This may require additional review by your database admin.
In this instance, the issue was caused by an invalid database-level trigger named TSDBA.AUDIT_DDL_TRG, which monitors DROP, TRUNCATE, and ALTER statements.
To resolve the issue, validate the trigger and add logic to exclude attrep_changes% tables, as these are temporary tables used for Qlik Replicate batch processing.
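As an illustration only, the guard inside the trigger body might look something like this (the actual trigger logic is site-specific):
-- Hypothetical guard inside TSDBA.AUDIT_DDL_TRG: skip Qlik Replicate's
-- temporary change tables before running the audit logic.
IF ora_dict_obj_name NOT LIKE 'attrep_changes%' THEN
  -- existing audit logic goes here
  NULL;
END IF;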