After upgrading Qlik Sense Enterprise on Windows to May 2022 Patch 11 or August 2022 Patch 6, reload tasks may be listed as failed even if the script log completes successfully.
The engine logs (Engine\System Service_Engine_TIMESTAMP.log):
WARN QLIKSERVER XXXXX-b9df-48dc-a868-XXXX 20230130T151057.522+0100 12.1386.6.0 Command=Doc::DoSave;Result=409;ResultText=Warning: Conflict 0 0 1111411 QLK QLIKUSER XXXXX-47c2-4ea4-94c2-XXXXXX XXXXX-95f5-46e2-8b92-XXXXX ApplicationQLIK Engine Not available Doc::DoSave Doc::DoSave 409 Object write failed. XXXXXX-b9df-48dc-a868-XXXXX
The repository logs (Repository\Trace log called System_Engine_TIMESTAMP.log):
2072 20230129T193052.695+0100 ERROR QLIKSERVER System.Repository.Repository.Core.Repository.Common.TransactionUtility 185 XXXXX-d7f0-4983-8905-XXXX QLK\QLIKSERVICEUSER Error when committing The custom property value already assigned at
Qlik Sense Enterprise on Windows May 2022, August 2022
Warning: Modifying the Qlik Sense Repository manually is generally not supported by Qlik. Any modifications need to be done with extreme caution. Always back up your Qlik Sense database before committing changes.
This is caused by custom properties being duplicated and injected multiple times. The fix requires these duplicates to be removed.
SELECT "ID", "Definition_ID", "Value", "App_ID" FROM
( SELECT "ID", "Definition_ID", "Value", "App_ID",
ROW_NUMBER() OVER ( PARTITION BY "Value", "Definition_ID", "App_ID"
ORDER BY "App_ID" DESC, "ID"
) rn
FROM "CustomPropertyValues"
) t1 WHERE rn > 1 AND "App_ID" IS NOT null;DELETE FROM "CustomPropertyValues"
WHERE "ID" IN
( SELECT "ID" FROM
( SELECT "ID", "Definition_ID", "Value", "App_ID",
ROW_NUMBER() OVER ( PARTITION BY "Value", "Definition_ID", "App_ID"
ORDER BY "App_ID" DESC, "ID"
) rn
FROM "CustomPropertyValues"
) t1 WHERE rn > 1 AND "App_ID" IS NOT null);This is caused by a fix QB-9058, which has been introduced in May 2022 Patch 11 and August patch 6:
Qlik Sense: Possible to apply same custom property value more than once to a single app:
Fixed an issue where it was possible via QMC or API request to apply the same custom property value belonging to the same custom property definition more than once to the same Qlik Sense app/qvf.
From these patches onward, it is no longer possible for an application to have a duplicate custom property.
Duplicate properties will be shown as errors in the Qlik Sense log:
The custom property value already assigned
The diagnostic script above will also expose duplicate custom properties.
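To see which applications are affected, the diagnostic query can be joined to the repository's Apps table. This is a hedged sketch: it assumes the standard QSR schema in which "Apps"."ID" is referenced by "CustomPropertyValues"."App_ID".

-- Counts the surplus rows per app and custom property value (the rows the DELETE above would remove)
SELECT a."Name" AS app_name, t1."Value" AS duplicated_value, COUNT(*) AS extra_copies
FROM (
    SELECT "ID", "Definition_ID", "Value", "App_ID",
           ROW_NUMBER() OVER ( PARTITION BY "Value", "Definition_ID", "App_ID"
                               ORDER BY "App_ID" DESC, "ID" ) rn
    FROM "CustomPropertyValues"
) t1
JOIN "Apps" a ON a."ID" = t1."App_ID"
WHERE t1.rn > 1
GROUP BY a."Name", t1."Value";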
The Google Ads integration encounters the following extraction error:
tap - CRITICAL (<_InactiveRpcError of RPC that terminated with:
tap - CRITICAL status = StatusCode.PERMISSION_DENIED
tap - CRITICAL details = "The caller does not have permission"
tap - CRITICAL debug_error_string = "UNKNOWN:Error received from peer ipv4:172.253.115.95:443 {grpc_message:"The caller does not have permission", grpc_status:7}"
tap - CRITICAL >
tap - CRITICAL errors {
tap - CRITICAL error_code {
tap - CRITICAL authorization_error: USER_PERMISSION_DENIED
tap - CRITICAL }
tap - CRITICAL message: "User doesn't have permission to access customer. Note: If you're accessing a client customer, the manager's customer id must be set in the 'login-customer-id' header. See https://developers.google.com/google-ads/api/docs/concepts/call-structure#cid"
tap - CRITICAL }
tap - CRITICAL request_id: "xxxxxxxxxxxxxx"
Sign in to the Google Ads UI and ensure that:
You have access to the customer account ID you’re trying to query.
If it’s a client account, it must be linked to a manager (MCC) account that has API access.
If you see that the customer account is cancelled or inactive, reactivate it by following Google’s guide:
Reactivate a cancelled Google Ads account | support.google.com
If the issue persists, reach out to Google Ads API support with:
The error snippet
The request_id from your extraction logs (used by Google to trace the failed call)
Re-authorize the Google Ads integration:
Open Stitch in an incognito browser window.
Go to the Google Ads integration settings.
Click Re-authorize and follow the OAuth flow.
After re-authorizing, navigate to the Extractions tab and click Run Extraction Now.
If you manage multiple Google Ads accounts, note that:
Some accounts may work while others fail if they’re not connected to a manager account.
Only Ads accounts linked to a manager (MCC) have Ads API access.
Regular advertiser accounts must be linked to a manager account for Stitch to extract data successfully.
Prevention Tips
Periodically verify that the connected Google Ads account is linked to a manager account and the OAuth token has not expired.
Check for account status (ENABLED, CANCELLED, etc.) using the CustomerStatus enum | developers.google.com if you suspect deactivation.
Document the manager–client hierarchy for clarity when managing multiple accounts.
The error message indicates that the Google Ads API denied permission for the request. This is a raw authorization error returned by Google Ads, specifically:
USER_PERMISSION_DENIED
The user or OAuth credentials being used don’t have permission to access the target Ads customer account.
If you’re accessing a client (managed) account, the manager account ID must be provided in the login-customer-id header.
See Google’s reference documentation in AuthorizationError | developers.google.com.
This error occurs with the Google Cloud SQL PostgreSQL database integration and it displays as below in the extraction logs:
Fatal Error Occured - ERROR: temporary file size exceeds temp_file_limit
To resolve this issue, you need to increase the temp_file_limit parameter in your PostgreSQL configuration
Here are the steps to fix it:
Access your Google Cloud SQL instance settings.
Locate the database flags or parameters section.
Find the temp_file_limit flag and increase its value.
The value is specified in kilobytes (kB).
The default in PostgreSQL is -1, which means no limit. However, Cloud SQL may enforce a smaller custom value depending on your instance configuration.
If you’re unsure about the appropriate value, start by doubling the current limit and adjust as needed based on your workload. Increasing this limit allows larger queries to complete but may also increase storage usage, so monitor performance and disk space after making the change.
Save the changes.
Updating database flags in Cloud SQL typically requires a restart of the instance for the new settings to take effect.
After modifying the temp_file_limit, restart your PostgreSQL instance (if required) and run an extraction in Stitch.
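Once the instance is back up, you can confirm the active value from any SQL session. This is a minimal check; the setting is reported in kilobytes, or -1 for no limit:

-- Show the currently active temp_file_limit
SHOW temp_file_limit;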
The error message indicates that the temporary file size has exceeded the temp_file_limit in your Google Cloud SQL PostgreSQL database. This limit is set to control the maximum size of temporary files used during query execution.
See Google’s documentation on configuring database flags here:
Configure database flags | cloud.google.com
Automations using Qlik Reporting Service blocks fail with the following:
status:403
forbidden request
Details in the error status messages indicate:
user does not have permissions to export the report
If the automation has previously run successfully and no action has been taken by the Tenant Administrator to use the Qlik Reporting Service content roles:
If the Tenant Administrator has started to apply custom role control:
With the introduction of custom roles supporting the Qlik Reporting Service, the owner of an automation that calls the following Qlik Reporting Service blocks must have the appropriate platform and space roles that govern report execution for a specific app.
See Permissions | help.qlik.com for details.
The blocks:
The Qlik Reporting Service is a value-add service for which customers have long requested the ability to control who can use it and produce (metered) reports.
The release of custom roles supporting the Qlik Reporting Service brings consistency to all use cases, ensuring that users executing a report have the appropriate tenant role and space permission on the application from which the report data is produced.
Does a Qlik Sense pivot table have a hard limit on its dimensions and measures?
Qlik Sense pivot tables have a limit of 1000 measures and 1000 dimensions.
Approaching this limit is not recommended. Managing measures and dimensions of this volume will become difficult and impractical.
Integration fails with the following error:
tap - CRITICAL 'search_prefix'
tap - Traceback (most recent call last):
tap - File "/code/tap-env/bin/tap-s3-csv", line 10, in <module>
tap - sys.exit(main())
tap - ^^^^^^
tap - File "/code/tap-env/lib/python3.12/site-packages/singer/utils.py", line 235, in wrapped
tap - return fnc(*args, **kwargs)
tap - ^^^^^^^^^^^^^^^^^^^^
tap - File "/code/tap-env/lib/python3.12/site-packages/tap_s3_csv/__init__.py", line 81, in main
tap - config['tables'] = validate_table_config(config)
tap - ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
tap - File "/code/tap-env/lib/python3.12/site-packages/tap_s3_csv/__init__.py", line 63, in validate_table_config
tap - table_config.pop('search_prefix')
tap - KeyError: 'search_prefix'
main - INFO Tap exited abnormally with status 1
main - INFO No tunnel subprocess to tear down
main - INFO Exit status is: Discovery failed with code 1 and error message: "'search_prefix'".
Improving the integration to gracefully handle the missing key when an update to the connection/config occurs is currently on the roadmap. The R&D team is working on this behavior and a minor version upgrade is expected down the line; however, there is currently no ETA.
If you encounter this error with your AWS S3 CSV integration, please reach out to Qlik Support for further assistance.
The issue occurs due to missing keys in the configuration. Specifically, an update to the connection settings removed or modified the search_prefix, resulting in the key being absent from the config expected by the integration.
Google Ads extractions fail with:
tap - CRITICAL 504 Deadline Exceeded
tap - grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
tap - status = StatusCode.DEADLINE_EXCEEDED
tap - details = "Deadline Exceeded"
tap - debug_error_string = "UNKNOWN:Error received from peer {grpc_status:4, grpc_message:"Deadline Exceeded"}"
tap - >
main - INFO Exit status is: Discovery succeeded. Tap failed with code 1 and error message: "504 Deadline Exceeded". Target succeeded.
This error is often transient and requires no action. If the error persists, review your Tables to Replicate and consider de-selecting unneeded tables or columns. If the issue remains, please reach out to Qlik Support to discuss your use case and further review the integration settings.
The error "504 Deadline Exceeded" indicates that the extraction timed out due to a lack of response from the Google Ads API. By default, Stitch allows up to 15 minutes for a response before terminating the request.
Possible reasons the Google Ads API may exceed this threshold include:
Stitch Support frequently receives questions regarding the invocation of the PUBLIC role with Snowflake. When setting up a database user following Create a Stitch database and database user (Qlik Stitch Documentation), users will notice that the Stitch user executes GRANT statements on the PUBLIC role. This behavior can raise questions about role-based access and security implications within Snowflake.
Manually adjust permissions in Snowflake as needed. If you would prefer that Stitch offer a more streamlined approach, please submit a feature request. Refer to New Process for Submitting a Feature Request for All Talend Customers and Partners for details on how to submit one.
By default, Stitch grants the PUBLIC role access to schemas and objects it creates in Snowflake. This behavior often raises questions from users who are concerned about broad access permissions.
The reason Stitch does this is because it cannot assume which specific roles or users in your organization should have access to the data. Granting access to the PUBLIC role ensures that Stitch can write data successfully without making assumptions about your internal role structure.
This default behavior is not a requirement from Snowflake itself, but rather a design decision by Stitch to simplify initial setup and avoid permission-related sync failures.
If this approach does not align with your organization’s security policies, you may manually revoke access from the PUBLIC role after the initial sync. However, this step must be repeated each time a new integration runs or a new schema is created, which may not be scalable.
Snowflake supports granular permission control via the REVOKE command, allowing you to adjust access as needed:
🔗 REVOKE <privileges> … FROM ROLE | docs.snowflake.com
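As an illustration, here is a hedged sketch of revoking PUBLIC's access to a schema Stitch created (STITCH_DB and STITCH_SCHEMA are placeholder names; substitute your own):

-- Remove PUBLIC's privileges on the schema itself
REVOKE ALL PRIVILEGES ON SCHEMA STITCH_DB.STITCH_SCHEMA FROM ROLE PUBLIC;
-- Remove PUBLIC's privileges on every existing table in that schema
REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA STITCH_DB.STITCH_SCHEMA FROM ROLE PUBLIC;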
While this manual revocation process works, it requires ongoing attention. If tighter access control is a priority and manual intervention isn’t feasible, you may want to consider alternative destinations or workflows.
The Qlik Sense Repository Service sets up and logs performance counters on startup. If the base Windows counters are corrupt, startup cannot proceed.
The error in the logs:
fatal exception during startup Cannot load Counter Name data because an invalid index '' was read from the registry. at System.Diagnostics.PerformanceCounterLib.GetStringTable(Boolean isHelp) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) [...] at System.Threading.ThreadHelper.ThreadStart() 3facc7ff-8275-4847-b3b7-338d32c4d1c5
This is the Repository Service checking a performance counter that does not exist.
Qlik Sense April 2019 will log information including instructions to resolve the issue. See the related Release Notes for ID QLIK-92800:
"Failed to initialize usage of Windows Performance Counters. Make sure that performance counters are enabled or try rebuilding them with "lodctr /R"
The workaround is to repair the Windows Performance Counters by rebuilding them with lodctr /R from an elevated command prompt:
Pitfalls:
If you get as a return:
Error: Unable to rebuild performance counter setting from system backup store, error code is 5
then your prompt was not elevated. Elevate the command prompt with administrator permissions.
Ensure that the counters are not disabled in the registry
The counters may be disabled via registry settings. Please check the following registry locations to ensure that the counters have not been disabled.
HKLM\System\CurrentControlSet\Services\%servicename%\Performance
%servicename% represents any service with a performance counter. For example: PerfDisk, PerfOS, etc.
There may be registry keys for "DisablePerformanceCounters" in any of these locations. As per the following TechNet article, this value should be set to 0. If the value is anything other than 0 the counter may be disabled.
Disable Performance Counters
http://technet.microsoft.com/en-us/library/cc784382.aspx
ref. https://support.microsoft.com/en-us/help/2554336/how-to-manually-rebuild-performance-counters-for-windows-server-2008-6
Once done, return to Option 1
Your company might need to migrate its users from an old Active Directory domain to a new one. Sometimes usernames will also be renamed.
In some cases, it won't be possible to use the QMC to perform the migration of a document permission, due to users having the same name in the old and new domain.
If documents are being distributed using the QlikView Publisher functionality, the DistributionDetail.xml file can be edited to replace the old domain and user names with the new ones.
Prior to doing this, ensure that a QVPR backup exists.
RecipientName="domain1\user1"
RecipientName="domain2\user2"This article describes the procedure for when QlikView Server is migrated to a new domain. In this scenario, the existing QlikView Server that will be moved to a new domain is a single server QlikView Server installation and has a static IP address.
What you need to take into account are permissions (Service Account, User access to files) and the name of the machine in case that changes as well. License assignments such as User CALs and Document CALs will need to be redone, as those will reference the previous domain name.
Changing the hostname of the QlikView Server requires a change of the references to the hostname for each service. See Migrate and restore your backup in the QlikView upgrade and migration section on our Help for details.
CALs will not automatically refer to the new domain\ prefix. You will need to manually re-assign them.
Refer to the Power Tools for QlikView and the User Management tool.
NOTE: The CALs will not be available for 7 days; no exceptions. Plan the migration for an appropriate date range. The only alternative to avoid the quarantine is to completely clear the license and then, after reapplying it, reassign all the CALs.
The QlikView Administrator will have to edit the domain\ prefix for all available objects.
The QlikView Shared File Cleanup tool can be used to change ownership of objects. See How to change Server Object Owner in QlikView using the inbuilt Cleanup Tool for details.
See How to migrate Active Directory Users in QlikView for details.
When replicating data from a MySQL integration, users may encounter the following extraction error:
Fatal Error Occurred - Streaming result set com.mysql.cj.protocol.a.result.ResultsetRowsStreaming@xxxx is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
2025-09-30 20:30:00,000Z tap - INFO [main] tap-hp-mysql.sync-strategies.common - Querying: SELECT `pk_col`, `col1`, `col2`, `col3` FROM `schema`.`table` WHERE ((`pk_col` > ? OR `pk_col` IS NULL)) AND ((`pk_col` <= ?)) ORDER BY `pk_col` (<last PK value checked>, <max PK value>)
2025-09-30 20:32:00,000Z tap - FATAL [main] tap-hp-mysql.main - Fatal Error Occurred - Streaming result set com.mysql.cj.protocol.a.result.ResultsetRowsStreaming@XXXX is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
SELECT `pk_col`, `col1`, `col2`, `col3`
FROM `schema`.`table` WHERE (`pk_col` IS NULL OR `pk_col` > [last PK value checked]) AND `pk_col` <= [max PK value] ORDER BY `pk_col`;
SHOW FULL PROCESSLIST;

SELECT ID, USER, HOST, DB, COMMAND, TIME, STATE, INFO
FROM information_schema.PROCESSLIST
WHERE STATE = 'Sending data';
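If these queries surface a session stuck in the 'Sending data' state, the offending statement can be terminated by its ID. This is an illustrative step; 12345 stands for the ID value returned by the queries above:

-- Terminate only the running statement, keeping the connection open
KILL QUERY 12345;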
If you are unable to alleviate the error following the above, please reach out to Qlik Support.
This error occurs when Stitch has an active server-side streaming ResultSet on a MySQL connection and tries to execute another statement on that same connection before the stream is fully consumed and closed. MySQL’s JDBC driver allows only one active statement per connection while a streaming result is open.
Potential Contributors
When using Amazon S3 as a target in a Qlik Replicate task, the Full Load data is written to CSV, TEXT, or JSON files (depending on the endpoint settings). The Full Load files are named using incremental counters, e.g. LOAD00000001.csv, LOAD00000002.csv. This is the default behavior.
In some scenarios, you may want to use the table name as the file name rather than LOAD########.
This article describes how to rename the output files from LOAD######## to the <schemaName>_<tableName>__######## format while Qlik Replicate is running on a Windows platform.
This article focuses on cloud target endpoints (ADLS, S3, etc.). The example uses Amazon S3 as the remote cloud storage.
This customization is provided as is. Qlik Support cannot provide continued support for the solution. For assistance, reach out to Professional Services.
@echo on
rem Point the AWS CLI to the credentials file generated in step 3
setx AWS_SHARED_CREDENTIALS_FILE C:\Users\demo\.aws\credentials
rem %1 = the uploaded file (${FILENAME}); strip its path and extension
for %%a in (%1) do set "fn=%%~na"
echo %fn%
rem Extract the 8-digit counter from LOAD######## (skip the 4 characters of "LOAD")
set sn=%fn:~4,8%
echo %sn%
rem %2 = ${TABLE_OWNER}, %3 = ${TABLE_NAME}; rename (move) the file to <owner>_<table>__<counter>.csv under the <owner>.<table>/ folder
aws s3 mv s3://%1 s3://qmi-bucket-1234567868c4deded132f4ca/APAC_Test/%2.%3/%2_%3__%sn%.csv
where C:\Users\demo\.aws\credentials is the credentials file generated in step 3 above. The values in the sample are obfuscated.
General
Bucket name : qmi-bucket-1234567868c4deded132f4ca
Bucket region : US East (N. Virginia)
Access options : Key pair
Access key : DEMO~~~~~~~~~~~~UXEM
Secret key : demo~~~~~~~~~~~~ciYW7pugMTv/0DemoSQtfw1m
Target folder : /APAC_Test
Advanced
Under Post Upload Processing, choose "Run command after upload"
Command name : myrename_S3.bat
Working directory: leave blank
Parameters : ${FILENAME} ${TABLE_OWNER} ${TABLE_NAME}
7. Start or reload the Full Load ONLY task and verify the file output.
C:\Users\demo>aws s3 ls s3://qmi-bucket-1234567868c4deded132f4ca/APAC_Test --recursive --human-readable --summarize
2023-08-14 11:20:36 0 Bytes APAC_Test/
2023-08-15 08:10:24 0 Bytes APAC_Test/SCOTT.KIT/
2023-08-15 08:10:28 9 Bytes APAC_Test/SCOTT.KIT/SCOTT_KIT__00000001.csv
2023-08-15 08:10:24 0 Bytes APAC_Test/SCOTT.KIT500K/
2023-08-15 08:10:34 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000001.csv
2023-08-15 08:10:44 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000002.csv
2023-08-15 08:10:54 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000003.csv
2023-08-15 08:11:05 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000004.csv
2023-08-15 08:11:15 4.0 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000005.csv
2023-08-15 08:11:24 2.7 MiB APAC_Test/SCOTT.KIT500K/SCOTT_KIT500K__00000006.csv
Total Objects: 10
Total Size: 22.7 MiB
Qlik Replicate
Amazon S3 target
Qlik Replicate and File target: How to rename output files LOAD######## to table name format on Wind...
Qlik Replicate and File target: How to rename output files LOAD######## to table name format on Linu...
QVS files cannot be uploaded to Managed Spaces in Qlik Cloud.
.qvs (QlikView Script) files cannot be directly uploaded to a managed space in Qlik Cloud. QlikView Script files are intended as reusable load script blocks and are not considered application files (such as .qvf and .qvw).
To use a .qvs file, copy the script's contents into an app's load script editor or use an $(Include=...) statement to reference the file, which needs to be stored elsewhere and made accessible to the app.
Qlik is aware of some industry concerns around the use of the NPM library fast-glob. To address these concerns, Qlik is taking steps to remove this library from the Qlik Sense for Windows product. The removal is expected to be complete as of the November 2025 release.
Tables from your integration are not being loaded to Snowflake. The loading error is:
Cannot perform CREATE FILE FORMAT. This session does not have a current schema. Call 'USE SCHEMA', or use a qualified name.
GRANT ALL ON WAREHOUSE <stitch_warehouse> TO ROLE <stitch_role>;
GRANT ALL ON DATABASE <stitch_database> TO ROLE <stitch_role>;
ALTER USER <stitch_user> SET DEFAULT_ROLE = <stitch_role>;

For example, if the Stitch role is named STITCH:
ALTER USER <stitch_user> SET DEFAULT_ROLE = STITCH;

The root cause of the issue is likely related to permissions or role settings for the Stitch user in Snowflake. If a destination connection check in the Stitch user interface succeeds for your Snowflake connection but your loads fail, the error boils down to a permissions issue with loading the data.
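If loads still fail after these changes, you can inspect the user's default role and the role's privileges directly in Snowflake (a minimal sketch using the same placeholder names as above):

-- Inspect the user's properties, including DEFAULT_ROLE
DESC USER <stitch_user>;
-- List the privileges granted to the Stitch role
SHOW GRANTS TO ROLE <stitch_role>;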
To investigate a task failure, it is necessary to collect the diagnostics package from Qlik Cloud Data Integration.
Option Two: Monitor view within the task
Often, Support will request that specific logging components be increased to Verbose or Trace in order to effectively troubleshoot. To modify, click on the "Logging options" located in the right-hand corner of the logs view. The options presented in the UI do not use the same terminology as what you see in the logs themselves. For better understanding, please refer to this mapping:
| UI | Logs |
| --- | --- |
| Source - full load | SOURCE_UNLOAD |
| Source - CDC | SOURCE_CAPTURE |
| Source - data | SOURCE_UNLOAD, SOURCE_CAPTURE, SOURCE_LOG_DUMP, DATA_RECORD |
| Target - full load | TARGET_LOAD |
| Target - CDC | TARGET_APPLY |
| Target - Upload | FILE_FACTORY |
| Extended CDC | SORTER, SORTER_STORAGE |
| Performance | PERFORMANCE |
| Metadata | SERVER, TABLES_MANAGER, METADATA_MANAGER, METADATA_CHANGES |
| Infrastructure | IO, INFRASTRUCTURE, STREAM, STREAM_COMPONENT, TASK_MANAGER |
| Transformation | TRANSFORMATION |
Please note that if the View task logs option is not present in the dropdown menu, it indicates that the type of task you are working with does not have available task logs. In the current design, only Replication and Landing tasks have task logs.
This article explains why Stitch is not a real-time platform and what determines how long replication takes.
Stitch is a cloud-based ETL platform, which means it is not real-time and may experience latency due to the nature of cloud infrastructure and its step-based processing model.
Stitch’s replication process consists of three independent steps:
Extraction → Preparation → Loading
Each step takes time to complete and is influenced by various factors.
For more information, see: Stitch’s Replication Process | stitchdata.com
The speed and efficiency of Stitch’s replication process can be affected by factors such as the configured replication frequency, the volume and structure of the data, and loading concurrency at the destination (each described below).
These factors can vary over time and across integrations, which is why replication durations are not always predictable.
The replication frequency determines how often Stitch initiates a new extraction job (when one isn’t already in progress). Stitch tracks your tables and updates them based on the replication method you’ve selected.
However, this frequency does not guarantee that data will be prepared and loaded within the same time window. For example, a 30-minute frequency does not mean the full replication cycle completes in 30 minutes.
Stitch extracts one table at a time per integration (sequentially). It must finish extracting one table before moving to the next.
Once data is extracted, Stitch begins the preparation phase, which shapes the extracted records into rectangular (tabular) staging files. This step is batch-based and starts as soon as data is returned from the source. The duration of this phase depends on the structure and volume of the data.
Stitch can load up to 5 tables concurrently per destination. If 5 tables are already loading, others must wait until a slot becomes available. For example, with 10 integrations and 20 tables each, Stitch will load 5 tables at a time per destination.
Stitch’s loading systems check every 15–20 minutes for batches of records that are fully prepared and ready to be loaded into your destination.
What may appear as missing data is often just incomplete processing. Most data discrepancies resolve themselves once Stitch finishes processing.
In a setup with a local and remote Qlik Replicate server, the remote server's IP address has changed. Does this IP address change require any additional configuration steps?
To avoid any issues after an IP address change:
Is it possible to limit the available output formats for an OnDemand report, such as only allowing PDF rather than allowing multiple formats?
Qlik NPrinting On-Demand reports cannot be limited to a specific format (such as PDF). The Qlik Sense On-Demand reporting object will continue to present all available export formats. This is by design.