After upgrading to Qlik Talend Cloud Enterprise Edition R2025-08, some users reported that the tClaudeAIClient component was missing from the Talend Studio Palette.
Attempts to locate the component in the Palette or to import it manually were unsuccessful.
To restore the tClaudeAIClient component in Talend Studio, follow the steps below:
After restarting, verify that the tClaudeAIClient component is available in the Palette under the AI family.
The tClaudeAIClient component is a member of the AI family, which is offered through the EmbeddingAI optional feature. However, this feature may not be automatically installed or enabled by default, as it depends on the user's Studio configuration and feature synchronization settings.
The EmbeddingAI package includes additional AI-related components beyond tClaudeAIClient.
If the component still does not appear after installation, ensure your Studio is synchronized with your Qlik Talend Cloud license and feature repositories.
For enterprise environments with restricted update policies, check with your Talend administrator to confirm access to optional feature downloads.
Qlik Talend Cloud Enterprise Edition R2025-08 and later
Talend Studio (Cloud or Local Installation)
When troubleshooting an issue, it may be necessary to enable ODBC trace logging on Linux servers.
To enable tracing, add an [ODBC] section with the trace settings to the odbcinst.ini file:

[ODBC Driver 18 for SQL Server]
Description=Microsoft ODBC Driver 18 for SQL Server
Driver=/opt/microsoft/msodbcsql18/lib64/libmsodbcsql-18.1.so.2.1
UsageCount=1

[ODBC]
Trace=Yes
TraceFile=/odbctrace/odbctrace.log
TraceOptions=3

To disable tracing once you are finished, edit the odbcinst.ini file again and change Trace=Yes to Trace=No.
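To confirm that tracing is active, you can run a quick query through the driver and then inspect the trace file. A minimal sketch, assuming a DSN named MSSQLTest is defined in odbc.ini and the trace settings above (the DSN name and credentials are placeholders):

# Run a trivial query through the assumed DSN "MSSQLTest" using unixODBC's isql,
# then check that the trace file is being written
echo "SELECT 1;" | isql -v MSSQLTest myuser mypassword
tail -n 20 /odbctrace/odbctrace.log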
The Service Authentication method is chosen during the initial QlikView installation procedure.
Two options are available:
See Installing QlikView Server for more information.
To verify which Service Authentication method is currently in use:
Qlik Sense Enterprise on Windows offers the ability to preload apps. See Creating preload tasks | help.qlik.com for details. This feature was introduced in the May 2024 release.
The app is automatically unloaded after the combined duration of the App cache time (engine setting) and Time to live (preload task). The timer starts after the app has last been used.
To set or verify App cache time:
To set or verify Time to live:
In the provided example, App cache time is set to 30 minutes and Time to live to 10 minutes. The app will unload after not being in use for 40 minutes.
You may encounter the error 400 - Invalid SNI when calling the Talend Runtime API (Job as a Service) after installing the R2025-02 patch or later. Before R2025-02, the same certificate worked for the SSL connection to the Talend Runtime Server without any issue.
SNI validation is active as of the R2025-02 patch.
There are three options to solve this issue:
Disable SNI Host Check
This carries the same security risk as Jetty before it was updated (low security).
In the <RuntimeInstallationFolder>/etc/org.ops4j.pax.web.cfg file, add:

jetty.ssl.sniRequired=false
jetty.ssl.sniHostCheck=false
Alternatively, configure these Jetty parameters in the <RuntimeInstallationFolder>/etc/jetty.xml or jetty-ssl.xml file:
<New id="sslHttpConfig" class="org.eclipse.jetty.server.HttpConfiguration">
  <Arg><Ref refid="httpConfig"/></Arg>
  <Call name="addCustomizer">
    <Arg>
      <New class="org.eclipse.jetty.server.SecureRequestCustomizer">
        <Arg name="sniRequired" type="boolean">
          <Property name="jetty.ssl.sniRequired" default="false"/>
        </Arg>
        <Arg name="sniHostCheck" type="boolean">
          <Property name="jetty.ssl.sniHostCheck" default="false"/>
        </Arg>
        <Arg name="stsMaxAgeSeconds" type="int">
          <Property name="jetty.ssl.stsMaxAgeSeconds" default="-1"/>
        </Arg>
        <Arg name="stsIncludeSubdomains" type="boolean">
          <Property name="jetty.ssl.stsIncludeSubdomains" default="false"/>
        </Arg>
      </New>
    </Arg>
  </Call>
</New>
Resolve IP to Hostname
If the certificate includes the domain name, use that domain name instead of the IP address with the Jetty security updates in the Talend Runtime Server.
If your DNS server cannot resolve the hostname, however, you must call the service by IP address, so first check whether this workaround is feasible in your situation.
In the examples, the hostname is unresolvedhost.net and the IP is 10.20.30.40.
Try this API call at the command line:
curl -k -X GET --resolve unresolvedhost.net:9001:10.20.30.40 https://unresolvedhost.net:9001/services/
or
curl -k -X GET -H "Host: unresolvedhost.net" https://10.20.30.40:9001/services/
If this works, open the Talend component that makes the API call, go to Advanced settings (or the Headers table), and add a row with Key: Host and Value: the hostname that matches your SSL certificate (e.g. unresolvedhost.net).
This will instruct Talend to send the correct Host header, which most HTTP clients (including Java's HttpClient) will also use as the SNI value during the TLS handshake.
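If you want to confirm the server's SNI behavior independently of Talend, you can drive the TLS handshake directly with openssl, using the example hostname and IP from above:

# Connect to the IP but present the certificate's hostname as the SNI value;
# the handshake output shows whether the server accepts this name
openssl s_client -connect 10.20.30.40:9001 -servername unresolvedhost.net </dev/null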
SNI enforcement exists for a security reason. With the R2025-02 patch, the Jetty components on the Talend Runtime Server resolved a CVE in which connections were allowed even when the requested hostname did not match the hostname in the server's TLS certificate.
Certificates require the URI not to be localhost or an IP address, and to contain at least one dot, so a fully qualified domain name is best.
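To confirm which names your certificate actually covers, you can inspect its Subject Alternative Name entries. A sketch assuming a PEM-encoded certificate file (the filename is a placeholder):

# List the certificate's subject and Subject Alternative Name entries
openssl x509 -in server.crt -noout -text | grep -A1 "Subject Alternative Name"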
Due to a recent incident in the US-EAST-1 region, customer tenants may report inaccurate data movement capacities.
In certain conditions, the capacity for data movement tasks (Landing, Replication, and Lake Landing tasks) has been overreported. As a result, Qlik will not charge affected customers overage for the month of October 2025.
The incident only concerns the capacity metric (data moved) used for consumption and billing, and does not affect the operation of data movement or any other tasks.
No action is required by you to resolve the incident. Qlik is developing a fix to resolve telemetry data collection that will roll out in the coming weeks. The fix will resolve data movement capacity capture moving forward; however, historical telemetry data will not be adjusted.
During deployment of the fix, users may see data movement tasks briefly show as 'recoverable error' in the monitoring interfaces. This is a standard monitoring assertion built into Qlik Cloud that serves as a warning and does not impact the pipelines running in your gateway.
For any questions or concerns, please contact Qlik Support or your account representative.
This article describes how to handle dynamic tables in Qlik Replicate, such as when source tables are frequently added and deleted, without having to manually add each table to a task.
The methods described will work for most source endpoints.
Our example uses Oracle as the source.
Create a new table in the source:
create table SYSTEM.suri4(id int primary key, anane varchar(10))
The client Secret for a Single Sign-On Solution has expired.
After successfully logging in with a recovery address (https://YOUR-TENANT.eu.qlikcloud.com/login/recover) using the Service Account Owner (SAO) account, logging in to the Qlik Cloud Administration Console fails with:
User allocation required You do not have a valid user allocation. Please contact an administrator for more information
Once you have successfully logged in to the tenant via the recovery link using the SAO (Service Account Owner) credentials, navigate to the Qlik Cloud Administration Console.
This will allow you to access the Qlik Cloud Administration Console and update the client secret for your IDP settings.
Generating an API key fails with:
An error occurred when generating the API Key. Please try again.
The user does not have a role with the required permission.
The old Developer role will be deprecated. Follow the steps in this article to future-proof your system and create a custom role.
Is it possible to upgrade the Windows Server OS on the same machine where Qlik Sense, QlikView, or Qlik NPrinting is installed?
How will Windows Update and Windows Service packs affect Qlik Products?
Questions often arise when upgrading or applying Windows Service packs or running Windows Update, e.g. "Will applying service packs or patches from Microsoft affect installed Qlik software or clients?"
Typically, upgrading Windows, applying patches, or installing Windows Service Packs should not affect any installed Qlik products. As a precaution against unexpected effects, the practices below are recommended.
General best practices to prepare for updates include:
If you have general questions regarding compatibility of host operating systems, please review the release notes for your release. If you have questions regarding specific patches, raise this query directly in the relevant Qlik product forums.
After upgrading Qlik Sense Enterprise on Windows to May 2022 Patch 11 or August 2022 Patch 6, reload tasks may be listed as failed even if the script log completes successfully.
The engine logs (Engine\System Service_Engine_TIMESTAMP.log):
WARN QLIKSERVER XXXXX-b9df-48dc-a868-XXXX 20230130T151057.522+0100 12.1386.6.0 Command=Doc::DoSave;Result=409;ResultText=Warning: Conflict 0 0 1111411 QLK QLIKUSER XXXXX-47c2-4ea4-94c2-XXXXXX XXXXX-95f5-46e2-8b92-XXXXX ApplicationQLIK Engine Not available Doc::DoSave Doc::DoSave 409 Object write failed. XXXXXX-b9df-48dc-a868-XXXXX
The repository logs (Repository\Trace log called System_Engine_TIMESTAMP.log):
2072 20230129T193052.695+0100 ERROR QLIKSERVER System.Repository.Repository.Core.Repository.Common.TransactionUtility 185 XXXXX-d7f0-4983-8905-XXXX QLK\QLIKSERVICEUSER Error when committing The custom property value already assigned at
Qlik Sense Enterprise on Windows May 2022, August 2022
Warning: Modifying the Qlik Sense Repository manually is generally not supported by Qlik. Any modifications need to be done with extreme caution. Always back up your Qlik Sense database before committing changes.
This is caused by custom properties being duplicated and injected multiple times. The fix requires these duplicates to be removed.
SELECT "ID", "Definition_ID", "Value", "App_ID" FROM
( SELECT "ID", "Definition_ID", "Value", "App_ID",
ROW_NUMBER() OVER ( PARTITION BY "Value", "Definition_ID", "App_ID"
ORDER BY "App_ID" DESC, "ID"
) rn
FROM "CustomPropertyValues"
) t1 WHERE rn > 1 AND "App_ID" IS NOT null;DELETE FROM "CustomPropertyValues"
WHERE "ID" IN
( SELECT "ID" FROM
( SELECT "ID", "Definition_ID", "Value", "App_ID",
ROW_NUMBER() OVER ( PARTITION BY "Value", "Definition_ID", "App_ID"
ORDER BY "App_ID" DESC, "ID"
) rn
FROM "CustomPropertyValues"
) t1 WHERE rn > 1 AND "App_ID" IS NOT null);This is caused by a fix QB-9058, which has been introduced in May 2022 Patch 11 and August patch 6:
Qlik Sense: Possible to apply same custom property value more than once to a single app:
Fixed an issue where it was possible via QMC or API request to apply the same custom property value belonging to the same custom property definition more than once to the same Qlik Sense app/qvf.
From those patches onward, it is no longer possible for any application to have a duplicate custom property.
Duplicate properties will be shown as errors in the Qlik Sense log:
The custom property value already assigned
The diagnostic script above will also expose the duplicate custom properties.
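As a minimal sketch of how the diagnostic query might be executed, assuming the default Qlik Sense Repository Database settings (database QSR on port 4432; your host, credentials, and psql path may differ):

# Run the diagnostic query saved as diagnose_duplicates.sql against the
# repository database (default name QSR, default port 4432 - adjust as needed)
psql -h localhost -p 4432 -U postgres -d QSR -f diagnose_duplicates.sql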
Google Ads integration encounters the extraction error:
tap - CRITICAL (<_InactiveRpcError of RPC that terminated with:
tap - CRITICAL status = StatusCode.PERMISSION_DENIED
tap - CRITICAL details = "The caller does not have permission"
tap - CRITICAL debug_error_string = "UNKNOWN:Error received from peer ipv4:172.253.115.95:443 {grpc_message:"The caller does not have permission", grpc_status:7}"
tap - CRITICAL >
tap - CRITICAL errors {
tap - CRITICAL error_code {
tap - CRITICAL authorization_error: USER_PERMISSION_DENIED
tap - CRITICAL }
tap - CRITICAL message: "User doesn't have permission to access customer. Note: If you're accessing a client customer, the manager's customer id must be set in the 'login-customer-id' header. See https://developers.google.com/google-ads/api/docs/concepts/call-structure#cid"
tap - CRITICAL }
tap - CRITICAL request_id: "xxxxxxxxxxxxxx"
Sign in to the Google Ads UI and ensure that:
You have access to the customer account ID you’re trying to query.
If it’s a client account, it must be linked to a manager (MCC) account that has API access.
If you see that the customer account is cancelled or inactive, reactivate it by following Google’s guide:
Reactivate a cancelled Google Ads account | support.google.com
If the issue persists, reach out to Google Ads API support with:
The error snippet
The request_id from your extraction logs (used by Google to trace the failed call)
Re-authorize the Google Ads integration:
Open Stitch in an incognito browser window.
Go to the Google Ads integration settings.
Click Re-authorize and follow the OAuth flow.
After re-authorizing, navigate to the Extractions tab and click Run Extraction Now.
If you manage multiple Google Ads accounts, note that:
Some accounts may work while others fail if they’re not connected to a manager account.
Only Ads accounts linked to a manager (MCC) have Ads API access.
Regular advertiser accounts must be linked to a manager account for Stitch to extract data successfully.
Prevention Tips
Periodically verify that the connected Google Ads account is linked to a manager account and the OAuth token has not expired.
Check the account status (ENABLED, CANCELLED, etc.) using the CustomerStatus enum | developers.google.com if you suspect deactivation.
Document the manager–client hierarchy for clarity when managing multiple accounts.
The error message indicates that the Google Ads API denied permission for the request. This is a raw authorization error returned by Google Ads, specifically:
USER_PERMISSION_DENIED
The user or OAuth credentials being used don’t have permission to access the target Ads customer account.
If you’re accessing a client (managed) account, the manager account ID must be provided in the login-customer-id header.
See Google’s reference documentation in AuthorizationError | developers.google.com.
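For reference, this is roughly how the manager's customer ID is supplied on a raw Google Ads REST call. This is a hedged sketch, not what Stitch sends internally; the API version (v17), customer IDs, and tokens are placeholders:

# Hypothetical REST call: query a client account (1234567890) while
# authenticating through its manager account via the login-customer-id header
curl -H "Authorization: Bearer $OAUTH_ACCESS_TOKEN" \
     -H "developer-token: $DEVELOPER_TOKEN" \
     -H "login-customer-id: 9876543210" \
     -H "Content-Type: application/json" \
     -d '{"query": "SELECT customer.id FROM customer"}' \
     "https://googleads.googleapis.com/v17/customers/1234567890/googleAds:search"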
This error occurs with the Google Cloud SQL PostgreSQL database integration and it displays as below in the extraction logs:
Fatal Error Occured - ERROR: temporary file size exceeds temp_file_limit
To resolve this issue, you need to increase the temp_file_limit parameter in your PostgreSQL configuration. Here are the steps to fix it:
Access your Google Cloud SQL instance settings.
Locate the database flags or parameters section.
Find the temp_file_limit flag and increase its value.
The value is specified in kilobytes (kB).
The default in PostgreSQL is -1, which means no limit. However, Cloud SQL may enforce a smaller custom value depending on your instance configuration.
If you’re unsure about the appropriate value, start by doubling the current limit and adjust as needed based on your workload. Increasing this limit allows larger queries to complete but may also increase storage usage, so monitor performance and disk space after making the change.
Save the changes.
Updating database flags in Cloud SQL typically requires a restart of the instance for the new settings to take effect.
After modifying the temp_file_limit, restart your PostgreSQL instance (if required) and run an extraction in Stitch. A command-line sketch of the flag change follows below.
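If you prefer the gcloud CLI over the console, the flag can be set when patching the instance. A minimal sketch, assuming an instance named my-postgres-instance and a 10 GB limit (10485760 kB); note that --database-flags replaces all flags currently set on the instance, so include any existing flags you want to keep:

# Set temp_file_limit (value in kB) on the Cloud SQL instance; this
# overwrites the instance's existing flag list, so list all needed flags
gcloud sql instances patch my-postgres-instance \
    --database-flags=temp_file_limit=10485760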
The error message indicates that the temporary file size has exceeded the temp_file_limit in your Google Cloud SQL PostgreSQL database. This limit is set to control the maximum size of temporary files used during query execution.
See Google’s documentation on configuring database flags here:
Configure database flags | cloud.google.com
Automations using Qlik Reporting Service blocks fail with the following:
status:403
forbidden request
Details in the error status messages indicate:
user does not have permissions to export the report
If the automation has previously run successfully and no action has been taken by the Tenant Administrator to use the Qlik Reporting Service content roles:
If the Tenant Administrator has started to apply custom role control:
With the introduction of custom roles supporting the Qlik Reporting Service, the owner of an automation that calls the following Qlik Reporting Service blocks must have the appropriate platform and space roles that govern report execution for the specific app.
See Permissions | help.qlik.com for details.
The blocks:
The Qlik Reporting Service is a value-add service, and customers have long requested the ability to control who can use it and produce (metered) reports.
The release of custom roles to support the Qlik Reporting Service brings consistency to all use cases, ensuring that users executing a report have the appropriate tenant role and space permission on the application, and know what report data is being produced.
Does a Qlik Sense pivot table have a hard limit on its dimensions and measures?
Qlik Sense pivot tables have a limit of 1000 measures and 1000 dimensions.
Approaching this limit is not recommended. Managing measures and dimensions of this volume will become difficult and impractical.
Integration fails with the following error:
tap - CRITICAL 'search_prefix'
tap - Traceback (most recent call last):
tap - File "/code/tap-env/bin/tap-s3-csv", line 10, in <module>
tap - sys.exit(main())
tap - ^^^^^^
tap - File "/code/tap-env/lib/python3.12/site-packages/singer/utils.py", line 235, in wrapped
tap - return fnc(*args, **kwargs)
tap - ^^^^^^^^^^^^^^^^^^^^
tap - File "/code/tap-env/lib/python3.12/site-packages/tap_s3_csv/__init__.py", line 81, in main
tap - config['tables'] = validate_table_config(config)
tap - ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
tap - File "/code/tap-env/lib/python3.12/site-packages/tap_s3_csv/__init__.py", line 63, in validate_table_config
tap - table_config.pop('search_prefix')
tap - KeyError: 'search_prefix'
main - INFO Tap exited abnormally with status 1
main - INFO No tunnel subprocess to tear down
main - INFO Exit status is: Discovery failed with code 1 and error message: "'search_prefix'".
Improving the integration to gracefully handle the missing key when an update to the connection/config occurs is currently on the roadmap. The R&D team is working on this behavior and a minor version upgrade is expected down the line; however, there is currently no ETA.
If you encounter this error with your AWS S3 CSV integration, please reach out to Qlik Support for further assistance.
The issue occurs due to missing keys in the configuration. Specifically, an update to the connection settings removed or modified the search_prefix, resulting in the key being absent from the config expected by the integration.
Google Ads extractions fail with:
tap - CRITICAL 504 Deadline Exceeded
tap - grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
tap - status = StatusCode.DEADLINE_EXCEEDED
tap - details = "Deadline Exceeded"
tap - debug_error_string = "UNKNOWN:Error received from peer {grpc_status:4, grpc_message:"Deadline Exceeded"}"
tap - >
main - INFO Exit status is: Discovery succeeded. Tap failed with code 1 and error message: "504 Deadline Exceeded". Target succeeded.
This error is often transient and requires no action. If the error persists, review your Tables to Replicate settings and consider deselecting unneeded tables or columns. If the issue remains, please reach out to Qlik Support to discuss your use case and review the integration settings further.
The error "504 Deadline Exceeded" indicates that the extraction timed out due to a lack of response from the Google Ads API. By default, Stitch allows up to 15 minutes for a response before terminating the request.
Possible reasons the Google Ads API may exceed this threshold include:
Stitch Support frequently receives questions regarding the invocation of the PUBLIC role with Snowflake. When setting up a database user following Create a Stitch database and database user (Qlik Stitch Documentation), users will notice that the Stitch user executes GRANT statements on the PUBLIC role. This behavior can raise questions about role-based access and security implications within Snowflake.
Manually adjust permissions in Snowflake as needed. If you would prefer that Stitch offer a more streamlined approach, please submit a feature request. Refer to New Process for Submitting a Feature Request for All Talend Customers and Partners for details on how to do so.
By default, Stitch grants the PUBLIC role access to schemas and objects it creates in Snowflake. This behavior often raises questions from users who are concerned about broad access permissions.
The reason Stitch does this is because it cannot assume which specific roles or users in your organization should have access to the data. Granting access to the PUBLIC role ensures that Stitch can write data successfully without making assumptions about your internal role structure.
This default behavior is not a requirement from Snowflake itself, but rather a design decision by Stitch to simplify initial setup and avoid permission-related sync failures.
If this approach does not align with your organization’s security policies, you may manually revoke access from the PUBLIC role after the initial sync. However, this step must be repeated each time a new integration runs or a new schema is created, which may not be scalable.
Snowflake supports granular permission control via the REVOKE command, allowing you to adjust access as needed:
🔗 REVOKE <privileges> … FROM ROLE | docs.snowflake.com
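As an illustration, revocation might look like the following. This is a hedged sketch; the database and schema names are placeholders for the objects Stitch created in your account:

# Revoke Stitch-granted access from PUBLIC for an assumed integration schema
snowsql -q "REVOKE SELECT ON ALL TABLES IN SCHEMA STITCH_DB.STITCH_SCHEMA FROM ROLE PUBLIC;"
snowsql -q "REVOKE USAGE ON SCHEMA STITCH_DB.STITCH_SCHEMA FROM ROLE PUBLIC;"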
While this manual revocation process works, it requires ongoing attention. If tighter access control is a priority and manual intervention isn’t feasible, you may want to consider alternative destinations or workflows.
The Qlik Sense Repository Service logs performance counters and sets them up on startup. Unfortunately, the base counters are sometimes corrupt, so the start cannot proceed.
The error in the logs:
fatal exception during startup Cannot load Counter Name data because an invalid index '' was read from the registry.
at System.Diagnostics.PerformanceCounterLib.GetStringTable(Boolean isHelp)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
[...]
at System.Threading.ThreadHelper.ThreadStart() 3facc7ff-8275-4847-b3b7-338d32c4d1c5

This is the Repository Service checking a counter that does not exist.
Qlik Sense April 2019 will log information including instructions to resolve the issue. See the related Release Notes for ID QLIK-92800:
"Failed to initialize usage of Windows Performance Counters. Make sure that performance counters are enabled or try rebuilding them with "lodctr /R"
A workaround is to repair the Windows Performance Counters:
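Rebuilding typically looks like the following, run from an elevated Command Prompt. This is a sketch based on Microsoft's manual-rebuild guidance (on 64-bit systems the counters are rebuilt from both system folders):

REM Rebuild performance counters from an elevated Command Prompt
cd C:\Windows\System32
lodctr /R
cd C:\Windows\SysWOW64
lodctr /R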
Pitfalls:
If the command returns:
Error: Unable to rebuild performance counter setting from system backup store, error code is 5
then your prompt was not elevated. Elevate the command prompt with administrator permissions.
Ensure that the counters are not disabled in the registry
The counters may be disabled via registry settings. Please check the following registry locations to ensure that the counters have not been disabled.
HKLM\System\CurrentControlSet\Services\%servicename%\Performance
%servicename% represents any service with a performance counter. For example: PerfDisk, PerfOS, etc.
There may be registry keys for "DisablePerformanceCounters" in any of these locations. As per the following TechNet article, this value should be set to 0. If the value is anything other than 0 the counter may be disabled.
Disable Performance Counters
http://technet.microsoft.com/en-us/library/cc784382.aspx
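You can check a given service's counter state from the command line. A sketch using PerfOS as an example (the query simply errors out if the value has never been set, which is also fine):

REM Check whether performance counters are disabled for the PerfOS service
reg query "HKLM\SYSTEM\CurrentControlSet\Services\PerfOS\Performance" /v DisablePerformanceCounters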
ref. https://support.microsoft.com/en-us/help/2554336/how-to-manually-rebuild-performance-counters-for-windows-server-2008-6
Once done, return to Option 1
Your company might need to migrate its users from an old Active Directory domain to a new one. Sometimes usernames will also be renamed.
In some cases, it won't be possible to use the QMC to perform the migration of a document permission, due to users having the same name in the old and new domain.
If documents are being distributed using the QlikView Publisher functionality, then the DistributionDetail.xml can be edited to have the new and old domain and user names replaced.
Prior to doing this, ensure that a QVPR backup exists.
RecipientName="domain1\user1"
RecipientName="domain2\user2"