Search our knowledge base, curated by global Support, for answers ranging from account questions to troubleshooting error messages.
Qlik offers a wide range of channels to assist you in troubleshooting, answering frequently asked questions, and getting in touch with our technical experts. In this article, we guide you through all available avenues to secure your best possible experience.
For details on our terms and conditions, review the Qlik Support Policy.
Index:
We're happy to help! Here's a breakdown of resources for each type of need.
Support | Professional Services (*)
---|---
Reactively fixes technical issues as well as answers narrowly defined specific questions. Handles administrative issues to keep the product up-to-date and functioning. | Proactively accelerates projects, reduces risk, and achieves optimal configurations. Delivers expert help for training, planning, implementation, and performance improvement.
(*) Reach out to your Account Manager or Customer Success Manager.
Your first line of support: https://community.qlik.com/
Looking for content? Type your question into our global search bar:
Leverage the enhanced and continuously updated Knowledge Base to find solutions to your questions and best practice guides. Bookmark this page for quick access!
Subscribe to maximize your Qlik experience!
The Support Updates Blog
The Support Updates blog delivers important and useful Qlik Support information about end-of-product support, new service releases, and general support topics.
The Qlik Design Blog
The Design blog is all about products and Qlik solutions, such as scripting, data modelling, visual design, extensions, best practices, and more!
The Product Innovation Blog
By reading the Product Innovation blog, you will learn about what's new across all of the products in our growing Qlik product portfolio.
Q&A with Qlik
Live sessions with Qlik Experts in which we focus on your questions.
Techspert Talks
Techspert Talks is a free monthly webinar that facilitates knowledge sharing.
Technical Adoption Workshops
Our in-depth, hands-on workshops allow new Qlik Cloud Admins to build alongside Qlik Experts.
Qlik Fix
Qlik Fix is a series of short videos with helpful solutions for Qlik customers and partners.
Suggest an idea, and influence the next generation of Qlik features!
Search & Submit Ideas
Ideation Guidelines
Get the full value of the community.
Register a Qlik ID:
Incidents are supported through our Chat, by clicking Chat Now on any Support Page across Qlik Community.
To raise a new issue, all you need to do is chat with us. With this, we can:
Log in to manage and track your active cases in the Case Portal.
Please note: to create a new case, it is easiest to do so via our chat (see above). Our chat will log your case through a series of guided intake questions.
When creating a case, you will be prompted to enter problem type and issue level. Definitions shared below:
Select Account Related for issues with your account, licenses, downloads, or payment.
Select Product Related for technical issues with Qlik products and platforms.
If your issue is account related, you will be asked to select a Priority level:
Select Medium/Low if the system is accessible, but there are some functional limitations that are not critical to daily operations.
Select High if there are significant impacts on normal work or performance.
Select Urgent if there are major impacts on business-critical work or performance.
If your issue is product related, you will be asked to select a Severity level:
Severity 1: Qlik production software is down or not available, but not because of scheduled maintenance and/or upgrades.
Severity 2: Major functionality is not working in accordance with the technical specifications in documentation or significant performance degradation is experienced so that critical business operations cannot be performed.
Severity 3: Any error that is not a Severity 1 or Severity 2 issue. For more information, visit our Qlik Support Policy.
If you require a support case escalation, you have two options:
When other Support Channels are down for maintenance, please contact us via phone for high severity production-down concerns.
A collection of useful links.
Qlik Cloud Status Page
Keep up to date with Qlik Cloud's status.
Support Policy
Review our Service Level Agreements and License Agreements.
Live Chat and Case Portal
Your one stop to contact us.
Executing or modifying tasks (changing the owner, renaming an app) in the Qlik Sense Management Console and refreshing the page does not update the task status correctly. The issue affects the Content Admin and Deployment Admin roles.
The behaviour began after an upgrade of Qlik Sense Enterprise on Windows.
This issue can be mitigated, beginning with the August 2021 release, by enabling the QMCCachingSupport security rule.
Enable QmcTaskTableCacheDisabled. To do so, upgrade to the latest Service Release and disable the caching functionality in the capabilities.json file:
NOTE: Make sure to use lower case when setting values to true or false, as the capabilities.json file is case sensitive.
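For illustration, a minimal sketch of the entry is shown below; the exact surrounding structure of capabilities.json varies by release, so mirror the entries already present in your file:

{
    "flag": "QmcTaskTableCacheDisabled",
    "enabled": true
}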
Should the issue persist after applying the workaround/fix, contact Qlik Support.
Table of contents:
This article provides a step-by-step guide on building a write back solution with only native Qlik components and automations.
Content:
Disclaimer for reporting use cases: this solution could produce inconsistent results in reports produced with automations. When using the button to pass through selections, the intended report composition and associated data reduction for the report may not be achieved, because the session state of Qlik Application Automation cannot be transferred to the report composition definition that is passed to the Qlik Reporting Service.
When analyzing results in a Qlik Sense app, you may spot a mistake in your data or something that seems odd. To address this, you may want someone from your team to investigate, or you may want to update data in your source systems directly without leaving Qlik. Or maybe your data is fine, but you want to add a new record from within Qlik without having to open your business application. These scenarios fit the following use cases:
This is the least intrusive form of writing back, as it delegates the change to someone in your data team. The idea is that you create a ticket in a task management system like Jira or ServiceNow. Someone from your team then picks up the ticket, investigates your comment, and reviews the data. The difference from sending an alert or email is that the ticketing system guarantees the request is tracked.
Another option to communicate changes is to write a comment or a tag for one or more records directly to the source system. This could be a comment on a deal record in your CRM or it could be stored in a separate database table if you're loading data from a database.
The final use case allows for updating records directly from within the sheet. Make sure you know who has access to the button before setting this up since this will allow users to change records directly.
All the above use cases can be realized in the same way: by configuring a native Qlik Sense button in your sheet to run an automation. Before you start this tutorial, make sure you already have an app and a new, empty automation. The tutorial has two parts:
To configure the app, we'll use the following native Qlik Sense components:
Steps:
Enable the "Show notifications" toggle, this will send a toast notification back to the user in the sheet after the automation completes. Feel free to increase the duration.
Tip: using a Container component will allow your variable inputs and button to scale better for smaller screens.
Upon automation run, this will resolve to the first text value selected for the field hs_object_id (which corresponds to the deal ID from HubSpot). To update this to a comma-separated list of IDs, the mapping must first be changed to output a list of all values for hs_object_id. To do this, toggle the formula parsing:
Bonus: add a link to the toast notification
Instead of showing a plain message in the toast notification, it's also possible to include a link to point the user to a certain resource. This can be done by configuring the Update Run Title block with the following snippet:
{"message":"Ticket created", "url": "https://<link to jira ticket>"}
Depending on the button's configuration and the automation run mode, use either the Update Run Title block or the Output block to show the toast notification.
See the below table for each option:
Run mode configuration in the automation | Run mode in the button | Block for toast notification | Who can see the notification |
Triggered async | Triggered | Update Run Title | Automation owner only |
Triggered sync | Triggered | Output | Everyone |
Triggered sync | Not triggered | Update Run Title | Automation owner only |
The run mode in the button can be configured by toggling the 'Include Selections' option in the button's settings:
The run mode in the automation can be configured here in the Start block:
After writing back to your source systems, you'll want to do a reload to see your changes reflected in the app. Be mindful of the impact of doing these reloads. If multiple people are using this button at the same time, you don't want to do a reload for each update.
Problems:
Improvements:
The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
The BigQuery component encountered an error, which reads:
"BigQueryException: Connection has closed: javax.net.ssl.SSLException: Connection reset."
The error "BigQueryException: Connection has closed: javax.net.ssl.SSLException: Connection reset" typically signifies that the network connection to BigQuery was unexpectedly terminated. This could be due to various network issues, such as transient disruptions or restrictive network configurations like firewalls or proxies that terminate idle connections.
Here are some strategies to manage and potentially resolve this issue:
Implement a Retry Mechanism:
Use a retry mechanism (tBigQueryXxx-->onSubjobError-->) with exponential backoff to handle connection resets gracefully. When a connection reset occurs, catch the exception, wait for a progressively increasing interval, and then retry the operation. This is especially useful for transient network issues.
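In Talend, this pattern is wired in the Job design itself (for example, routing onSubjobError back into a retry subjob). As a language-level illustration only, the following standalone Java sketch shows the backoff logic; the runQuery() method and the retry limits are hypothetical placeholders, not Talend API:

// Minimal exponential-backoff sketch; runQuery() is a hypothetical
// stand-in for the BigQuery operation that may fail on a connection reset.
public class BackoffRetry {
    private static final int MAX_RETRIES = 5;

    public static void run() throws InterruptedException {
        long waitMs = 1000L; // initial back-off interval
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            try {
                runQuery();   // hypothetical BigQuery call
                return;       // success: stop retrying
            } catch (RuntimeException e) {
                if (attempt == MAX_RETRIES) {
                    throw e;  // out of retries: surface the error
                }
                Thread.sleep(waitMs); // wait before the next attempt
                waitMs *= 2;          // double the interval each time
            }
        }
    }

    private static void runQuery() { /* placeholder for the actual call */ }
}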
Review Firewall and Proxy Configurations:
Ensure that all firewalls and proxy servers within your network path are properly configured to permit long-lived connections necessary for your BigQuery operations. These systems may be closing connections that remain idle for an extended period.
Batch Processing with Pagination:
Instead of attempting to load all results at once, consider breaking your query into smaller chunks. You can modify your query to retrieve subsets of the data and process each subset separately. This approach limits the impact of any single query failure.
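As a sketch (BigQuery standard SQL; the project, dataset, table, and key column names are illustrative assumptions), each chunk can be fetched with a bounded query:

-- Fetch rows in fixed-size pages, ordered by a stable key (names illustrative)
SELECT *
FROM `my_project.my_dataset.my_table`
ORDER BY id
LIMIT 100000 OFFSET 0;      -- page 1

SELECT *
FROM `my_project.my_dataset.my_table`
ORDER BY id
LIMIT 100000 OFFSET 100000; -- page 2, and so on

For large tables, filtering on a key range (WHERE id > last_seen_id) generally scales better than large OFFSET values, since each page then scans less data.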
You might encounter the error "Failed to connect to server The SSL connection could not be established, see inner exception." when trying to connect to a REST API endpoint using the Qlik REST Connector in Qlik Cloud, while the same connection may still work on an on-premise Qlik Sense version:
Unexpected connection errors of this kind in Qlik Cloud can be caused by an incompatibility between the REST endpoint's ciphers and the .NET 5.0 version used in Qlik Cloud.
We therefore recommend checking which ciphers your endpoint requires and comparing them with the default ciphers for .NET 5.0. In case of an incompatibility, we suggest upgrading the ciphers on the endpoint or using Qlik DataTransfer.
The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
The Qlik Sense Desktop or Server installation fails with:
INSTALLATION FAILED
AN ERROR HAS OCCURRED
For detailed information see the log file.
The installation logs (How to read the installation logs for Qlik products) will read:
Error 0x80070643: Failed to install MSI package.
Error 0x80070643: Failed to configure per-user MSI package.
Detected failing msi: DemoApps
Error 0x80070643: Failed to install MSI package.
Error 0x80070643: Failed to configure per-user MSI package.
Detected failing msi: SenseDesktop
Applied execute package: SenseDesktop, result: 0x80070643, restart: None
Error 0x80070643: Failed to execute MSI package.
ProgressTypeInstallation
Starting rollback execution of SenseDesktop
CAQuietExec: Entering CAQuietExec in C:\WINDOWS\Installer\MSIE865.tmp, version 3.10.2103.0
CAQuietExec: "powershell" -NoLogo -NonInteractive -InputFormat None
CAQuietExec: Error 0x80070002: Command failed to execute.
CAQuietExec: Error 0x80070002: QuietExec Failed
CAQuietExec: Error 0x80070002: Failed in ExecCommon method
CustomAction CA_ConvertToUTF8 returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox)
MSI (s) (74:54) [10:44:37:941]: Note: 1: 2265 2: 3: -2147287035
MSI (s) (74:54) [10:44:37:942]: User policy value 'DisableRollback' is 0
MSI (s) (74:54) [10:44:37:942]: Machine policy value 'DisableRollback' is 0
Action ended 10:44:37: InstallFinalize. Return value 3.
The Failed to install MSI package error can have a number of different root causes.
Dependencies may be missing. Install the latest C++ redistributable.
This issue may occur if the MSI software update registration has become corrupted, or if the .NET Framework installation on the computer has become corrupted (source: Microsoft, KB976982).
Repair or reinstall the .NET Framework.
How to troubleshoot:
Option 1: verify that the Path environment variable includes Windows PowerShell. The CAQuietExec errors above (0x80070002 translates to "file not found") indicate that the installer could not launch powershell. A healthy Path looks like this:
********************************************
********* Environment Variables **********
********************************************
.......
Path=C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;C:\Users\administrator\AppData\Local\Microsoft\WindowsApps;C:\WINDOWS\system32\inetsrv\
Install the Web Platform Installer Extension that includes the latest components of the Microsoft Web Platform, including Internet Information Services (IIS), SQL Server Express, .NET Framework and Visual Studio.
More information about the tool on the Microsoft page.
Verify if there are pending Windows updates. Complete them and try again.
The installation may fail if the installer is being executed from a network drive. Copy the installer to your local drive.
The Qlik Cloud DynamoDB connector is not able to process and list tables on the Data Preview screen, when DynamoDB tables have secondary indexes added.
The issue is caused by an index applied to non-existent columns or by indexes that are wrongly defined. It is not reproducible with correctly applied indexes on valid columns and valid column data types.
A new Qlik Cloud DynamoDB connector will be released with an updated DynamoDB driver, which intends to fix the issue when setting up a connection between Qlik Cloud and AWS DynamoDB.
Information provided on this defect is given as-is at the time of documenting. For up-to-date information, please review the most recent Release Notes, or contact Support with the ID QB-26946 for reference.
Remove any index that is applied on non-existent columns or that is wrongly defined.
The Qlik team is actively working with DynamoDB and will release the new DynamoDB connector with an updated DynamoDB driver.
Product Defect ID: QB-26949
{error":"Migration failed, please see migraion logs for more details.","returnCode":1}
Upon reviewing the migration logs, the following error is noted:
Can't create Quartz tables: org.hibernate.exception.SQLGrammarException: could not execute statement
database schema migration failed.
javax.persistence.PersistenceException: org.hibernate.exception.SQLGrammarException: could not execute statement
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:154)
at org.talend.migration.quartz.QuartzMigrationUtils.<init>(QuartzMigrationUtils.java:79)
at org.talend.migration.TalendMigrationApplication.call(TalendMigrationApplication.java:320)
at org.hibernate.exception.internal.SQLStateConversionDelegate.convert(SQLStateConversionDelegate.java:103)
org.hibernate.engine.query.spi.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:10)
at org.hibernate.internal.SessionImpl.executeNativeUpdate(SessionImpl.java:1509)
at
Caused by: org.postgresql.util.PSQLException: ERROR: relation "qrtz_job_details" already exists
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2725)
Note: Quartz can store job and scheduling information in a relational database, and Quartz can automatically create its tables when initialize-schema is enabled.
Verify the list of database tables and rename or drop any tables that contain the prefix "qrtz" in their names.
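For example, on PostgreSQL (matching the PSQLException above; the _old suffix is just an illustration), the leftover tables can be listed and renamed as follows:

-- List leftover Quartz tables in the current database
SELECT tablename FROM pg_tables WHERE tablename LIKE 'qrtz%';

-- Rename each one out of the way before re-running the migration
ALTER TABLE qrtz_job_details RENAME TO qrtz_job_details_old;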
Migrating database X to database Y
What does each table for quartz scheduler signify?
Talend Cloud platform provides computational capabilities that allow organizations to securely run data integration processes natively from cloud to cloud, on-premises to cloud, or cloud to on-premises environments.
These capabilities are powered by compute resources, commonly known as Engines. This article covers the four basic types.
Content:
A Cloud Engine is a compute resource managed by Talend in Talend Cloud that executes Job tasks.
A Remote Engine is a capability in Talend Cloud platform that allows you to securely run data integration Jobs natively from cloud to cloud, on-premises to cloud, or cloud to on-premises, entirely within your own environment for enhanced performance and security, without transferring the data through the Cloud Engines in Talend Cloud platform.
It is a Java-based runtime (similar to a Cloud Engine) that executes Talend Jobs on-premises or on another cloud platform that you control.
A Remote Engine Gen2 is a secure execution engine on which you can safely execute data pipelines (that is, data flows designed using Talend Pipeline Designer). It allows you to have control over your execution environment and resources because you can create and configure the engine in your own environment (Virtual Private Cloud or on-premises). Previously referred to as Remote Engines for Pipelines, this engine was renamed Remote Engine Gen2 during H1/2020. It is a Docker-based runtime to execute data pipelines on-premises or on another cloud platform that you control.
A Remote Engine Gen2 ensures:
Cloud Engine for Design is a built-in runner that allows you to easily design pipelines without setting up any processing engines. With this engine you can run two pipelines in parallel. For advanced processing of data, Talend recommends installing the secure Remote Engine Gen2.
The following table lists a comparative perspective between the two engines:
Cloud Engine (CE) | Remote Engine (RE)
---|---
Consumes 45,000 engine tokens | Consumes 9,000 engine tokens
Runs within Talend Cloud platform – no download required | Downloadable software from Talend Cloud platform
Managed by Talend, run on-demand as needed to execute Jobs | Managed by the customer
No customer resources required | Customer can run on Windows, Linux, or OS X
Set physical specifications (Memory, CPU, Temp Disk Space) | Unlimited Memory, CPU, and Temp Space
Require data sources/targets to be visible through the internet to the Cloud Engine | Hybrid cloud or on-premises data sources
Restricted to three concurrent Jobs | Unlimited concurrent Jobs (default three)
Available within Talend Cloud portal | Available in AWS and Azure Marketplace
Runs natively within Talend Cloud iPaaS infrastructure | Uses HTTPS calls to Talend Cloud service to get configuration information and Job definition and schedules
Cloud Engine for Design (CE4D) | Remote Engine Gen2 (REG2)
---|---
Consumes zero engine tokens | Consumes 9,000 engine tokens
Built upon a Docker Compose stack | Built upon a Docker Compose stack
Available as a Cloud Image, instantiated in Talend Cloud platform on behalf of the customer | Available as an AMI CloudFormation Template (for AWS) and an Azure Image (for Azure)
Not available as downloadable software, as this engine type is only suitable for design using Pipeline Designer in Talend Cloud portal | Available as .zip or .tar.gz (for local deployment)
Included with Talend Cloud platform to offer a serverless experience during design and testing. However, it is not meant for production (that is, not for running pipelines in non-development environments); it won't scale for production-size volumes and long-running pipelines. It should be used by design teams to get a preview working and to test execution during development. | Used to run artifacts, tasks, preparations, and pipelines in the cloud, as well as to create connections and fetch data samples
Static IPs cannot be enabled for CE4D within Talend Management Console | Not applicable, as REG2 runs outside Talend Management Console (that is, in the customer data center)
Additional engines (CE or RE) may be required if you have one or more of the following use cases:
These use cases depend on the deployment architecture in the specific customer environment and on the layout of the Remote Engines at the environment or workspace level. They require proper capacity planning and automatic horizontal and vertical scaling of the compute engines.
Question | Guideline
---|---
How much data must be transferred per hour? | Each Cloud Engine can transfer 225 GB per hour.
How many separate flows can run in parallel? | Each Cloud Engine can run up to three flows in parallel.
How much temporary disk space is needed? | Each Cloud Engine has 200 GB of temp space.
How CPU and memory intensive are the flows? | Each Cloud Engine provides 8 GB of memory and two vCPU. This is shared among any concurrent flows.
Are separate execution environments required? | Many users desire separate execution for QA/Test/Development and Production. If this is needed, additional Cloud Engines should be added as required.
If a source or target system is not accessible through the internet:
If one of the systems is not accessible using the internet, then a Remote Engine is needed.
When single flow requirements exceed the capacity of a Talend Cloud Engine:
If the Cloud Engine is too small for a single flow (for example, the maximum memory of 5.25 GB, temporary space of 200 GB, two vCPU, or the maximum of 225 GB per hour is exceeded), then a Remote Engine is needed.
If a native driver is required:
If the solution requires a native driver that is not part of the Talend action or generated Job code (typical cases are SAP with the JCo v3 library and MS SQL Server Windows Authentication), then a Remote Engine is needed.
Data jurisdiction, security, or compliance reasons:
It may be desirable or required to retain data in a particular region or country for data privacy reasons. The data being processed may be subject to regulations such as PCI or HIPAA, or it may be more efficient to process the data within a single data center or public cloud location. These are all valid reasons to use a Remote Engine.
Cloud Engine (CE) | Remote Engine (RE) | Remote Engine Gen2 (REG2)
---|---|---
Cloud Engines allow you to run batch tasks that use on-premises or cloud applications and datasets (sources, targets) | Remote Engines allow you to run batch tasks or microservices (APIs or Routes) that use on-premises or cloud applications and datasets (sources, targets) | The Remote Engine Gen2 is used to run artifacts, tasks, preparations, and pipelines in the cloud, as well as creating connections and fetching data samples
Consumes 45,000 engine tokens | Consumes 9,000 engine tokens | Consumes 9,000 engine tokens
No download required - Runs within Talend Cloud platform | Downloadable software from Talend Cloud platform | Downloadable software from Talend Cloud platform
Managed by Talend, run on-demand as needed to execute Jobs | Managed by the customer | Managed by the customer
No customer resources required | Can run on Windows, Linux, or OS X | Require compatible Docker and Docker compose versions for Linux, Mac, and Windows
Set physical specifications (Memory, CPU, and Temp Disk Space) | Unlimited Memory, CPU, and Temp Space | Unlimited Memory, CPU, and Temp Space
Require data sources/targets to be visible through the internet to the Cloud Engine | Hybrid cloud or on-premises data sources | Hybrid cloud or on-premises data sources
Restricted to three concurrent Jobs | Unlimited concurrent Jobs (default three) | Unlimited concurrent pipelines (configurable)
Available within Talend Cloud portal | Available in AWS and Azure Marketplace | Available as an AMI Cloud Formation Template (for AWS) and Azure Image (for Azure)
Runs natively within Talend Cloud iPaaS infrastructure | Uses HTTPS calls to Talend Cloud service to get configuration information and Job definition and schedules | Uses HTTPS calls to Talend Cloud service to get configuration information and pipeline definition and schedules
Talend Help Center documentation:
When synchronizing Qlik Sense with Active Directory, you may encounter an error message saying "the User Directory Connector (UDC) is not configured, because the following error occurred: Setting up connection to LDAP root node failed. Check log file"
This often indicates a logon failure.
A common cause is an incorrect username and/or password.
The HubSection_Home resource filter in Qlik Sense refers to the button which allows a user to navigate back to the Hub from inside of an application.
Default ruleset:
If an administrator wants to disable this functionality for their users (for example, if the application is embedded into another page), they will want to disable the default rule named HubSections.
The result with this rule disabled is as follows for the end user:
The result of this change will disable this functionality for all users. If an administrator wants to provide this functionality to a select set of users, then the administrator can create a new rule in this schema:
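As a sketch only (the group name PowerUsers is a hypothetical example; adapt the condition to your own user attributes), such a rule could look like:

Resource filter: HubSection_Home
Actions: Read
Conditions: user.group = "PowerUsers"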
When using a Salesforce source endpoint, especially the Salesforce (Incremental Load) source endpoint, all UPDATE operations on the table "UserRole" are treated as INSERT operations. This leads to duplicate IDs in the target table during the CDC processing stage.
Set the task to UPSERT mode with Apply Conflicts set to Update the existing target record and Insert the missing target record. For more information see Apply Conflicts.
Note: this workaround applies to Apply Changes mode; if Store Changes is enabled, duplicate IDs will still be present in the __ct table.
71 objects are missing the "CreatedDate" system field (including the tables "AccountShare", "UserLogin", "UserRole", and similar). Because of this, Qlik Replicate cannot identify whether a change is an insert or an update, so both INSERT and UPDATE are converted to INSERT operations for these tables.
00294581
A task containing a tMysqlOutput component, which performs insert/update operations, has been blocked/suspended with a PAGEIOLATCH_SH wait type and has been pending for several hours.
Common reasons for an excessive PAGEIOLATCH_SH wait type are:
To resolve the issue of high PAGEIOLATCH_SH wait type, you can check the following:
Keep in mind that with high-safety mirroring or synchronous-commit availability mode in an AlwaysOn AG, increased or excessive PAGEIOLATCH_SH waits can be expected.
Based on the SQL query below, we found avg_fragmentation_in_percent showing 90%+, which means the index is badly maintained.
USE DBName;
GO
-- Find the average fragmentation percentage of all indexes
-- in the HumanResources.Employee table.
SELECT a.index_id, name, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID(N'DBName'),
OBJECT_ID(N'dbo.TableName'), NULL, NULL, NULL) AS a
JOIN sys.indexes AS b
ON a.object_id = b.object_id AND a.index_id = b.index_id;
GO
Detecting Fragmentation
The first step in deciding which defragmentation method to use is to analyze the index to determine the degree of fragmentation. By using the system function sys.dm_db_index_physical_stats, you can detect fragmentation in a specific index, all indexes on a table or indexed view, all indexes in a database, or all indexes in all databases. For partitioned indexes, sys.dm_db_index_physical_stats also provides fragmentation information for each partition.
The result set returned by the sys.dm_db_index_physical_stats function includes the following columns.
Column | Description |
---|---|
avg_fragmentation_in_percent | The percent of logical fragmentation (out-of-order pages in the index) |
fragment_count | The number of fragments (physically consecutive leaf pages) in the index |
avg_fragment_size_in_pages | Average number of pages in one fragment in an index |
avg_fragmentation_in_percent value | Corrective statement |
---|---|
> 5% and < = 30% | ALTER INDEX REORGANIZE |
> 30% | ALTER INDEX REBUILD WITH (ONLINE = ON)* |
* Rebuilding an index can be executed online or offline. Reorganizing an index is always executed online. To achieve availability similar to the reorganize option, you should rebuild indexes online.
These values provide a rough guideline for determining the point at which you should switch between ALTER INDEX REORGANIZE and ALTER INDEX REBUILD. However, the actual values may vary from case to case. It is important that you experiment to determine the best threshold for your environment. Very low levels of fragmentation (less than 5 percent) should not be addressed by either of these commands because the benefit from removing such a small amount of fragmentation is almost always vastly outweighed by the cost of reorganizing or rebuilding the index.
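For example (T-SQL; the index and table names are illustrative), the two corrective statements from the table above are issued like this:

-- Fragmentation > 5% and <= 30%: reorganize (always online)
ALTER INDEX IX_Example ON dbo.TableName REORGANIZE;

-- Fragmentation > 30%: rebuild, online where the edition supports it
ALTER INDEX IX_Example ON dbo.TableName REBUILD WITH (ONLINE = ON);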
In general, fragmentation on small indexes is often not controllable. The pages of small indexes are sometimes stored on mixed extents. Mixed extents are shared by up to eight objects, so the fragmentation in a small index might not be reduced after reorganizing or rebuilding the index.
A task using ODBC as a target fails after an upgrade to 2023.5 or later versions:
Errors 00001888: 2024-09-09T10:31:56:332057 [TARGET_LOAD ]E: Failed (retcode -1) to execute statement: CREATE SCHEMA "CLILIBF" [1022502] (ar_odbc_stmt.c:5082)
00001888: 2024-09-09T10:31:56:332057 [TARGET_LOAD ]E: RetCode: SQL_ERROR SqlState: 42000 NativeError: -552 Message: [IBM][System i Access ODBC Driver][DB2 for i5/OS]SQL0552 - Not authorized to CREATE DATABASE. [1022502] (ar_odbc_stmt.c:5090)
Before the upgrade, the tasks may have encountered the same error but could be run using a Full Load.
To resolve the issue, modify an internal parameter in the endpoint settings and set a provider syntax.
Replicate Upgrade. Oracle to ODBC(iSeries) on 2023.5 will not do a Full Load.
QB-29117
Question I
What are the compatible operating systems?
The compatible operating systems can be found in the article: compatible-operating-systems.
Please check the section: Talend Remote Engine.
Question II
What are the compatible Java environments?
Java 8, Java 11, and Java 17 can be used for task executions. By default, Talend Remote Engine uses Java 17 to run tasks. The compatible Java environments can be found in the article: launching-talend-cloud-remote-engine-for-aws-via-cloudformation.
Please check the section: Procedure ⇒ Step 10.
Furthermore, there are no differences whether Talend Cloud Remote Engine for AWS is launched using CloudFormation or an AMI.
Question III
Where is the Remote Engine installed?
It is installed in the following directory.
/opt/talend/ipaas/remote-engine-client/
Question IV
Where can I find the settings file to make parameter changes?
The settings file is located in the following directory.
/opt/talend/ipaas/remote-engine-client/etc/
Question V
Where can I find further details on Talend Cloud Remote Engine for AWS?
Please refer to Qlik Talend Documentation: talend-cloud-remote-engine-for-aws.
For general advice on how to troubleshoot Qlik Replicate latency issues, see Troubleshooting Qlik Replicate Latency and Performance Issues.
If your task shows latency issues, one of the first things to do is set the PERFORMANCE logging component to Trace, run the task you identified for five to ten minutes, and review the resulting task log.
We advise you to:
This will list all available latency information. We can now identify a trend.
Remember, Target latency = Source latency + Handling latency.
[PERFORMANCE ]T: Source latency 0.00 seconds, Target latency 0.00 seconds, Handling latency 0.00 seconds (replicationtask.c:3703)
The source, target, and handling latency are all at 0 seconds.
[PERFORMANCE ]T: Source latency 7634.89 seconds, Target latency 7634.89 seconds, Handling latency 0.00 seconds (replicationtask.c:3793)
[PERFORMANCE ]T: Source latency 7663.00 seconds, Target latency 7663.00 seconds, Handling latency 0.00 seconds (replicationtask.c:3793)
[PERFORMANCE ]T: Source latency 7690.12 seconds, Target latency 7693.12 seconds, Handling latency 3.00 seconds (replicationtask.c:3793)
[PERFORMANCE ]T: Source latency 7710.25 seconds, Target latency 7723.25 seconds, Handling latency 13.00 seconds (replicationtask.c:3793)
The source latency is higher than the handling latency. In the last line above, for example, the target latency of 7723.25 seconds is the source latency of 7710.25 seconds plus the handling latency of 13.00 seconds. The key point is to look at the handling latency; it must be lower than the source latency.
Cause:
If the source latency decreases during your monitoring, it is a good sign that the latency will recover; if it increases, review the causes mentioned above and resolve any outstanding source issues. You will want to consider reloading the task.
[PERFORMANCE ]T: Source latency 2.05 seconds, Target latency 7116.05 seconds, Handling latency 7114.00 seconds (replicationtask.c:3793)
[PERFORMANCE ]T: Source latency 2.77 seconds, Target latency 7150.77 seconds, Handling latency 7148.00 seconds (replicationtask.c:3793)
[PERFORMANCE ]T: Source latency 2.16 seconds, Target latency 7182.16 seconds, Handling latency 7180.00 seconds (replicationtask.c:3793)
The target latency is higher than the source latency.
Cause:
If the target latency continues to increase, consider reloading the task.
Identifying whether you are looking at handling latency or target latency can be tricky. When the task has target latency, the queue is blocked, so the handling latency will automatically be higher as well (remember: Target latency = Source latency + Handling latency).
The key point to decide if it is handling latency is to check if there are a lot of swap files saved in the sorter folder inside the task folder of the Qlik Replicate server.
In addition, if the task log shows the handling latency jumping from 0 seconds (or a low number) to a much higher value in a very short time when the task is resumed, this can be clearly identified as handling latency:
2023-05-10T08:21:02:537595 [PERFORMANCE ]T: Source latency 5.54 seconds, Target latency 5.54 seconds, Handling latency 0.00 seconds (replicationtask.c:3788)
2023-05-10T08:21:32:610230 [PERFORMANCE ]T: Source latency 4.61 seconds, Target latency 55363.61 seconds, Handling latency 55359.00 seconds (replicationtask.c:3788)
This log shows handling latency increased from 0 seconds to 55359 seconds after only 30 seconds of a task's runtime. This is because Qlik Replicate will read all the swap files into memory when the task is resumed. In this situation, you need to reload the task or resume the task from a timestamp or stream position.
After a DB2 LUW upgrade from 11.1 to 11.5, Qlik Replicate tasks which read from the upgraded DB2 LUW Source Endpoint fail with the error:
[SOURCE_CAPTURE ]I: Error reading log buffer (db2luw_endpoint_proc.c:679)
[SOURCE_CAPTURE ]E: Error at 'Reading log records': Unexpected Error. Original SQLCODE -1263: ' message SQL1263N The archive log file "S0534570.LOG" is not a valid log file for
database "R3P" on database partition "0" and log stream "0".
Perform a reload of the Qlik Replicate Task(s) and Table(s) which use the DB2 LUW Source Endpoint.
Upgrading a DB2 LUW environment changes the DB2 Transaction log format.