Extracting data to Qlik Sense Enterprise on Windows from an SAP BW InfoProvider / ADSO with two values in a WHERE clause fails with an error similar to the following:
Data load did not complete
Data has not been loaded. Please correct the error and try loading again.
When only one field value is used in the WHERE clause, the load works as expected.
When using Google Cloud Storage as a target in Qlik Replicate, and the target File Format is set to Parquet, an error may occur if the incoming data contains invalid values.
This happens because the Parquet writer validates data during the CSV-to-Parquet conversion. A typical error looks like:
[TARGET_LOAD ]E: Failed to convert file from csv to parquet
Error:: failed to read csv temp file
Error:: std::exception [1024902] (file_utils.c:899)
There are two possible solutions:
In this case, the source is SAP Oracle, and a few rare rows contained invalid date values. Example: 2023-11-31.
By enabling the internal parameters keepCSVFiles and keepErrorFiles in the target endpoint (both set to TRUE), you can inspect the generated CSV files to identify which rows contain invalid data.
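If the source is Oracle (as in this case), you can also check the source table directly. The following is a minimal sketch only: your_table and your_date_col are placeholder names, and VALIDATE_CONVERSION requires Oracle 12.2 or later.
SELECT ROWID, TO_CHAR(your_date_col, 'YYYY-MM-DD') AS stored_value
FROM your_table
WHERE your_date_col IS NOT NULL
  AND VALIDATE_CONVERSION(TO_CHAR(your_date_col, 'YYYY-MM-DD') AS DATE, 'YYYY-MM-DD') = 0;
-- Rows returned hold date values (such as 2023-11-31) that cannot be round-tripped as valid dates.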
00417320
Recent versions of Qlik connectors have an out-of-the-box value of 255 for their DefaultStringColumnLength setting.
This means that, by default, any string containing more than 255 characters is truncated when imported from the database.
To import longer strings, specify a higher value for DefaultStringColumnLength.
This can be done in the connection definition, under Advanced Properties, as shown in the example below.
The maximum value that can be set is 2,147,483,647.
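For example, to allow strings of up to 4096 characters, add the property under the connection's Advanced Properties (a sketch only; the exact property name and placement may vary by connector and version):
Name: DefaultStringColumnLength
Value: 4096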
How does Qlik Replicate convert DB2 commit timestamps to Kafka message payload, and why are we seeing a lag of several hours?
Qlik Geocoding operates using two Qlik GeoAnalytics operations: AddressPointLookup and PointToAddressLookup.
Two frequently asked questions are:
The Qlik Geocoding add-on option requires an Internet connection. It is, by design, an online service. You will be using Qlik Cloud (https://ga.qlikcloud.com), rather than your local GeoAnalytics Enterprise Server.
See the online documentation for details: Configuring Qlik Geocoding.
This article outlines how to handle DDL changes on a SQL Server table as part of the publication.
The steps in this article assume you use the task's default settings: full load and apply changes are enabled, full load is set to drop and recreate target tables, and DDL Handling Policy is set to apply alter statements to the target.
To achieve something simple, such as increasing the length of a column (without changing the data type), run an ALTER TABLE command on the source while the task is running, and it will be pushed to the target.
For example:
ALTER TABLE dbo.address ALTER COLUMN city varchar(70);
To make more complicated changes to the table, such as:
Follow this procedure:
When connecting to Microsoft OneDrive using either Qlik Cloud Analytics or Qlik Sense Enterprise on Windows, shared files and folders are no longer visible.
While the endpoint may intermittently work as expected, it is in a degraded state until November 2026 (see drive: sharedWithMe (deprecated) | learn.microsoft.com), and in most cases it is no longer accessible due to this publicly documented degraded state.
Qlik is actively reviewing the situation internally (SUPPORT-7182).
However, given that the MS API endpoint has been deprecated by Microsoft, a Qlik workaround or solution is not certain or guaranteed.
Use a different type of shared storage, such as mapped network drives, Dropbox, or SharePoint.
Microsoft deprecated the /me/drive/sharedWithMe API endpoint.
SUPPORT-7182
Extracting data from an SAP BW InfoProvider / ADSO with two values in a WHERE clause returns 0 lines.
Example:
The following is a simple standard ADSO with two InfoObjects (‘0BPARTNER’, ‘/BIC/TPGW8001’) and one field (‘PARTNUM’). All are of the CHAR data type.
In the load script, we used [PARTNUM] = A, [PARTNUM] = B in the WHERE clause.
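That is, the failing statement looks like the following (reconstructed from the working examples shown below):
From GTDIPCTS2
Where ([PARTNUM] = A, [PARTNUM] = B);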
However, when using only one value of the field [PARTNUM] in the WHERE clause, it works as expected:
From GTDIPCTS2
Where ([PARTNUM] = A);
When using the InfoObject [TPGW8001] instead of the field [PARTNUM] in the WHERE clause, it also works as expected:
From GTDIPCTS2
Where ([TPGW8001] = “1000”, [TPGW8001] = “1000”);
Upgrade to Direct Data Gateway version 1.7.8.
Defect SUPPORT-5101
SUPPORT-5101
When using Google Cloud Pub/Sub as a target and configuring Data Message Publishing to Separate topic for each table, the Pub/Sub topic may be unexpectedly dropped if a DROP TABLE DDL is executed on the source. This occurs even if the Qlik Replicate task’s DDL Handling Policy When source table is dropped is set to Ignore DROP.
This issue has been fixed in the following builds:
To apply the fix, upgrade Qlik Replicate to one of the listed versions or any later release.
A product defect in versions earlier than 2025.5 SP3 causes the Pub/Sub topic to be dropped despite the DDL policy configuration.
A Job design is presented below:
tSetKeystore: set the Kafka truststore file.
tKafkaConnection, tKafkaInput: connect to the Kafka cluster as a consumer and transmit messages.
However, while running the Job, an exception occurs in the tKafkaInput component.
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
Make sure to execute the tSetKeystore component prior to the Kafka components so that the Job can locate the certificates required for the Kafka connection. To achieve this, connect the tSetKeystore component to tKafkaConnection using an OnSubjobOK link, as demonstrated below:
For more detailed information on trigger connectors, specifically OnSubjobOK and OnComponentOK, please refer to this KB article: What is the difference between OnSubjobOK and OnComponentOK?.
This article addresses the error encountered during extraction when using log-based incremental replication for MySQL integration:
[main] tap-hp-mysql.sync-strategies.binlog - Fatal Error Occurred - <ColumnName> - decimal SQL type for value type class clojure.core$val is not implemented.
There are two recommended approaches:
Option 1: Enable Commit Order Preservation
Run the following command in your MySQL instance:
SET GLOBAL replica_preserve_commit_order = ON;
Then, reset the affected table(s) through the integration settings.
Option 2: Validate Replication Settings
Ensure that either:
replica_preserve_commit_order (MySQL 8.0+), or
slave_preserve_commit_order (older versions)
is enabled. These settings maintain commit order on multi-threaded replicas, preventing gaps and inconsistencies.
Run:
SHOW GLOBAL VARIABLES LIKE 'replica_preserve_commit_order';
Expected Output:
| Variable_name | Value |
| replica_preserve_commit_order | ON |
For older versions:
SHOW GLOBAL VARIABLES LIKE 'slave_preserve_commit_order';
For more information, see the MySQL documentation:
replication-features-transaction-inconsistencies | dev.mysql.com
Why does this happen?
When using log-based incremental replication, Stitch reads changes from MySQL’s binary log (binlog). This error occurs because the source database provides events out of order, which leads to mismatched data types during extraction. In this case, the extraction encounters a decimal SQL type where the value type is unexpected.
This article explains whether changing integration credentials or the host address for a database integration requires an integration reset in Stitch. It will also address key differences between key-based incremental replication and log-based incremental replication.
Updating credentials (e.g., username or password) does not require an integration reset. Stitch will continue replicating data from the last saved bookmark values for your tables according to the configured replication method.
Changing the host address is more nuanced and depends on the replication method:
Important:
If the database name changes, Stitch treats it as a new database:
| Change Type | Key-Based Replication | Log-Based Replication |
| Credentials | No reset required | No reset required |
| Host Address | No reset (if search path unchanged) | Reset required |
| Database Name | Reset required | Reset required |
MySQL extraction encounters the following error:
FATAL [main] tap-hp-mysql.main - Fatal Error Occurred - YEAR
To identify the affected rows, filter the source table on out-of-range years, for example:
YEAR(date_column) < 1 OR YEAR(date_column) > 9999
If the data contains zero dates (0000-00-00), adjust the SQL mode or replace them with valid dates.
This error occurs when the MySQL integration attempts to process a DATE, DATETIME, or TIMESTAMP field containing an invalid year value. Common examples include 0 or any year outside the supported range. The error message typically states "Fatal Error Occurred" followed by details about the invalid year or month value.
The underlying Python library used by the Stitch MySQL integration enforces strict date parsing rules. It only supports years in the range 0001–9999. If the source data contains values less than 0001 or greater than 9999, the extraction will error. This issue often arises from legacy data, zero dates (0000-00-00), or improperly validated application inserts.
Any column selected for replication that contains invalid date values will trigger this error.
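As an illustration only (a sketch: your_table and date_column are placeholders, and setting the value to NULL assumes the column is nullable and that discarding the invalid value is acceptable), one way to clear out-of-range dates at the source is:
UPDATE your_table
SET date_column = NULL
WHERE YEAR(date_column) < 1 OR YEAR(date_column) > 9999;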
Loading Error Across All Destinations
When Stitch tries to insert data into a destination table and encounters a NOT NULL constraint violation, the error message typically looks like:
ERROR: null value in column "xxxxx" of relation "xxxxx" violates not-null constraint
or
ERROR: null value in column "xxxxx" violates not-null constraint
Key Points
_sdc_level_id columns (for example, _sdc_level_1_id) help form composite keys for nested records and are used to associate child records with their parent. Stitch generates these values sequentially for each unique record. Combined with the _sdc_source_key_[name] columns, they create a unique identifier for each row. Depending on nesting depth, multiple _sdc_level_id columns may exist in subtables.
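For illustration (a sketch only: the table and column names below are hypothetical, and the exact _sdc column names depend on the parent key name and nesting depth), a child table created for a nested array can be related back to its parent like this:
SELECT p.id,
       c._sdc_source_key_id,   -- copy of the parent primary key
       c._sdc_level_0_id,      -- position of the element within the parent record
       c.item_name
FROM orders p
JOIN orders__items c
  ON c._sdc_source_key_id = p.id
ORDER BY c._sdc_source_key_id, c._sdc_level_0_id;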
Recommended Approach
Pause the integration, drop the affected table(s) from the destination, and reset the table from the Stitch UI. If you plan to change the PK on the table, you must either:
If residual data in the destination is blocking the load, manual intervention may be required. Contact Qlik Support if you need assistance clearing this data.
Primary Key constraints enforce both uniqueness and non-nullability. If a null value exists in a PK field, the database rejects the insert because Primary Keys cannot contain nulls.
If you suspect your chosen PK field may contain nulls, you can:
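One option, for example (a minimal sketch: your_table and your_pk_column are placeholder names for your source table and the intended Primary Key column), is to check the source directly:
SELECT COUNT(*) AS null_pk_rows
FROM your_table
WHERE your_pk_column IS NULL;
Any count greater than zero indicates rows that would violate the constraint on the destination.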
The NetSuite integration encounters the following extraction error:
[main] tap-netsuite.core - Fatal Error Occured - Request failed syncing stream Transaction, more info: :data {:messages ({:code {:value UNEXPECTED_ERROR}, :message "An unexpected error occurred. Error ID: <ID>", :type {:value ERROR}})}
The extraction error message provides limited context beyond the NetSuite Error ID. It is recommended to reach out to NetSuite Support with the Error ID for further elaboration and context.
This error occurs when NetSuite’s API returns an UNEXPECTED_ERROR during pagination while syncing a stream. It typically affects certain records within the requested range and is triggered by problematic records or internal processing issues during large result set pagination.
Potential contributing factors include:
Access Denied
Or
You will not see any files/folders
The username or password is incorrect
When the network share is password-protected, you must first open the network share from the Qlik Sense Windows machine(s) and enter the credentials, even if the service account already has access to it.
Note that after every reboot, the password-protected folder will prompt for credentials again, causing the Folder Data Connection to fail, since a Folder Data Connection does not have an option to save user credentials.
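For example, the share can be opened with credentials from a command prompt running as the account that runs the Qlik Sense services (a sketch only: the server, share, account, and password values are placeholders, and this is one possible way to authenticate, not a documented Qlik procedure):
net use \\fileserver\qlikdata MyP@ssw0rd /user:MYDOMAIN\svc_qlik /persistent:yes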
When connecting Talend Data Catalog (TDC) to Qlik Sense using certificate authentication, the connection test shows as successful. However, when attempting to fetch applications/streams, the process fails and no applications are listed after completing the harvest of the Qlik Sense Bridge.
(Screenshot: Harvest of Qlik Sense Bridge)
The Qlik Sense user directory is required for connecting to the Qlik Sense Server with the appropriate user ID. See the "Users" page of the Qlik Management Console
(Screenshot: Qlik Sense User Directory)
For example, INTERNAL\User (where INTERNAL is the user directory).
(Screenshot: Apps and Streams)
This issue is caused by the user directory not being specified correctly.
For more information about MIMB Import Bridge from Qlik Sense Server, please refer to documentation:
MIRQlikSenseServerImport.html | www.metaintegration.net
The following is not being handled properly by Qlik Replicate and leads to a task crashing without errors:
This is caused by SUPPORT-6402, which has been resolved.
Upgrade Qlik Replicate to patch May 2025 SP01 or above.
When more columns are added to a table with invisible columns, Qlik Replicate cannot process the delete statements as expected.
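As an illustration (a sketch only: the article does not name the source database, so an Oracle source is assumed here, and the table and column names are hypothetical):
CREATE TABLE demo_invisible (id NUMBER PRIMARY KEY, hidden_note VARCHAR2(50) INVISIBLE);
ALTER TABLE demo_invisible ADD (new_col VARCHAR2(50));
-- DELETE statements captured after the ALTER are the operations affected by SUPPORT-6402
DELETE FROM demo_invisible WHERE id = 1;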
SUPPORT-6402
Previous versions of Microsoft SQL Server do not support a dedicated JSON data type.
Microsoft has since announced the introduction of a native JSON data type (along with JSON aggregate functions). This new data type is already available in Azure SQL Database and Azure SQL Managed Instance, and is included in SQL Server 2025 (17.x).
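For illustration (a sketch only: the table and column names are hypothetical, and it assumes an instance where the native type is available, such as SQL Server 2025 or Azure SQL Database):
CREATE TABLE dbo.OrderEvents
(
    EventId int NOT NULL,
    Payload json NULL   -- native JSON data type rather than nvarchar(max)
);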
SQL Server 2025 (17.x) became Generally Available (GA) on November 18, 2025.
At this time, the current Qlik Replicate major releases (2025.05 / 2025.11) do not support SQL Server 2025 or its native JSON data type.
During the endpoint connection ping test, you may encounter:
SYS-E-HTTPFAIL, Unsupported server/database version: 0.
SYS,GENERAL_EXCEPTION,Unsupported server/database version: 0
Since the Azure SQL Database version is always 14.x, the version check succeeds. However, because Azure SQL DB already uses the SQL Server 2025 kernel, the task later fails during runtime with:
[SOURCE_CAPTURE ]T: Failed to set ct table column ids for ct table with id '1021246693' (sqlserver_mscdc.c:2968)
[SOURCE_CAPTURE ]T: Failed to get change tables IDs for capture list [1000100] (sqlserver_mscdc.c:3672)
[SOURCE_CAPTURE ]E: Failed to get change tables IDs for capture list [1000100] (sqlserver_mscdc.c:3672)
No workaround can be provided until support has been introduced.
According to the current roadmap, support for SQL Server 2025 and the native JSON data type is planned for the upcoming major release: Qlik Replicate 2026.5.
No date or guaranteed timeframe can yet be given. The support planned for 2026.5 is an estimate.
00419519