A single table needs to be used multiple times as an input. However, when using the same table within the same tMap component, it becomes difficult to distinguish between each usage. This issue exists across all tMap-related components, such as tELTGreenplumMap.
Use the same table as multiple inputs by adding it multiple times in the tMap component. Assign a different alias to each instance of the table so they can be clearly distinguished and used independently.
Why does Qlik Replicate perform a full table scan on an Oracle source table PK defined as REAL/FLOAT?
Avoid using the REAL/FLOAT data types for the PK of tables that contain LOB columns.
With an Oracle source, if the PK of the source table is defined as REAL/FLOAT and the table contains LOBs, each lookup becomes a full table scan because of the host variable binding data type.
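As a hypothetical illustration (table and column names are placeholders), the problematic pattern looks like this:
-- Illustration only: a FLOAT primary key on a table that also holds LOB data
CREATE TABLE demo_src (
  id   FLOAT PRIMARY KEY,   -- REAL/FLOAT PK: lookups degrade to full table scans
  doc  CLOB
);
-- Preferred: define the key with an exact numeric type instead, e.g. id NUMBER(18,0) PRIMARY KEY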
A Qlik Replicate task fails with the error:
[TARGET_APPLY ]I: Failed executing truncate table statement: TRUNCATE TABLE "PAYOR_DW"."attrep_changesBF9CC327_0000069" [1020403] (oracle_endpoint_load.c:856) 00006000: 2025-09-27T18:41:59 [TARGET_APPLY ]I: ORA-03135: connection lost contact Process ID: 19637 Session ID: 1905 Serial number: 3972 [1022307] (oracle_endpoint_load.c:862) 00006000: 2025-09-27T18:41:59 [TARGET_APPLY ]I: Failed to truncate net changes table [1022307] (oracle_endpoint_bulk.c:1162) 00006000: 2025-09-27T18:41:59 [TARGET_APPLY ]I: Error executing command [1022307] (streamcomponent.c:1987) 00006000: 2025-09-27T18:41:59 [TASK_MANAGER ]I: Stream component failed at subtask 0, component st_0_PCA UAT DW Target [1022307] (subtask.c:1474)
The issue is that a database-level trigger named TSDBA.AUDIT_DDL_TRG, which monitors DROP, TRUNCATE, and ALTER statements, is currently invalid.
A possible fix is to validate this trigger and also add logic to it so that it ignores attrep_changes% tables, since these are only temporary tables used for Qlik Replicate batch processing.
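A minimal sketch of that second change is shown below. It assumes a simple AFTER DDL trigger and uses a placeholder for the existing auditing logic; the real TSDBA.AUDIT_DDL_TRG body in your environment will differ, so adapt accordingly.
-- Sketch only: skip Qlik Replicate's temporary net changes tables (attrep_changes%)
CREATE OR REPLACE TRIGGER tsdba.audit_ddl_trg
  AFTER DROP OR TRUNCATE OR ALTER ON DATABASE
BEGIN
  IF UPPER(ora_dict_obj_name) LIKE 'ATTREP_CHANGES%' THEN
    RETURN;
  END IF;
  -- ... existing auditing logic ...
END;
/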
When creating a task, the following error message appears in the dataset information section (where the table column names are displayed).
Error:
converter xxxx not found
ConverterError
Set the correct CCSID to Character set mapping in the task settings.
For example, set 1027,939 in the CCSID to Character set mapping.
Qlik Talend Data Integration depends on standard ICU modules to perform code-page conversion for IBM DB2 for z/OS and IBM DB2 for iSeries data. These conversion modules are provided by ICU itself.
However, certain converter modules may be missing in the environment. When this occurs, the affected CCSID must be mapped manually to a compatible superset to ensure proper character conversion.
Extracting data to Qlik Sense Enterprise on Windows from an SAP BW InfoProvider / ASDO with two values in a WHERE clause fails with an error similar to the following:
Data load did not complete
Data has not been loaded. Please correct the error and try loading again.
Using only one field value in the WHERE clause, it works as expected.
When using Google Cloud Storage as a target in Qlik Replicate, and the target File Format is set to Parquet, an error may occur if the incoming data contains invalid values.
This happens because the Parquet writer validates data during the CSV-to-Parquet conversion. A typical error looks like:
[TARGET_LOAD ]E: Failed to convert file from csv to parquet
Error:: failed to read csv temp file
Error:: std::exception [1024902] (file_utils.c:899)
There are two possible solutions:
In this case, the source is SAP Oracle, and a few rare rows contained invalid date values. Example: 2023-11-31.
By enabling the internal parameters keepCSVFiles and keepErrorFiles in the target endpoint (both set to TRUE), you can inspect the generated CSV files to identify which rows contain invalid data.
Recent versions of Qlik connectors have an out-of-the-box value of 255 for their DefaultStringColumnLength setting.
This means that, by default, any string containing more than 255 characters is truncated when imported from the database.
To import longer strings, specify a higher value for DefaultStringColumnLength.
This can be done in the connection definition and the Advanced Properties, as shown in the example below.
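For example, add the property under Advanced Properties in the connection definition (the value below is illustrative; choose one that fits your data):
DefaultStringColumnLength = 4096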
The maximum value that can be set is 2,147,483,647.
How does Qlik Replicate convert DB2 commit timestamps to Kafka message payload, and why are we seeing a lag of several hours?
Qlik Geocoding operates using two QlikGeoAnalytics operations: AddressPointLookup and PointToAddressLookup.
Two frequently asked questions are:
The Qlik Geocoding add-on option requires an Internet connection. It is, by design, an online service. You will be using Qlik Cloud (https://ga.qlikcloud.com), rather than your local GeoAnalytics Enterprise Server.
See the online documentation for details: Configuring Qlik Geocoding.
This article outlines how to handle DDL changes on a SQL Server table as part of the publication.
The steps in this article assume you use the task's default settings: full load and apply changes are enabled, full load is set to drop and recreate target tables, and DDL Handling Policy is set to apply alter statements to the target.
To achieve something simple, such as increasing the length of a column (without changing the data type), run an ALTER TABLE command on the source while the task is running, and it will be pushed to the target.
For example: alter table dbo.address alter column city varchar(70)
To make more complicated changes to the table, such as:
Follow this procedure:
When connecting to Microsoft OneDrive using either Qlik Cloud Analytics or Qlik Sense Enterprise on Windows, shared files and folders are no longer visible.
While the endpoint may intermittently work as expected, it is in a degraded state until November 2026. See drive: sharedWithMe (deprecated) | learn.microsoft.com. In most cases, the API endpoint is no longer accessible due to the publicly documented degraded state.
Qlik is actively reviewing the situation internally (SUPPORT-7182).
However, given that the MS API endpoint has been deprecated by Microsoft, a Qlik workaround or solution is not certain or guaranteed.
Use a different type of shared storage, such as mapped network drives, Dropbox, or SharePoint, to name a few.
Microsoft deprecated the /me/drive/sharedWithMe API endpoint.
SUPPORT-7182
Extracting data from SAP BW InfoProvider / ASDO with two values in a WHERE clause returns 0 lines.
Example:
The following is a simple standard ADSO with two InfoObjects ('0BPARTNER', '/BIC/TPGW8001') and one field ('PARTNUM'). All are of CHAR data type.
In the script, we used [PARTNUM] = A, [PARTNUM] = B in the WHERE clause.
However, using only one value in the WHERE clause, it works as expected:
From GTDIPCTS2 Where ([PARTNUM] = A);
Using [TPGW8001] instead of the field [PARTNUM] in the WHERE clause, it also works as expected:
From GTDIPCTS2 Where ([TPGW8001] = "1000", [TPGW8001] = "1000");
Upgrade to Direct Data Gateway version 1.7.8.
Defect SUPPORT-5101
When using Google Cloud Pub/Sub as a target and configuring Data Message Publishing to Separate topic for each table, the Pub/Sub topic may be unexpectedly dropped if a DROP TABLE DDL is executed on the source. This occurs even if the Qlik Replicate task's DDL Handling Policy option When source table is dropped is set to Ignore DROP.
This issue has been fixed in the following builds:
To apply the fix, upgrade Qlik Replicate to one of the listed versions or any later release.
A product defect in versions earlier than 2025.5 SP3 causes the Pub/Sub topic to be dropped despite the DDL policy configuration.
A Job design is presented below:
tSetKeystore: sets the Kafka truststore file.
tKafkaConnection, tKafkaInput: connect to the Kafka cluster as a consumer and consume messages.
However, while running the Job, an exception occurs under the tKafkaInput component:
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
Make sure to execute the tSetKeystore component prior to the Kafka components so that the Job can locate the certificates required for the Kafka connection. To achieve this, connect the tSetKeystore component to tKafkaConnection using an OnSubjobOK link, as demonstrated below:
For more detailed information on trigger connectors, specifically OnSubjobOK and OnComponentOK, please refer to this KB article: What is the difference between OnSubjobOK and OnComponentOK?.
This article addresses the error encountered during extraction when using log-based incremental replication for MySQL integration:
[main] tap-hp-mysql.sync-strategies.binlog - Fatal Error Occurred - <ColumnName> - decimal SQL type for value type class clojure.core$val is not implemented.
There are two recommended approaches:
Option 1: Enable Commit Order Preservation
Run the following command in your MySQL instance:
SET GLOBAL replica_preserve_commit_order = ON;
Then, reset the affected table(s) through the integration settings.
Option 2: Validate Replication Settings
Ensure that either replica_preserve_commit_order (MySQL 8.0+) or slave_preserve_commit_order (older versions) is enabled. These settings maintain commit order on multi-threaded replicas, preventing gaps and inconsistencies.
Run:
SHOW GLOBAL VARIABLES LIKE 'replica_preserve_commit_order';
Expected output:
Variable_name                 | Value
replica_preserve_commit_order | ON
For older versions:
SHOW GLOBAL VARIABLES LIKE 'slave_preserve_commit_order';
For more information, see the MySQL documentation: replication-features-transaction-inconsistencies | dev.mysql.com
Why does this happen?
When using log-based incremental replication, Stitch reads changes from MySQL's binary log (binlog). This error occurs because the source database provides events out of order, which leads to mismatched data types during extraction. In this case, the extraction encounters a decimal SQL type where the value type is unexpected.
This article explains whether changing integration credentials or the host address for a database integration requires an integration reset in Stitch. It will also address key differences between key-based incremental replication and log-based incremental replication.
Updating credentials (e.g., username or password) does not require an integration reset. Stitch will continue replicating data from the last saved bookmark values for your tables according to the configured replication method.
Changing the host address is more nuanced and depends on the replication method:
Important:
If the database name changes, Stitch treats it as a new database:
| Change Type | Key-Based Replication | Log-Based Replication |
| Credentials | No reset required | No reset required |
| Host Address | No reset (if search path unchanged) | Reset required |
| Database Name | Reset required | Reset required |
MySQL extraction encounters the following error:
FATAL [main] tap-hp-mysql.main - Fatal Error Occurred - YEAR
To identify affected rows, look for values where YEAR(date_column) < 1 OR YEAR(date_column) > 9999. If the column contains zero dates (0000-00-00), adjust the SQL mode or replace them with valid dates.
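For example, a query along these lines can locate the offending rows (my_table and date_column are placeholders for your own table and column names):
SELECT *
FROM my_table
WHERE YEAR(date_column) < 1
   OR YEAR(date_column) > 9999;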
This error occurs when the MySQL integration attempts to process a DATE, DATETIME, or TIMESTAMP field containing an invalid year value. Common examples include 0 or any year outside the supported range. The error message typically states "Fatal Error Occurred" followed by details about the invalid year or month value.
The underlying Python library used by the Stitch MySQL integration enforces strict date parsing rules. It only supports years in the range 0001–9999. If the source data contains values less than 0001 or greater than 9999, the extraction will error. This issue often arises from legacy data, zero dates (0000-00-00), or improperly validated application inserts.
Any column selected for replication that contains invalid date values will trigger this error.
Loading Error Across All Destinations
When Stitch tries to insert data into a destination table and encounters a NOT NULL constraint violation, the error message typically looks like:
ERROR: null value in column "xxxxx" of relation "xxxxx" violates not-null constraint
or
ERROR: null value in column "xxxxx" violates not-null constraint
Key Points
- _sdc_level_id columns (for example, _sdc_level_1_id) help form composite keys for nested records and are used to associate child records with their parent. Stitch generates these values sequentially for each unique record.
- Together with the _sdc_source_key_[name] columns, they create a unique identifier for each row.
- Depending on nesting depth, multiple _sdc_level_id columns may exist in subtables.
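As a hypothetical illustration (the parent table orders, its subtable orders__line_items, and the key column id are placeholders), a child record can be tied back to its parent through the _sdc_source_key_* column:
SELECT p.id, c.*
FROM orders p
JOIN orders__line_items c
  ON c._sdc_source_key_id = p.id;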
Recommended Approach
Pause the integration, drop the affected table(s) from the destination, and reset the table from the Stitch UI. If you plan to change the PK on the table, you must either:
If residual data in the destination is blocking the load, manual intervention may be required. Contact Qlik Support if you need assistance clearing this data.
Primary Key constraints enforce both uniqueness and non-nullability. If a null value exists in a PK field, the database rejects the insert because Primary Keys cannot contain nulls.
If you suspect your chosen PK field may contain nulls, you can:
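For example, a simple query can confirm whether the candidate key contains nulls (my_table and pk_column are placeholders):
SELECT COUNT(*) AS null_pk_rows
FROM my_table
WHERE pk_column IS NULL;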
The NetSuite integration encounters the following extraction error:
[main] tap-netsuite.core - Fatal Error Occured - Request failed syncing stream Transaction, more info: :data {:messages ({:code {:value UNEXPECTED_ERROR}, :message "An unexpected error occurred. Error ID: <ID>", :type {:value ERROR}})}
The extraction error message provides limited context beyond the NetSuite Error ID. It is recommended to reach out to NetSuite Support with the Error ID for further elaboration and context.
This error occurs when NetSuite’s API returns an UNEXPECTED_ERROR during pagination while syncing a stream. It typically affects certain records within the requested range and is triggered by problematic records or internal processing issues during large result set pagination.
Potential contributing factors include: