Starting from Qlik Replicate versions 2024.5 and 2024.11, Microsoft SQL Server 2012 and 2014 are no longer supported. Supported SQL Server versions include 2016, 2017, 2019, and 2022. For up-to-date information, see Supported Source Endpoints for your respective version.
Attempting to connect to unsupported versions, both on-premises and cloud, can result in various errors.
Examples of reported errors:
The system view sys.column_encryption_keys is only available starting from SQL Server 2016. Attempting to query this view on earlier versions results in errors.
Reference: sys.column_encryption_keys (Microsoft Docs)
Upgrade your SQL Server instances to a supported version (2016 or later) to ensure compatibility with Qlik Replicate 2024.5 and above.
00375940, 00376089
By default, Qlik Replicate translates an UPDATE operation on the source into an UPDATE on the target. However, in some scenarios, especially when a primary key column is updated, you may want to capture this change as a DELETE followed by an INSERT.
This behavior can be enabled in Qlik Replicate through a task setting called "DELETE and INSERT when updating a primary key column." For more details, refer to the Qlik Replicate User Guide: Miscellaneous tuning.
Consider the following Oracle source table example, where ID is a primary key and name is a non-primary key column:
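A minimal sketch of such a table and the resulting target operations (the table name DEMO and the row values are illustrative):
CREATE TABLE DEMO (ID NUMBER PRIMARY KEY, NAME VARCHAR2(50));
INSERT INTO DEMO (ID, NAME) VALUES (1, 'Alice');
-- Source change: the primary key column itself is updated
UPDATE DEMO SET ID = 2 WHERE ID = 1;
-- With "DELETE and INSERT when updating a primary key column" enabled,
-- Qlik Replicate applies this change on the target as:
DELETE FROM DEMO WHERE ID = 1;
INSERT INTO DEMO (ID, NAME) VALUES (2, 'Alice');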
This behavior is supported only for certain target endpoint types; refer to the Qlik Replicate User Guide for the supported list.
00350953
When uploading a file to a dataset using the Qlik Cloud Analytics Catalog, only the Personal Space is listed; no other Spaces are available.
This persists even if the user has all the correct Space permissions or owns the missing Shared Space.
This article explains how to install the PostgreSQL ODBC client on Linux for a PostgreSQL target endpoint.
If PostgreSQL serves as the Replicate source endpoint, see: How to Install PostgreSQL ODBC client on Linux for PostgreSQL Source Endpoint
rpm -ivh postgresql13-libs-13.2-1PGDG.rhel8.x86_64.rpm
rpm -ivh postgresql13-odbc-13.02.0000-1PGDG.rhel8.x86_64.rpm
rpm -ivh postgresql13-13.2-1PGDG.rhel8.x86_64.rpm
export LD_LIBRARY_PATH=/usr/pgsql-13/lib:$LD_LIBRARY_PATH
rpm -ivh unixODBC-2.3.7-1.el8.x86_64.rpm
[PostgreSQL]
Description = ODBC for PostgreSQL
Driver = /usr/lib/psqlodbcw.so
Setup = /usr/lib/libodbcpsqlS.so
Driver64 = /usr/pgsql-13/lib/psqlodbcw.so
Setup64 = /usr/lib64/libodbcpsqlS.so
FileUsage = 1
[pg15]
Driver = /usr/pgsql-13/lib/psqlodbcw.so
Database = targetdb
Servername = <targetDBHostName or IP Address>
Port = 5432
UserName = <PG User Name>
Password = <PG user's Password>
Configuring a SharePoint connection fails when attempting to save the token. The error displayed is:
User: Error getting user info!
From the Qlik Web Connectors stand-alone site:
“Getting user info” is a request to https://graph.microsoft.com/v1.0/me. The endpoint was not reachable.
The following error (C) is shown after successfully creating a Jira Connection string and selecting a Project/key (B) from Select Data to Load (A):
Failed on attempt 1 to GET. (The remote server returned an error; (404).)
The error occurs when connecting to JIRA server, but not to JIRA Cloud.
Tick the Use legacy search API checkbox. This is switched off by default.
The Use legacy search API option is not present in Qlik Sense On-Premise. To resolve the issue, manually add useLegacySearchAPI='true' in the generated script. This is required when using both the Issues and CustomFieldsForIssues tables.
Example:
[Issues]:
LOAD key as [Issues.key],
fields_summary as [Issues.fields_summary];
SELECT key,
fields_summary
FROM Issues
WITH PROPERTIES (
projectIdOrKey='CL',
createdAfter='',
createdBefore='',
updatedAfter='',
updatedBefore='',
customFieldIds='',
jqlQuery='',
maxResults='4',
useLegacySearchAPI='true'
);
Connections to JIRA Server use the legacy API.
SUPPORT-3600
By default, the Oracle data type TIMESTAMP(6) WITH TIME ZONE is mapped to VARCHAR(38) in the SQL Server target when using Qlik Replicate. However, in some cases, you may prefer to preserve a more compatible datetime format on the SQL Server side. Below are two workarounds to achieve this:
You can map TIMESTAMP(6) WITH TIME ZONE to DATETIMEOFFSET(6) using the following transformation to trim the input:
substr($TZ, 1, 26)
This transformation will remove the time zone information.
For example, the source value "2025-04-18 14:43:06.000000000 +08:00" will become "2025-04-18 14:43:06.000000".
Without applying this transformation, Qlik Replicate may raise an error:
Invalid character value specified for cast
To retain both the full precision and the time zone, map the Oracle data type to DATETIMEOFFSET(7) and use the following transformation:
substr($TZ, 1, 27) || substr($TZ, 30, 7)
This approach preserves both the 7-digit fractional seconds and the time zone: substr($TZ, 1, 27) keeps the date and time with seven fractional digits, and substr($TZ, 30, 7) appends the time zone offset (including the separating space).
For example, the Oracle source value "2025-04-18 14:43:06.000000000 +08:00" will be converted to "2025-04-18 14:43:06.0000000 +08:00" on the SQL Server side.
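As a quick sanity check of both mappings on the SQL Server side, the transformed example strings cast cleanly to the target types (illustrative T-SQL using the values above):
-- Workaround 1: trimmed value without time zone (offset defaults to +00:00)
SELECT CAST('2025-04-18 14:43:06.000000' AS DATETIMEOFFSET(6));
-- Workaround 2: 7-digit fractional seconds plus the original time zone
SELECT CAST('2025-04-18 14:43:06.0000000 +08:00' AS DATETIMEOFFSET(7));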
With more than one data warehouse on your Microsoft Fabric target endpoint, the task may fail to find the target table and will produce the following error:
[TARGET_APPLY ]T: RetCode: SQL_ERROR SqlState: 42S02 NativeError: 208 Message: [Microsoft][ODBC Driver 18 for SQL Server][SQL Server]Invalid object name 'Schema.Table'. Line: 1 Column: -1 [1022502] (ar_odbc_stmt.c:5090)
The task may default to a different data warehouse than the one specified in your MS Fabric endpoint settings, which prevents the task from finding the target tables.
The Internal Parameter additionalConnectionProperties can be applied to the Microsoft Fabric endpoint to ensure the right data warehouse is used.
Set the value to: database=DataWarehouseName
where DataWarehouseName is the name of the warehouse entered in the Database name field of your MS Fabric endpoint.
For more information about Internal Parameters and how to set them, see Qlik Replicate: How to set Internal Parameters and what are they for?
This is caused by defect SUPPORT-2305 and affects tasks that use a default data warehouse other than the one specified in your MS Fabric endpoint. The symptom can be confirmed when this log line does not match the value entered in your MS Fabric Database name field:
[TARGET_APPLY ]T: ODBC database name: 'DifferentWarehouseName' (ar_odbc_conn.c:639)
SUPPORT-2305
The following error is thrown when running a Qlik Replicate task without sufficient authorization on the required Function module:
[AT_GLOBAL ]E: java.lang.reflect.UndeclaredThrowableException com.sap.conn.jco.AbapException: (126) ERROR: ERROR Message 001 of class 00 type E, Par[1]: Not authorized to use this Function module java.lang.reflect.UndeclaredThrowableException at com.sun.proxy.$Proxy94.getTableList(Unknown Source)
[METADATA_MANAGE ]E: Failed to list datasets [1024719] (custom_endpoint_metadata.c:242)
[METADATA_MANAGE ]E: Failed to get the capture list from the endpoint [1024719] (metadatamanager.c:4527)
[TABLES_MANAGER ]E: Cannot get captured tables list [1024719] (tasktablesmanager.c:1267)
[TASK_MANAGER ]E: Build tables list failed [1024719] (replicationtask.c:2593)
[TASK_MANAGER ]E: Task 'TEST_2LIS_13_VAITM_DELTA' failed [1024719] (replicationtask.c:4020)
Grant the necessary authorizations for /QTQVC/RFC to the communication user.
The Qlik Replicate user (specifically the communication user) lacks authorization to execute function modules under /QTQVC/RFC. These modules are essential for the replication process.
The missing role provides the necessary permissions to run these function modules, which are used by Qlik Replicate to fetch metadata and extract data through 2LIS_* extractors.
How can we get detailed table-level DML profiling data from Qlik Replicate?
Table-level DML profiling data can be retrieved by enabling the Store Changes option when creating a Qlik Replicate task. See Store Changes Settings for details.
Once set, DML data will be saved in the target DB's <target_table>__ct table. DML statistics data can then be profiled from this table using customized SQL queries.
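For example, assuming the default change-table header column header__change_oper ('I' = INSERT, 'U' = UPDATE, 'D' = DELETE), a query along these lines returns the DML mix for a table (mytable__ct is a placeholder for your actual change table):
SELECT header__change_oper AS operation, COUNT(*) AS operation_count
FROM mytable__ct
GROUP BY header__change_oper
ORDER BY operation_count DESC;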
This article explains how to avoid the "Failed to create search index" script error during reload.
A data load may fail with:
An error occurred
Internal engine error.
The script log will show:
2017-02-28 17:17:55 Creating search index
2017-02-28 17:17:55 Failed to create search index
2017-02-28 17:17:55 Execution finished.
The cause is currently unknown. It is suspected to be app corruption caused by external influences.
Insert "Set CreateSearchIndexOnReload=0" in the load script.
Even if index creation is disabled during the reload, the search index will still be created later, after a user's first search request.
Increase the timeout value.
Rather than disabling or removing the search index, you can increase the timeout before the reload errors out.
To do so, modify the Qlik Sense engine Settings.ini and add a customized timeout value.
After upgrading Qlik Replicate to version 2024.11, testing the connection for a Microsoft Azure ADLS Gen2 target endpoint causes the Qlik Replicate services to crash.
The Windows Event Viewer logs the following error:
Faulting application name: repctl.exe, version: 2024.11.0.177, time stamp: 0x672b5c2a
Faulting module name: j9vm29.dll, version: 11.0.17.0, time stamp: 0x63600655
Exception code: 0xc0000005
Fault offset: 0x00000000000d780c
Faulting process id: 0x1ba4
Faulting application start time: 0x01db6c896871a7dd
Faulting application path: E:\Program Files\Attunity\Replicate\bin\repctl.exe
Faulting module path: E:\Program Files\Attunity\Replicate\jvm\bin\default\j9vm29.dll
Report Id: 3a00eb98-08d6-4c3a-a9cd-7ac48e0debe0
Faulting package full name:
Faulting package-relative application ID:
The repsrv.log records the errors:
V: load_dll: <- (at_loader.c:209)
E: An exception occurred!!! (win32_exception_handler.c:109)
E: Backtrace at exception: !{E:\Program Files\Attunity\Replicate\bin\at_base.dll!462bdb,E:\Program Files\Attunity\Replicate\bin\at_base.dll!37fde9,E:\Program Files\Attunity\Replicate\bin\at_base.dll!675af1,...,C:\windows\System32\KERNEL32.DLL!84d4,C:\windows\SYSTEM32\ntdll.dll!51a11,}! (win32_exception_handler.c:110)
E: exception code is 3221225477 (win32_exception_handler.c:112)
E: tid=8152 (win32_exception_handler.c:115)
E: exception as string is EXCEPTION_ACCESS_VIOLATION (win32_exception_handler.c:118)
E: for more details about win32 exceptions, look at http://msdn.microsoft.com/en-us/library/aa908962.aspx (win32_exception_handler.c:121)
E: exception record (nest level = 0): (win32_exception_handler.c:42)
E: exception code: 3221225477 (win32_exception_handler.c:44)
E: exception flags: 0 (win32_exc)
Performing a clean reinstallation of Qlik Replicate resolves the issue by restoring the correct JVM components and DLLs:
The upgrade did not properly clean up the j9vm folder and related DLL files under the Attunity\Replicate\jvm directory, leading to conflicts during runtime.
Loading data from Oracle may fail on a full load with the error:
ORA-01555: snapshot too old: rollback segment number string with name "string" too small
This is an Oracle configuration issue which must be resolved for the task to be able to continue.
In Automatic Undo Management mode, increase the setting of UNDO_RETENTION. Otherwise, use larger rollback segments.
You can verify your current settings:
SHOW PARAMETER UNDO;
SELECT SUM(BYTES)/1024/1024 "MB", TABLESPACE_NAME FROM DBA_FREE_SPACE GROUP BY TABLESPACE_NAME;
Verify how large the problematic table is and what the current settings are. Then increase the sizes as per your findings.
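For example, in Automatic Undo Management mode, undo retention can be raised with a statement along these lines (86400 seconds = 24 hours is illustrative; size it to your longest-running full-load query, and SCOPE=BOTH assumes the instance uses an spfile):
ALTER SYSTEM SET UNDO_RETENTION = 86400 SCOPE=BOTH;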
This issue mainly happens during the full load phase of a task, or upon reload of a subset of tables. Another way to minimize the number of reads from rollback segments is to use the parallel load feature, which splits the SELECT statements for the table(s) into smaller chunks. The trade-off is that this will generate more connections on the source database, consuming CPU and memory: Parallel Load | Qlik Replicate Help
Oracle references:
ORA-01555 - Database Error Messages
ORA-01555 "Snapshot too old" - Detailed Explanation
snapshot too old error
It is caused by rollback records needed by a reader being overwritten by other writers.
Following an upgrade of Qlik Replicate from version 2023.05 to 2024.05 and an upgrade of the Databricks ODBC drivers from version 2.6.22 to 2.8.2, the following error is encountered when configuring and testing the Databricks endpoint:
SYS-E-HTTPFAIL, Failed prepare Cloud component. SYS,GENERAL_EXCEPTION,Failed prepare Cloud component,Cannot connect to Cloud server RetCode: SQL_ERROR SqlState: HY000 NativeError: 14 Message: [Simba][ThriftExtension] (14) Unexpected response from server during a HTTP connection: SSL_connect: certificate verify failed. Failed to find field. Field named at object
If you are using your organization's root certificates with the ODBC driver 2.8.2, then you should add the root certificates to the cacerts.pem file, which is located in the ODBC driver directory.
For more information, see: Magnitude Simba Apache Spark ODBC Data Connector (.pdf download).
If this option is not set, the connector defaults to using the trusted CA certificates .pem file installed by the connector. To use the trusted CA certificates in the .pem file, set the UseSystemTrustStore property to 0 or clear the Use System Trust Store check box in the SSL Options dialog.
Timestamp values may be written to Kafka as "-1". How can this be resolved?
To avoid the negative timestamps, add the Internal Parameter rdkafkaProperties to the endpoint connection, using the value: api.version.request=true;api.version.fallback.ms=0;
More than one value can be added. If you have previously added the rdkafkaProperties parameter and it has an active value, append a semicolon (;) after the current value before adding the new one. For more information on Internal Parameters, see Qlik Replicate: How to set Internal Parameters and what are they for?
Example: OLD_VALUE;NEW_VALUE
A Qlik application has been successfully reloaded in a tenant. The reload has stored additional tables in a QVD.
Reviewing the Dataset (QVD) in the Catalog does not show the correct number of rows after the reload. The information is not automatically updated.
The rows only update once the Compute button is clicked.
This is currently working as expected.
Qlik plans to provide scheduling capabilities for the Profile and Data Quality compute. No estimated release date or other details are available yet for this feature.
Profiling information is not automatically refreshed when QVD files change.
SUPPORT-2319
With a filter and a data quality cleansing rule defined on the same column, the data warehouse task fails with the following error:
[ERROR ] [] sqlstate '42000', errorcode '1003', message 'SQL compilation error:
syntax error line 404 at position 24 unexpected 'WHEN'.' java.sql.SQLException: sqlstate '42000', errorcode '1003', message 'SQL compilation error:
syntax error line 404 at position 24 unexpected 'WHEN'.'
This issue was caused by defect RECOB-6917.
Contact Support to obtain the relevant fixes.
RECOB-6917
Connecting to data using a web file connection may fail with an error if the connection is routed through an internet proxy.
Additional configuration is required in the Settings.ini file.
[Settings 7]
UseProxyServerForWebFileConnectors=1
WebFileConnectorProxyServer=xxxx.xxxx.com (replace with your actual proxy URL)
WebFileConnectorProxyPort=8080 (replace with your proxy's port number)
The following error is thrown when running an ETL task on a specific table in Qlik Compose:
ETL task aborted - Unexpected Primary Key violations detected in 'Table Name'.
Enable Handle Duplicates on the problematic table.
Troubleshooting identified the root cause: if the lookup tables contain duplicate records, the query inserts those duplicates into the staging table, leading to the PK violation errors.
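As a starting point, a check along these lines can surface the offending duplicates before enabling Handle Duplicates (lookup_table and lookup_key are placeholders for your actual lookup table and its key column):
SELECT lookup_key, COUNT(*) AS duplicate_count
FROM lookup_table
GROUP BY lookup_key
HAVING COUNT(*) > 1;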