Qlik Replicate tasks using Oracle as a Source Endpoint fail after installing the Oracle July 2024 patch.
All Qlik Replicate versions older than the 2024.5 SP03 release are affected.
Upgrade to Qlik Replicate 2024.5 SP03 or later once available.
In the meantime, Qlik has made early builds of 2023.11 and 2024.5 available.
Download the early builds here:
2023.11 SP05: https://files.qlik.com/url/ndafzunah2srntqt
password: bl3xrefv
2024.5 SP03: https://files.qlik.com/url/fbfsznjidxt5nzra
password: cygie73l
The Oracle July 2024 patch introduced a change to redo events. Qlik has since provided a fix so that Qlik Replicate parses the redo log correctly.
RECOB-8698
Oracle Database 19c Release Update July 2024 Known Issues
As a general reminder, all changes to the environment such as operating system patches, endpoint and driver patches, etc. should be tested in lower environments before promoting to production.
Snowflake supports using key pair authentication for enhanced authentication security as an alternative to basic authentication (i.e. username and password). This article covers end-to-end setup for Key Pair Authentication in Snowflake and Qlik Replicate.
This authentication method requires, at minimum, a 2048-bit RSA key pair. You can generate the Privacy Enhanced Mail (i.e. PEM) private-public key pair using OpenSSL.
Qlik Replicate connects to Snowflake through the ODBC driver, which is one of the Snowflake clients that supports key pair authentication.
Assume you have decided to use key pair authentication for the Snowflake user that Qlik Replicate uses to connect to Snowflake. Follow the process below to convert the user's authentication from basic to key pair.
You can generate either an encrypted version of the private key or an unencrypted version of the private key.
To generate an unencrypted version use the following command in the command prompt:
$ openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8 -nocrypt
To generate an encrypted version (which omits -nocrypt), use:
$ openssl genrsa 2048 | openssl pkcs8 -topk8 -v2 des3 -inform PEM -out rsa_key.p8
In our example, we generate an encrypted version of the private key. Choose a passphrase when prompted and note it down, as it is required again in the next step. This generates a private key in PEM format.
From the command line, generate the public key by referencing the private key. The following command assumes the private key is encrypted and contained in the file named rsa_key.p8.
When prompted for a passphrase, enter the one chosen in step 1.
openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
This command generates the public key in PEM format:
Copy the public and private key files to a local directory for storage and record the path to the files. Note that the private key is stored using the PKCS#8 (Public Key Cryptography Standards) format and is encrypted using the passphrase you specified in the previous step.
However, the file should still be protected from unauthorized access using the file permission mechanism provided by your operating system. It is your responsibility to secure the file when it is not being used.
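The generation and file-permission steps above can be scripted end to end. The sketch below uses the unencrypted variant for brevity; file names follow the example in this article:

```shell
# Generate a 2048-bit private key in PKCS#8 PEM format (unencrypted variant)
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8 -nocrypt

# Derive the matching public key from the private key
openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub

# Restrict the private key to the file owner only
chmod 600 rsa_key.p8
```

For production use, prefer the encrypted variant shown earlier and protect the passphrase separately.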
Describe the user to see the current information. We can see that no public key is assigned to the HDW user; therefore, the user is using basic authentication.
Execute an ALTER USER command to assign the public key to a Snowflake user.
Execute a DESCRIBE USER command to verify the user’s public key.
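As a minimal sketch of these two statements (the HDW user is the example from above; the key value shown is a placeholder, and the key body is pasted without the BEGIN/END PUBLIC KEY header lines):

```sql
-- Assign the public key to the Snowflake user
ALTER USER HDW SET RSA_PUBLIC_KEY = 'MIIBIjANBgkqh...';

-- Verify the assignment; the RSA_PUBLIC_KEY_FP property shows the key fingerprint
DESC USER HDW;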
Qlik Replicate
Snowflake Target
This article provides instructions on how to measure data transfer speeds when working with Qlik Data Gateway - Direct Access.
The information in this article is provided as-is and will be used at your discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
.\Create-Dummy-Data-File -Size 1024
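If the helper script is not available, a dummy payload can be generated and timed with standard tools. This is a minimal sketch, not the supported utility; paths and sizes are placeholders:

```shell
# Create a 10 MiB file of random data to use as the transfer payload
dd if=/dev/urandom of=/tmp/transfer_payload.bin bs=1M count=10 status=none

# Time a copy to the path under test (replace the destination with a folder
# on the storage being measured); throughput in MB/s = size / elapsed seconds
time cp /tmp/transfer_payload.bin /tmp/transfer_copy.bin
```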
When using Qlik GeoAnalytics Plus or Qlik GeoAnalytics server, loading a file (Geojson, shapefile or similar) from a network shared folder fails with the error message:
java.io.IOException: java.lang.IllegalArgumentException: URI has an authority component.
Example:
In this example, the shapefile test.zip is located in \\dc1\MyData\testShapeFile folder.
-----------extract from Qlik GeoAnalytics Plus log
2024-09-07T14:42:10,001 ERROR - Failed to process request com.idevio.geoanalytics.b.m: Failed to create dataset Dataset:
Failed to load 'file://dc1/MyData/testShapeFile/test.zip': java.io.IOException: java.lang.IllegalArgumentException: URI has an authority component
at com.idevio.geoanalytics.c.e.a(Unknown Source) ~[geo-operations.jar:?]
at com.idevio.geoanalytics.c.e.a(Unknown Source) ~[geo-operations.jar:?]
at com.idevio.geoanalytics.c.h.a(Unknown Source) ~[geo-operations.jar:?]
at com.idevio.geoanalytics.d.a(Unknown Source) [geo-operations.jar:?]
at com.idevio.geoanalytics.d.run(Unknown Source) [geo-operations.jar:?]
at java.base/java.lang.Thread.run(Thread.java:842) [?:?]
--------------
Moving the shapefile test.zip to a local location, such as C:\MyData\testShapeFile, allows the load to succeed.
A fix for QB-25310 is expected to be included in the next possible release of Qlik GeoAnalytics. Review the available release notes for details.
The defect is in the geo-operations component, which is also used by the geo-operations service. The UNC path \\dc1\MyData\testShapeFile\test.zip is translated to the URI file://dc1/MyData/testShapeFile/test.zip, in which the host name dc1 becomes the URI's authority component, and the loader rejects URIs that contain an authority component.
Only GeoAnalytics on-premise versions are affected.
Existing limitations of Microsoft SQL Server (MS-CDC) as a source for Qlik Replicate are documented in MS-CDC Limitations and considerations (help.qlik.com).
This article aims to provide additional context to some of the listed limitations.
Question: The AR_H_USER header column is not supported (link). Does this mean the AR_H_USER header cannot be used at all when using MS-CDC for replication?
Answer: AR_H_USER is a header column; the limitation means this header is not available for use in the target table when using MS-CDC for replication.
Question: MS-CDC Change Tables with fixed-size columns (including NCHAR and CHAR data) whose total size exceeds 8,060 bytes are not supported (link). Are varchar(max) columns included in this limitation?
Answer: No. Varchar(max) columns are not counted toward the 8,060-byte limit, which applies to fixed-size columns only.
Question: RENAME TABLE will not be captured, and table-level DDLs are not supported (link). Does this mean that when Qlik Replicate receives DDL changes, the first DDL statement stops the task and the CT table must be renamed manually?
Answer: All DDL changes, including RENAME TABLE, ALTER COLUMN, and so on, are not supported; the affected table will be suspended. The quickest workaround is to disable MS-CDC on the table, drop the CT table, and re-enable MS-CDC. If you need to keep the CT table history, the alternative is to manually alter the CT table to reflect the DDL changes.
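The quick workaround above can be sketched with SQL Server's standard CDC procedures (the schema, table, capture instance, and options are hypothetical; adjust them to your environment):

```sql
-- Disable MS-CDC on the affected table, which drops its CT (change) table
EXEC sys.sp_cdc_disable_table
    @source_schema    = N'dbo',
    @source_name      = N'MyTable',
    @capture_instance = N'all';

-- Re-enable MS-CDC so a fresh CT table is created with the new structure
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'MyTable',
    @role_name     = NULL;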
"RestConnectorMasterTable" General Script Error in statement handling
RestConnectorMasterTable:
20200826T102106.344+0000 0088 SQL SELECT
20200826T102106.344+0000 0089 "name",
20200826T102106.344+0000 0090 "value"
20200826T102106.344+0000 0091 FROM JSON (wrap off) "contactCustomData"
20200826T102106.344+0000 0092 WITH CONNECTION (
20200826T102106.344+0000 0093 URL " ",
20200826T102106.344+0000 0094 HTTPHEADER "Authorization" "**Token removed for security purpose**"
20200826T102106.344+0000 0095 )
20200826T102106.967+0000 General Script Error in statement handling
20200826T102106.982+0000 Execution Failed
20200826T102106.986+0000 Execution finished.
To catch the exact error and mitigate the issue, apply our recommended best practices for error handling in Qlik scripting using the error variables:
Error variables
Script control statements
With ErrorMode=0, the script ignores any errors and continues. You can use an IF statement to retry the connection, or move to another connection, for a few attempts, and then either set ErrorMode=1 so the script fails on further errors or simply disconnect.
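A minimal sketch of this retry pattern in Qlik load script (the connection name 'MyREST' and the retry count of 3 are hypothetical):

```qlik
SET ErrorMode = 0;                 // do not abort the script on error

LET vAttempt = 1;
Do While vAttempt <= 3
    LIB CONNECT TO 'MyREST';       // 'MyREST' is a placeholder connection name
    If ScriptError = 0 Then
        Exit Do;                   // connection succeeded, stop retrying
    End If
    LET vAttempt = vAttempt + 1;
Loop

SET ErrorMode = 1;                 // restore the default: abort on error
```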
A sample script is located here, but further options can be added from the Help links already provided.
Qlik-Sense-fail-and-retry-connection-sample-script
Note: Unless otherwise stated, QlikView scripting is the same as Qlik Sense for these functions, and the links above contain some very helpful items.
Best-Practice-Error-Handling-in-Script
Example use cases: an error occurs when fetching the token with the REST call; or, if the number of rows in a table is less than expected, trigger the script to throw an error and try loading the table again for more records, or loop until the correct number of records is returned.
QB-3164
By default, Qlik Replicate control tables like attrep_apply_exceptions or attrep_status are created in lower case. If you would like for those tables to be created in UPPERCASE on Snowflake as a target, please see below.
Starting in version 2021.5 Service Pack 02, there is an internal parameter called setIgnoreCaseFlag which, when checked (set to true), sets QUOTED_IDENTIFIERS_IGNORE_CASE to true.
Letters in double-quoted identifiers are then stored and resolved as uppercase letters.
For reference: https://docs.snowflake.com/en/sql-reference/parameters.html#quoted-identifiers-ignore-case
You can search for setIgnoreCaseFlag in the Advanced tab -> Internal Parameters for the Snowflake Target endpoint.
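For illustration, this is the session-level effect of the Snowflake parameter that the internal flag sets (table name and column are hypothetical):

```sql
-- With the parameter on, quoted identifiers resolve as uppercase
ALTER SESSION SET QUOTED_IDENTIFIERS_IGNORE_CASE = TRUE;

-- This identifier is now stored and resolved as ATTREP_STATUS
CREATE TABLE "attrep_status" (task_name VARCHAR);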
The information in this article is provided as-is and will be used at your discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
Preparing a storage task fails with the following error:
Failed to prepare Data Task
Job finished with error. SQL compilation error: Object already exists as TABLE
The error will only occur if a task name includes Japanese Katakana characters. It does not fail when using English as the preferred language; only when using Japanese.
This issue was caused by RECOB-8643 and has been resolved in the Qlik Data Gateway version 2024.5.7, along with any later releases.
See the Release Notes for details.
RECOB-8643
The name of the internal parameter skipMscdcJobFitnessCheck suggests that, if enabled, it prevents (skips) the Fitness Check from running. This is incorrect.
skipMscdcJobFitnessCheck controls only the cdcJob portion of the Fitness Check. The Fitness Check itself always runs and cannot be disabled; only the cdcJob check is skipped.
This means that task log entries such as "Failed in MS-CDC fitness check" will not be addressed by enabling skipMscdcJobFitnessCheck.
Qlik Replicate
SQL Server MS-CDC
Qlik Replicate: MSSQL-CDC source endpoint to Snowflake. We are encountering the following error: Failed in MS-CDC fitness check
Qlik Replicate: How to set Internal Parameters and what are they for?
In this example, we load data from an Excel file hosted on SharePoint 365 using the Qlik Office 365 Sharepoint connector.
Environment:
For our example, we are using Qlik Sense Enterprise on Windows and the installed Qlik Web Connector for Office 365 Sharepoint. For more information on the Office 365 Sharepoint Qlik Web Connectors and for installation instructions, see: Office 365 Sharepoint and Installation Web Connectors.
Links provided in these examples are example links, not real links.
After an upgrade to Qlik Replicate 2024.5, capturing changes (CDC) for a table defined on a DB2i source leads to conversion errors for NUMERIC data types with a precision and scale greater than zero.
Due to a change in Qlik Replicate 2024.5, this error causes tables defined in the task to have data issues, with the value on the target showing 0 instead of the correct value present on the source.
Example: 100.000 (source) becomes 0 (target)
To verify whether a table defined in the task is affected, obtain the table's DDL from the DB2i source environment (iSeries) and check whether it contains columns defined as NUMERIC with a non-zero scale, such as NUMERIC(15,5).
Example:
The affected number definitions have a SCALE > 0.
For example, if the table's SQL DDL defines a column as NUMERIC(5,2), a value such as 123.45 will fail conversion and arrive as 0.
Upgrade to Qlik Replicate 2024.5 SP03 or higher.
This is a regression in the 2024.5 version of Qlik Replicate, affecting how the data type is parsed from the journal/receiver to the target.
Selecting records using a specified record format name (RCDFMT parameter)
RECOB-8827
Creating an OLE DB connection in Qlik Sense on-prem fails at the Test connection step with:
Test failed
To verify the connection is successful, first confirm the connection works using a UDL file. For step-by-step instructions refer to Test SQL database connectivity with test.udl file. Once you receive a Test connection succeeded with the UDL file, close all the prompts and open the file with a text editor. This file contains the new connection string.
For the test in our example, the only extra parameter enabled in the connection string that cannot be set in the connection UI is Trust Server Certificate=True. All other values are empty or null, so we exclude them.
With that information, you can now create a connection with the Engine API explorer:
Note: Make sure to keep the OLEDB CONNECT TO and the brackets in the connection string value.
JSON:
{
"handle": 1,
"method": "CreateConnection",
"params": {
"qConnection": {
"qId": "",
"qName": "OLEDBSQL",
"qConnectionString": "OLEDB CONNECT TO [Provider=MSOLEDBSQL19.1;Persist Security Info=False;Data Source=ServerNameorIP;Trust Server Certificate=True;]",
"qType": "OLEDB",
"qUserName": "user",
"qPassword": "Password",
"qModifiedDate": "",
"qMeta": {
"qName": ""
},
"qLogOn": 0
}
}
}
The UI offers limited options for adding extra parameters to an OLE DB connection. The connection string field is only available in Edit mode, not during creation.
Test SQL database connectivity with test.udl file
Create a connection Method
When ASM is used for an Oracle source endpoint, do not use a direct IP address to connect to ASM.
Attempting to add multiple IP address connection strings to the ASM connection string field may lead to a connection failure with the following error:
SYS,GENERAL_EXCEPTION,Incorrect connection string for the thread 10
The TNS connection string of the ASM node cluster should be used for the ASM connection.
Correct Example:
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=asmhostname)(PORT=1531))(CONNECT_DATA=(SERVICE_NAME=+ASM)))
Incorrect Example:
10.89.145.143:3100/+ASM,10.89.134.144:3100/+ASM
The ASM connection string must use the TNS format when connecting to ASM in a cluster configuration.
As a part of a Qlik Replicate upgrade to 2024.5, SAP HANA Source Endpoints using Version 1 configuration are converted to Version 3 CTS Mode.
When the V1 task is resumed, Qlik Replicate will create the attrep_cdc_changes_cts table and add "V3" triggers to all tables in the task. All changes to the tables between the time that the task was stopped (before the upgrade) and the time the task was resumed (after the upgrade) will be in the attrep_cdc_changes table and ignored by 2024.5.
As a workaround, the Source Endpoint can be migrated before upgrading to 2024.5 and set to use the Log Table configuration (located in the Advanced Tab). This will ensure that no data loss occurs during the upgrade to the new version. During this time, you would be able to keep running the existing triggers as needed before the switch to the recommended Version 3 CTS Mode triggers.
For more information on how to convert from Version 1 to Version 3, see Qlik Replicate Trigger SAP HANA conversion from Version 1 to Version 2 or Version 3.
QB-28825
When attempting to connect to the SharePoint 365 Qlik Web Connector in the Script Editor, the following error is shown: "Internal Server Error."
The URL can be accessed without issue outside of Qlik by other applications, the necessary ports have been opened, and server connectivity is stable.
This article is intended for the stand-alone installation of Qlik Web Connectors, rather than built in connectors. See Install connectors separately.
Open the deploy.config file located in the WebConnector folder.
Locate DefaultAllowedIpAddresses and insert any as seen below:
<DefaultAllowedIpAddresses>any</DefaultAllowedIpAddresses>
Locate AllowRemoteAccess and insert "true" as seen below:
<AllowRemoteAccess>true</AllowRemoteAccess>
Setting <AllowRemoteAccess> to true makes the Allowed IP Addresses setting, with the value [any], visible under My Settings in the Web Connector UI.
The information in this article is provided as-is and will be used at your discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support on the solution below may not be provided by Qlik Support.
This article provides a comprehensive guide to efficiently install the PostgreSQL ODBC client on Linux for a PostgreSQL target endpoint.
If PostgreSQL serves as the Replicate source endpoint, see: How to Install PostgreSQL ODBC client on Linux for PostgreSQL Source Endpoint
rpm -ivh postgresql13-libs-13.2-1PGDG.rhel8.x86_64.rpm
rpm -ivh postgresql13-odbc-13.02.0000-1PGDG.rhel8.x86_64.rpm
rpm -ivh postgresql13-13.2-1PGDG.rhel8.x86_64.rpm
export LD_LIBRARY_PATH=/usr/pgsql-13/lib:$LD_LIBRARY_PATH
rpm -ivh unixODBC-2.3.7-1.el8.x86_64.rpm
[PostgreSQL]
Description = ODBC for PostgreSQL
Driver = /usr/lib/psqlodbcw.so
Setup = /usr/lib/libodbcpsqlS.so
Driver64 = /usr/pgsql-13/lib/psqlodbcw.so
Setup64 = /usr/lib64/libodbcpsqlS.so
FileUsage = 1
[pg15]
Driver = /usr/pgsql-13/lib/psqlodbcw.so
Database = targetdb
Servername = <targetDBHostName or IP Address>
Port = 5432
UserName = <PG User Name>
Password = <PG user's Password>
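The DSN entries above can also be added from a script. The sketch below appends them to a scratch copy of odbc.ini and checks that the section is present; on a real system the file is typically /etc/odbc.ini, and the host name shown is a placeholder:

```shell
# Append the example DSN to a scratch odbc.ini (system file: /etc/odbc.ini)
ODBCINI=/tmp/odbc.ini
cat >> "$ODBCINI" <<'EOF'
[pg15]
Driver = /usr/pgsql-13/lib/psqlodbcw.so
Database = targetdb
Servername = pg-target.example.com
Port = 5432
EOF

# Confirm the DSN section is registered in the file
grep -q '^\[pg15\]' "$ODBCINI" && echo "DSN pg15 present"
```

Once the real /etc/odbc.ini is in place, connectivity can be checked with unixODBC's isql tool, for example: isql -v pg15 <PG User Name> <PG user's Password>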
The Apply Exceptions option (b) is not enabled in the Control Tables (a) menu, even though this setting is typically enabled by default:
Further troubleshooting reveals the option Apply batched changes to multiple tables concurrently (b) has been enabled in the Change Processing Tuning (a) options:
Enabling this setting disables the option to set a Global Error Handling policy and resets task-specific error handling options to their default settings.
This is what caused the Apply Exceptions check box to be unticked.
To resolve the issue, disable Apply batched changes to multiple tables concurrently and re-enable the Apply Exceptions table.
Qlik Replicate
Snowflake on Azure
When working with the IBM DB2 for z/OS source endpoint, trailing spaces are trimmed from CHAR data type columns, particularly when a Log Stream is involved in the tasks. The complete replication path is from DB2z to the Log Stream, then from the Log Stream to the S3 target endpoint.
To address this issue, add the internal parameter keepCharTrailingSpaces and set its value to TRUE.
This step has to be taken on both task endpoints:
1: Log Streaming task (parent task)
2: Replication task (child task)
#00146985
Qlik ODBC connector package (database connector built-in Qlik Sense) fails to reload with error Connector reply error:
Executing non-SELECT queries is disabled. Please contact your system administrator to enable it.
The issue is observed when the statement following the SQL keyword is not SELECT but another statement, such as INSERT, UPDATE, WITH ... AS, or a stored procedure call.
See the Qlik Sense February 2019 Release Notes for details on item QVXODBC-1406.
By default, non-SELECT queries are disabled in the Qlik ODBC Connector Package, and users get an error message indicating this if such a query is present in the load script. To enable non-SELECT queries, the allow-nonselect-queries setting must be set to True by the Qlik administrator.
To enable non-SELECT queries:
Because this change modifies configuration files, note that these files are overwritten during an upgrade and the changes will need to be made again.
Only apply !EXECUTE_NON_SELECT_QUERY if you use the default connector settings (such as bulk reader enabled and reading strategy "connector"). Applying !EXECUTE_NON_SELECT_QUERY to non-default settings may lead to unexpected reload results and/or error messages.
More details are documented in the Qlik ODBC Connector package help site.
Feature Request Delivered: Executing non-SELECT queries with Qlik Sense Business
Execute SQL Set statements or Non Select Queries
Qlik Replicate can use SAP HANA as the backend database of the SAP Application (DB) source endpoint, configured with triggers in SAP HANA.
PM-13722