LinkedIn requires OAuth2 authentication.
The connection steps are as follows:
Setting up selected REST sources for data loading
A Qlik Replicate Full Load Task fails with the error:
RetCode: SQL_ERROR SqlState: 23000 NativeError: 301 Message: [SAP AG][LIBODBCHDB DLL][HDBODBC] General error;301 unique constraint violated
Add the Internal Parameter named readSnapshotOnUnload. This resolves the issue of duplicated data during the Full Load / CDC Task.
Jira QB-27017
Despite defining a filter in the table setting for the full load, the "Transferred Count" column in Qlik Replicate Console GUI indicates that all records are transferred from the source table to the target side. The filter criteria have been thoroughly reviewed and no issues have been identified. It appears that the Replicate process is not filtering the records as expected.
It is important to understand how the filter works in the Qlik Replicate process:
Filter Conditions: The filter statement is added directly to the query that retrieves data from the source table.
For example:
Qlik Replicate performs the following query to retrieve records:
[SOURCE_UNLOAD]T: SELECT [id],[c1],[ts] FROM [dbo].[des2] WHERE ([id] = 1)
In this case, only the record with id = 1 would be returned from the source table, and the "Transferred Count" would show 1.
Record Selection Condition: Qlik Replicate retrieves all records from the source table and then performs the filtering within the Qlik Replicate process itself.
For example,
Qlik Replicate performs the following query to retrieve records:
[SOURCE_UNLOAD ]T: SELECT [id],[c1],[ts] FROM [dbo].[des2]
Since all records are initially collected, the "Transferred Count" will show the total number of records from the source table (200 in this example).
Only one record is transferred to the target. Other records are skipped:
[TARGET_LOAD ]I: Load finished for table 'dbo'.'des2' (Id = 1). 200 rows received. 199 rows skipped. Volume transferred 80000.
The Oracle Redo compatibility version 0b200100 is not supported
Upgrade to a supported Oracle version.
While planning the upgrade, a workaround can be applied.
This workaround must not be used as a permanent solution.
The source is an unsupported version of Oracle, such as 11.2.0.1. Verify the supported versions under Supported source endpoints | Oracle.
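To confirm which Oracle version is actually in use before planning the upgrade, a standard dictionary query can be run (v$version is a standard Oracle dynamic view; the account needs SELECT access to it):

```sql
-- Check the running Oracle database version
SELECT banner FROM v$version;
```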
! Note: Do NOT modify the NPrinting Database for any reason using pgAdmin, PostgreSQL queries, or any other execution tools, as this will damage your NPrinting Deployment and prevent successful NPrinting Database backup and restore operations.
! Note: Do NOT restore an older version of an NPrinting Database to a New NPrinting server or restore a newer version of the NPrinting database to an older version of NPrinting Server.
Examples:
These rules apply to general releases and service releases: The point version of the NPrinting Database being restored must match the point version of NPrinting Server being restored to (see Backup and restore Qlik NPrinting).
! Note: From NPrinting February 2020 and later versions, it is NOT necessary to enter a superuser database password.
! Note: If you are making a backup for the Qlik Support team, please add the following NPrinting user information so that we can log onto the NPrinting Web Console following the local restore of the database (also ensure that NPrinting Authentication is enabled: go to Admin > Settings > Authentication).
This procedure is meant to back up and restore (partial backup and restore of these individual items is not possible*):
NP Web Console Items:
NP Backup zip File Contents (do NOT open and modify the contents of this file):
NOTE:
c:\nprintingbackups
Before Proceeding: Log on as the NPrinting service account used to run the NPrinting Web Engine and Scheduler services.
Open the Windows Service Manager (services.msc), and stop the following services (by right-clicking them, and then clicking Stop). This will ensure any manual or scheduled NPrinting Publish Tasks are not executed during the backup or restore process:
C:\NPrintingBackups
Do NOT modify any syntax or add any additional unnecessary spaces
Open the command prompt (making sure to run cmd.exe as Administrator) and change directory as follows:
cd C:\Program Files\NPrintingServer\Tools\Manager
Qlik.Nprinting.Manager.exe backup -f C:\NPrintingBackups\NP_Backup.zip -p "C:\Program Files\NPrintingServer\pgsql\bin" --pg-password YourSuperuserDBpasswordHere
or, with currently supported versions of NPrinting (no password required):
Qlik.Nprinting.Manager.exe backup -f C:\NPrintingBackups\NP_Backup.zip -p "C:\Program Files\NPrintingServer\pgsql\bin"
Qlik.Nprinting.Manager.exe restore -f C:\NPrintingBackups\NP_Backup.zip -p "C:\Program Files\NPrintingServer\pgsql\bin" --pg-password YourSuperuserDBpasswordHere
Qlik.Nprinting.Manager.exe restore -f C:\NPrintingBackups\NP_Backup.zip -p "C:\Program Files\NPrintingServer\pgsql\bin"
File C:\Users\domainuser\AppData\Local\Temp\2\nprintingrestore_20201203082300\files\xxxxxxxxxxxxxxxxxxxxxxxxxxxx does not exist in the source backup package.
! Note: If re-installing on existing or restoring to a different NPrinting server environment, ensure that the destination NPrinting server license is enabled/activated before restoring the NP database.
NPrinting Engine:
NP Connections:
Qlik Sense Certificates (if using NPrinting Qlik Sense connections)
C:\Program Files\NPrintingServer\Settings\SenseCertificates
Other helpful information about the NP Backup and Restore tool and process:
C:\ProgramData\NPrinting
Note: The pre- and post-upgrade backup files are appended with the NP version number and backup date.
C:\ProgramData\nprinting\logs\nprinting_manager.log
*NOTE:
When working with an Oracle source endpoint, permissions for Change Data Capture (CDC) tasks are required by default. However, in some scenarios only Full Load (FL) tasks are needed, which require fewer permissions.
The minimum permissions required for a Qlik Replicate Full Load ONLY task are as follows. In this example, the Oracle account name is "FLONLYUSER":
GRANT SELECT ANY TABLE TO FLONLYUSER;
GRANT CREATE SESSION TO FLONLYUSER;
GRANT SELECT ON V_$PARAMETER TO FLONLYUSER;
In some environments, DBAs may be unwilling to grant V_$PARAMETER permission. For such cases, refer to the Alternative Resolution.
Ensure the advanced option "Automatically add supplemental logging" is disabled in the Oracle source endpoint, as it is unnecessary for a Full Load ONLY task.
#00154133
A Qlik Replicate full load ONLY task reports the following error/warning messages when loading data from an Oracle source database:
[METADATA_MANAGE ]E: ORA-00942: table or view does not exist [1020416]
[METADATA_MANAGE ]W: Cannot execute statement 'select value from v$parameter where name='enable_goldengate_replication''
This issue typically arises because the Oracle account used by Qlik Replicate lacks the necessary permissions to access the v$parameter view.
Required Permission:
To resolve this, the following permission needs to be granted to the Oracle account. In the sample below, the account name is "FLONLYUSER":
GRANT SELECT ON V_$PARAMETER TO FLONLYUSER;
In some environments, DBAs may be unwilling to grant this permission. Without it, the full load task still completes successfully, but the error/warning messages persist.
To resolve these messages, you can create a table under the Oracle account as follows. This example uses the "FLONLYUSER" account to connect to the Oracle source database:
CREATE TABLE FLONLYUSER.V$PARAMETER AS SELECT * FROM V$PARAMETER;
Here:
Explanation: When a table and a system view share the same name within the same account, the table takes precedence. Thus, Qlik Replicate will query the FLONLYUSER.v$parameter table instead of the v$parameter system view, eliminating the error/warning messages.
#00154133
In Qlik Replicate when manipulating datetime data on the transformation page, the microsecond precision is truncated to the first three digits after the transformation is performed.
Qlik Replicate utilizes SQLite syntax, which is a limitation here because SQLite only supports fractional seconds up to SS.SSS. Running the same expression in an SQLite command prompt shows the same result:
sqlite> select strftime('%Y-%m-%d %H:%M:%f','2024-06-27 12:00:00.123456','+8 hours');
2024-06-27 20:00:00.123
After manipulating the datetime, you can treat the source value as a string and re-append its microseconds using a combination of SQLite functions. Note that the second substr must read from the original value, because strftime alone truncates to milliseconds:
substr(strftime('%Y-%m-%d %H:%M:%f', $TS, '+8 hours'), 1, 19) || substr($TS, 20)
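The truncation and the workaround can be reproduced with Python's built-in sqlite3 module (a sketch: the timestamp literal stands in for $TS; the second substr reads the original string value, since strftime alone keeps only milliseconds):

```python
import sqlite3

# A sample source timestamp standing in for $TS (microsecond precision)
ts = '2024-06-27 12:00:00.123456'
conn = sqlite3.connect(':memory:')

# Plain strftime: %f keeps only milliseconds (SS.SSS)
truncated, = conn.execute(
    "SELECT strftime('%Y-%m-%d %H:%M:%f', ?, '+8 hours')", (ts,)).fetchone()
print(truncated)  # 2024-06-27 20:00:00.123

# Workaround: shift the date/time part, then re-append the fractional
# seconds taken from the original string (position 20 onward: '.123456')
full, = conn.execute(
    "SELECT substr(strftime('%Y-%m-%d %H:%M:%f', ?1, '+8 hours'), 1, 19)"
    " || substr(?1, 20)", (ts,)).fetchone()
print(full)  # 2024-06-27 20:00:00.123456
```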
When using the Qlik REST connector to load data from API sources that return a JSON or XML response, a huge data model with many tables is sometimes returned even though the response contains only a single table.
For example, the following JSON message is parsed into 5 single-row tables instead of one table with 5 rows:
Environment:
The JSON response has a "nested" structure, i.e. each data record is stored as a separate JSON object rather than as a member of an array.
Qlik REST connector applies a standard strategy for parsing JSON response:
As a result, a nested JSON response is parsed as multiple tables instead of a single flat table. The same strategy can be seen in other JSON parsing tools, such as http://json2table.com/.
The same logic applies to nested XML responses.
The Qlik REST connector does not yet support loading JSON/XML responses with a nested structure. There is an ongoing improvement request to support this structure in future releases of the connector.
Meanwhile, please consider the following workaround:
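One way to work around the limitation is to pre-process the response so the keyed objects become members of a single array before the connector (or script) parses them. A minimal sketch, assuming a hypothetical response shape (the "records" key and field names are made up for illustration):

```python
import json

# Hypothetical nested response: each record is a separate JSON object
# keyed by an id, rather than a member of an array
nested = json.loads("""
{
  "records": {
    "r1": {"id": 1, "name": "alpha"},
    "r2": {"id": 2, "name": "beta"},
    "r3": {"id": 3, "name": "gamma"}
  }
}
""")

# Restructure the keyed objects into one array, so a standard JSON
# parser sees a single flat table instead of one table per record
flat = list(nested["records"].values())
print(flat)
```

In practice this restructuring would be done in an intermediate script or service between the API and the REST connector.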
When changing the name of a dataset, the source name stays the same. This can be seen by uploading a "firstname.qvd" and renaming it to "secondname.qvd".
The dataset's detail will show "firstname.qvd" as the source.
As a consequence:
Trying to load “FROM [lib://DataFiles/secondname.qvd]” will produce a "(Connector error: File not found)" error.
This is not a defect; it is how the product is designed.
A dataset represents a data resource with its properties, such as name. The benefit is that you can use more user-friendly dataset names without having to change the source names, which can be useful when the dataset points to a database table, for instance.
In the future, there is a plan to add the possibility of calling the name of the datasets in the script.
When extracting data from AWS DynamoDB using the ODBC connector, the following IAM policy can be configured for an IAM user to pull data from only one table into Qlik (i.e. to see only the expected table in the "Select data" UI) instead of granting access to all AWS DynamoDB tables.
{
"Statement": [
{
"Action": [
"dynamodb:Scan",
"dynamodb:ListTables",
"dynamodb:DescribeTable"
],
"Effect": "Allow",
"Resource": "arn:aws:dynamodb:eu-west-2:*:table/my-table-name",
"Sid": "EcrDynamoDBReadAccess"
}
],
"Version": "2012-10-17"
}
However, with the above IAM policy, the Select data view is empty and the following error is displayed:
error "(Connector error: ERROR [42S02] [Simba][SQLEngine] (31740) Table or view not found: ..my-table-name)"
When access is given through the DynamoDB console or the AWS API, the above IAM policy is valid from the AWS DynamoDB perspective. However, the ODBC and JDBC drivers need access to fetch metadata, so ListTables permission is required. As ListTables is read-only access, granting it should not be an issue.
Use the following reviewed IAM policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListTablesAccess",
"Effect": "Allow",
"Action": [
"dynamodb:ListTables"
],
"Resource": "*"
},
{
"Sid": "SpecificTable",
"Effect": "Allow",
"Action": [
"dynamodb:DescribeTable",
"dynamodb:Get*",
"dynamodb:Query",
"dynamodb:Scan"
],
"Resource": "arn:aws:dynamodb:*:*:table/<MyTable>"
}
]
}
Information provided on this defect is given as is at the time of documenting. For up to date information, please review the most recent Release Notes, or contact support with the ID QB-27013 for reference.
Product Defect ID: QB-27013
Qlik NPrinting and Qlik Sense are installed on Azure cloud machines. The configuration meets all the requirements; in particular, the NPrinting Engine user is present on both the NPrinting and Sense servers with the same domain and SID.
The Metadata reload test fails with a "Not a domain user" message. On the other hand, the Metadata reload succeeds when launched (ignoring the Test error), even though the NPrinting Engine logs show these error and warning messages:
Engine.Navigator.QlikSense.SDK.QlikSenseDiagnose 20231128T103337.642+01:00 ERROR NP-SERVER _NAME 0 0 0 0 0 0 0 0 PerformDiagnosis found a problem. ERROR: System.Exception: Not a domain User : Domain\NPUser↓↓ at Engine.Navigator.QlikSense.SDK.QlikSenseDiagnose.<>c__DisplayClass8_0.<PerformDiagnosis>b__3() in C:\Jws\release-may2023-SwCB9Sd4b\server\NPrinting\src\Engine.Navigator.QlikSense.SDK\QlikSenseDiagnose.cs:line 90↓↓ at Engine.Navigator.QlikSense.SDK.QlikSenseDiagnose.GetStep(DiagnoseStep step, Action stepCode) in C:\Jws\release-may2023-SwCB9Sd4b\server\NPrinting\src\Engine.Navigator.QlikSense.SDK\QlikSenseDiagnose.cs:line 40
Engine.Navigator.QlikSense.SDK 23.20.5.0 Engine.Navigator.QlikSense.SDK.QRSApi
20231128T103350.840+01:00 WARN NP-SERVER _NAME 0 0 0 0 0 0 0 0 Domain user check failed for Domain\NPUser. ERROR: System.Runtime.InteropServices.COMException (0x8007200A): The specified directory service attribute or value does not exist.↓↓↓↓ at System.DirectoryServices.DirectoryEntry.Bind(Boolean throwIfFail)↓↓ at System.DirectoryServices.DirectoryEntry.Bind()↓↓ at System.DirectoryServices.DirectoryEntry.get_SchemaEntry()↓↓
Ignore the error message and proceed with the metadata reload.
According to the current analysis, the error message is shown because Azure does not organize users and permissions the way on-premises Windows servers do. NPrinting does not receive the expected answers from Azure AD Connect and interprets this as missing access levels in Azure during the connection tests.
Conversely, when the environment is correctly configured, the NPrinting Engine user has access to the Qlik Sense applications, so the metadata reload and the task executions complete successfully in the end.
This issue can occur in non-Azure environments as well. Proceed with the same solution.
Qlik Sense can process a maximum of 1,048,576 (2^20) characters per row when loading data from a CSV file. If a row in the source CSV file is longer than this limit, Qlik Sense automatically breaks it into multiple rows in the loaded table.
This doesn't happen when loading another file format (like XML) or loading the same CSV file in QlikView.
To increase the maximum length, set the LongestPossibleLine parameter in the Qlik Sense Engine's Settings.ini file to a value higher than 1048576.
See How to modify Qlik Sense Engine's Settings.ini for detailed instructions of changing parameters in Settings.ini.
The Qlik Sense engine supports line lengths up to 512 megabytes (512*1024*1024). A script reload can handle strings up to this length in a single data cell. However, when using the data selection wizard, such a long string may break the WebSocket. Therefore, the maximum string length is limited to 1,048,576 characters by default to avoid this WebSocket issue.
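For reference, the relevant Settings.ini entry might look like the following (a sketch only; the [Settings 7] section header and the doubled value are assumptions — follow the linked article for the exact procedure):

```ini
[Settings 7]
; Raise the per-row character limit from the default 1048576 (2^20)
LongestPossibleLine=2097152
```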
A Qlik Replicate Child task being resumed using SCN fails with:
Cannot start processing from timestamp '2024-04-25 03:40:00.000000', the logstream only contains records with timestamp greater than or equal to '2024-04-25 08:45:02.000000'
Sometimes a specific JVM version is needed rather than the one shipped with Replicate; for example, a known vulnerability in the JVM may require upgrading the existing JVM to a higher version. However, the new jvm folder may not contain two required security configuration files, causing Replicate to generate the following warning message:
JVM security configuration directory is missing or not a directory; unable to set the Java security policy
This warning message is reported by Replicate because the following two security configuration files are missing:
To resolve this issue, copy these two files from your backup or from another Replicate server into the <Replicate folder>\jvm\conf\security folder.
#00163870
Due to a limitation in the MS-CDC source endpoint, DDL changes on the source must be handled manually.
Table-level DDLs are not supported. When a table DDL is encountered, the DDL will be transferred to the target and the table will be suspended to allow the CT table to be manually aligned.
The following steps show how to handle a DDL change on a table.
This alternate DDL change handling applies to MS-CDC endpoint tasks only; it is not needed for a standard MS SQL endpoint.
1. Stop the Qlik Replicate task
2. Disable ms-cdc for the table in the source database
3. Modify the source table
4. Modify the target table
5. Start the task with metadata only run (Create missing tables and then stop). This will refresh the internal metadata without losing position.
6. Enable ms-cdc for the table (if task is not set to do it automatically)
7. Resume the task
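The disable/modify/enable steps above can be sketched in T-SQL using SQL Server's documented CDC procedures (illustrative only — the schema, table, column, and capture-instance names are assumptions; your capture instance name may differ):

```sql
-- Step 2: disable CDC for the table (capture instance name is an assumption)
EXEC sys.sp_cdc_disable_table
    @source_schema    = N'dbo',
    @source_name      = N'MyTable',
    @capture_instance = N'dbo_MyTable';

-- Step 3: modify the source table (example DDL change)
ALTER TABLE dbo.MyTable ADD new_col INT NULL;

-- Step 6: re-enable CDC for the table so a new CT table is created
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'MyTable',
    @role_name     = NULL;
```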
Qlik Replicate
SQL Server MS-CDC
Limitations and considerations
Qlik Replicate: The metadata for source table 'table name' is different than the corresponding MS-CDC Change Table
If applying the R2024-05v2 or R2024-06 monthly patch on your CICD instance and using Jenkins, many or all builds may start to fail without warning. The P2 may report that it was unable to locate some of the component jars in the local .m2 repository, with the folder location mentioned being "${user.HOME}/.m2/repository" instead of the local repository defined in the settings file. Even when using "-s", "-gs", and/or attempting to force the plugin to use a specific local repository, it will always override the location with ${user.HOME} and fail to run properly.
This issue appears to affect only environments that use OpenJDK Zulu 17 at this time; instances that use Oracle JDK 17 or other OpenJDK distributions do not appear to be affected.
If customers have not upgraded to the R2024-05 monthly patch, we suggest continuing to use the R2024-04 monthly patch and the 8.0.15 "builder-maven-plugin". If the upgrade has already happened or is in progress, customers should check whether they can swap to a different OpenJDK distribution (such as AdoptOpenJDK) or use Oracle JDK for the time being, until the issue has been fixed.
Currently, Qlik is on track to remediate the issue with a change to the plugins themselves, with a targeted date (at this time) of the July 2024 Monthly Patch.
Due to some changes in the newer versions of Jenkins (both LTS and the nightly versions), the plugins used with Jenkins remove some variables that need to be passed to Maven and the P2 itself. Additionally, the Zulu JDK has a problem with how the variables are passed from Jenkins to the Java instance itself.
For additional questions or concerns, please reach out to Talend Support on this issue, and reference this internal defect ID, TUP-43304.
Parallel Load is often used to accelerate the replication of large tables by splitting the table into segments. Primary Key (or index) columns are not mandatory for Parallel Load; any column(s) can be used as a segment column as long as it can divide the data into segments. However, ROWID can sometimes be more efficient if the table has no Primary Key, no other indexes, and no other columns that can easily split the table rows.
Currently, the Oracle data types ROWID/UROWID are not supported in major versions of Qlik Replicate. As a result, columns with these data types are not visible in the Qlik Replicate GUI, making it impossible to use ROWID directly in the Parallel Load design window.
This article provides a guide on how to utilize ROWID in Parallel Load.
CREATE VIEW scott.kitv1000rowid AS SELECT rowid||'' AS row_id, id, name, notes FROM scott.kitnopk1000 ORDER BY row_id;
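Once the view exposes ROWID as a plain character column (row_id), it behaves like any other column and can be used to pick Parallel Load segment boundaries. A hypothetical boundary query (the row count per segment, 250 here, is an assumption for illustration):

```sql
-- Illustrative only: list candidate segment boundary values on the view,
-- one boundary every 250 rows ordered by row_id
SELECT row_id
FROM (SELECT row_id, ROW_NUMBER() OVER (ORDER BY row_id) AS rn
      FROM scott.kitv1000rowid)
WHERE MOD(rn, 250) = 0;
```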
The Hadoop target endpoint (Hortonworks Data Platform (HDP)) functioned correctly in Replicate 2022.11. However, after upgrading to 2023.11, the "Test Connection" operation fails, displaying the error "UUID not found." Consequently, we are unable to test the connection successfully. Additionally, the Replicate Server restarts unexpectedly each time the endpoint ping test is performed.
The error message displayed in the Manage Endpoint Connections window is as follows:
Note: if "Hive access" in the endpoint is temporarily disabled (set to "No Access"), the test connection succeeds.
export LD_LIBRARY_PATH=/lib64:/opt/attunity/replicate/lib:/opt/cloudera/hiveodbc/lib/64:$LD_LIBRARY_PATH
$ cd /opt/attunity/replicate/bin
$ source arep_login.sh
$ ./areplicate start
#00166187, #00155938, Jira QB-2724
Due to an issue with MySQL ODBC Driver 8.0.027 to 8.0.033, empty TEXT columns may not be replicated correctly during Full Load.
Qlik Replicate
MySQL ODBC Driver 8.0.027 to 8.0.033
Empty TEXT columns may not be replicated correctly during Full Load. For example, if one table row contains a TEXT column with a value and the same column in the next row contains an empty value (but not NULL), both rows will display the value of the first row on the target. See Limitations and considerations | Qlik Replicate Help.
When the source table has a text column that is blank (not null) the data from the previous row that had data is duplicated.
As an example, if the source has the following 3 rows:
1 'SOME DATA'
2 null
3 ''   <-- empty string, not NULL
Then the output rows will contain the following:
1 'SOME DATA'
2 null
3 'SOME DATA'
To resolve the issue, downgrade to MySQL ODBC 8.0.026.
Alternatively, the ODBC parameter no_ssps can be used to resolve the issue by setting no_ssps=1. Until MySQL resolves the issue, the parameter should be set on Full Load ODBC connections. See 5.2 Connector/ODBC Connection Parameters | dev.mysql.com.