After an upgrade to Qlik Sense Enterprise on Windows May 2023 patch 3 or later, REST connections fail with the following error:
"WITH CONNECTION" is not allowed. To allow "WITH CONNECTION", select Allow "WITH CONNECTION" in the connector settings.
The Allow WITH CONNECTION option is not exposed in the REST Connector GUI and cannot be changed.
Qlik Sense Enterprise on Windows August 2023 and later releases are not affected.
To resolve the issue, upgrade to August 2023.
To manually mitigate the issue in the May release branch, replace the qsdfw_qsefw_qlikview.qliksourceconfig file. Qlik is investigating a solution that does not require manual intervention.
Back up the affected file before continuing with the workaround. Do not keep the backup in the same folder as the original (see end notes and the PowerShell sketch below).
End notes:
The folder C:\Program Files\Common Files\Qlik\Custom Data\QvRestConnector can only contain one qliksourceconfig file, with the correct name: qsdfw_qsefw_qlikview.qliksourceconfig
If there are two qliksourceconfig files (even with different names, such as a backup), the first one in alphabetical order will be chosen.
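A minimal PowerShell sketch for taking the backup outside the connector folder; the destination path C:\QlikBackup is only an example:
# Create a backup folder outside the connector directory (path is an example)
New-Item -ItemType Directory -Path 'C:\QlikBackup' -Force | Out-Null
# Copy the original file there, so QvRestConnector keeps exactly one .qliksourceconfig file
Copy-Item 'C:\Program Files\Common Files\Qlik\Custom Data\QvRestConnector\qsdfw_qsefw_qlikview.qliksourceconfig' -Destination 'C:\QlikBackup\'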
Product Defect ID: QB-20735
Qlik Sense Enterprise on Windows May 2023 Patch 3 and later
Does not affect August 2023 and later releases.
Some connectors require an encryption key before you create or edit a connection. Failing to generate a key will result in:
Error retrieving the URL to authenticate: ENCRYPTION_KEY_MISSING - you must manually set an encryption key before creating new connections.
Qlik Sense Desktop February 2022 and onwards
Qlik Sense Enterprise on Windows February 2022 and onwards
all Qlik Web Storage Provider Connectors
Google Drive and Spreadsheets Metadata
PowerShell demo on how to generate a key:
# Generates a 32 character base 64 encoded string based on a random 24 byte encryption key
function Get-Base64EncodedEncryptionKey {
$bytes = new-object 'System.Byte[]' (24)
(new-object System.Security.Cryptography.RNGCryptoServiceProvider).GetBytes($bytes)
[System.Convert]::ToBase64String($bytes)
}
$key = Get-Base64EncodedEncryptionKey
Write-Output "Get-Base64EncodedEncryptionKey: ""${key}"", Length: $($key.Length)"
Example output:
Get-Base64EncodedEncryptionKey: "muICTp4TwWZnQNCmM6CEj4gzASoA+7xB", Length: 32
This command must be run by the same user that is running the Qlik Sense Engine Service (Engine.exe). For Qlik Sense Desktop, this should be the currently logged-in user.
Do the following:
Open a command prompt and navigate to the directory containing the connector .exe file. For example:
"cd C:\Program Files\Common Files\Qlik\Custom Data\QvWebStorageProviderConnectorPackage"
Run the following command:
QvWebStorageProviderConnectorPackage.exe /key {key}
Where {key} is the key you generated. For example, if you used the OpenSSL command, the full command might look like: QvWebStorageProviderConnectorPackage.exe /key zmn72XnySfDjqUMXa9ScHaeJcaKRZYF9w3P6yYRr
You will receive a confirmation message:
Info: Set key. New key id=qseow_prm_custom.
Info: key set successfully!
The {sense service user} must be the name of the Windows account which is running your Qlik Sense Engine Service. You can see this in the Windows Services manager. In this example, the user is: MYCOMPANY\senseserver.
Do the following:
Open a command prompt and run:
runas /user:{sense service user} cmd. For example: runas /user:MYCOMPANY\senseserver cmd
Run the following two commands to switch to the directory containing the connectors and then set the key:
"cd C:\Program Files\Common Files\Qlik\Custom Data\QvWebStorageProviderConnectorPackage"
QvWebStorageProviderConnectorPackage.exe /key {key}
Where {key} is the key you generated. For example, if you used the OpenSSL command, the full command might look like: QvWebStorageProviderConnectorPackage.exe /key zmn72XnySfDjqUMXa9ScHaeJcaKRZYF9w3P6yYRr
You should repeat this step, using the same key, on each node in the multinode environment.
Encryption keys will be stored in: "C:\Users\{sense service user}\AppData\Roaming\Qlik\QwcKeys\"
For example, encryption keys will be stored in "C:\Users\QvService\AppData\Roaming\Qlik\QwcKeys\"
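To verify that a key was written, you can list the contents of that folder from the command prompt (a simple check using the same path as above):
dir "C:\Users\{sense service user}\AppData\Roaming\Qlik\QwcKeys"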
Note:
It is important to run the command prompt as the Qlik Sense service account and to make sure that account has access to all the required folders and files.
This security requirement came into effect in February 2022. Old connections made before then will still work, but you will not be able to edit them. If you try to create or edit a connection that needs a key, you will receive an error message: Error retrieving the URL to authenticate: ENCRYPTION_KEY_MISSING - you must manually set an encryption key before creating new connections.
Most Qlik Web Storage Provider Connectors require an encryption key. If no key exists, the following error will be displayed when authenticating:
Error retrieving the URL to authenticate: ENCRYPTION_KEY_MISSING - you must manually set an encryption key before creating new connections.
General information, as well as a list of which connectors require an encryption key, can be found in Setting an encryption key | Qlik Connectors Help.
The actual generation of a key depends on your organization's best practices, but we can provide you with an example of how to go about it with OpenSSL:
The below example is not supported by Qlik Support. Review it with your local security office to see what method your organization follows.
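A minimal sketch, assuming OpenSSL is installed and on your PATH: the command below generates 24 random bytes and base64-encodes them, producing a 32-character string like the example output above.
openssl rand -base64 24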
Follow the instructions on how to set the encryption key in Qlik Sense Enterprise on Windows and/or Qlik Sense Desktop:
Setting an encryption key on Qlik Sense Enterprise on Windows
Setting an encryption key on Qlik Sense Desktop
Setting an encryption key | Qlik Connectors Help
The following error can occur when using the SAP BW Connector to extract data from an InfoProvider/CompositeProvider.
--------
[ERR] [QVX] RfcAbapException:JOB_CANCELLED
Job aborted, check log for Job /QTQVC/READ_MPDATA in Job Overview (transaction SM37)
---------
Reported bug QB-18952 was closed by R&D as WAD (working as designed).
Information provided on this defect is given as is at the time of documenting. For up to date information, please review the most recent Release Notes, or contact support with the ID QB-18952 for reference.
On the SAP system side, the allowed maximum width is 8192 characters.
If a field in the InfoProvider is wider than that (for example, 15282 characters), a simple workaround is to split the extraction into two or more extractions. Then, in Qlik Sense or QlikView, combine them with JOIN / CONCATENATE, depending on your data model (see the sketch below).
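A minimal Qlik load script sketch of the JOIN approach; the table names (InfoProviderExtract1/2) and field names (KEY_FIELD, WIDE_FIELD_PART1/2) are hypothetical placeholders for your own two extractions:
Part1:
LOAD KEY_FIELD, WIDE_FIELD_PART1;
SELECT KEY_FIELD, WIDE_FIELD_PART1 FROM InfoProviderExtract1;

JOIN (Part1)
LOAD KEY_FIELD, WIDE_FIELD_PART2;
SELECT KEY_FIELD, WIDE_FIELD_PART2 FROM InfoProviderExtract2;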
Product Defect ID: QB-18952
The native Qlik PostgreSQL connector (ODBC package) does not import strings with more than 255 characters. Such strings are simply cut off; the remaining characters are not shown in the applications, and no warnings are thrown during script execution.
The problem affects the Qlik connector, but not all DSN drivers.
Qlik Sense February 2023 and higher versions
Qlik Cloud
When TextAsLongVarchar is set and Max String Length is set to 4096, 4096 characters are loaded.
Note that there are still limitations to this functionality related to the data type used in the database. Data types like text[] are currently not supported by Simba and are affected by the 255-character limitation even when the TextAsLongVarchar parameter is applied.
Qlik has opened an improvement request with Simba to support them.
As a workaround, it is possible to test a custom connector using a DSN driver to handle these data types.
QB-21497
If Oracle is configured with a Physical Standby, and the Oracle LOG_ARCHIVE_DEST_n initialization parameter is configured with the DELAY option, running a Full Load and Apply Changes task on an Oracle Physical Standby instance may result in missing or erroneous data in the target database.
The following warning may be thrown in the log files:
Using standby database could have caused the missing events problem during Full Load and CDC synchronization.
If you do not have the DELAY parameter configured and there is no data loss, this warning message can be ignored. Please check with your DBA if the DELAY parameter is set.
This issue can be resolved using an internal parameter, standbyDelayTime, which should be set to a value slightly larger than the DELAY value of the LOG_ARCHIVE_DEST_n initialization parameter. Units are minutes. For example, with LOG_ARCHIVE_DEST_2 = 'DELAY=10', set standbyDelayTime to a value such as 11.
For information on how to set internal parameters, see Qlik Replicate: How to set Internal Parameters and what are they for?
Note that this solution may lead to redundant records for a table without a PK/UI if there were INSERT operations during that period.
It may be necessary to remove line feeds (LF) and carriage returns (CR) from BLOB or CLOB datatype fields. However, Qlik Replicate tasks have a limitation on LOB fields preventing functions from being used on them.
Use a source lookup to retrieve the field value, bypassing the inherent data type.
The general form of the function is as follows:
source_lookup(TTL,'SCHM','TBL','EXP','COND',COND_PARAMS)
Use of the standard replace chars function:
replaceChars(X,Y,Z)
Combined usage of both functions in the transformation expression:
replaceChars(source_lookup(1,'DBO','table','field','id=?',$id),X'0A','|')
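To strip both carriage returns and line feeds outright instead of replacing them with a pipe, the two calls can be nested; this is a sketch using the same hypothetical schema, table, and field names:
replaceChars(replaceChars(source_lookup(1,'DBO','table','field','id=?',$id),X'0D',''),X'0A','')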
Qlik Replicate Transformation: Source Lookup - CLOB datatype - limitations on string functions
Data Enrichment functions
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
Qlik Sense or QlikView may not load all data from a CSV file.
Sometimes a special character is used to mark the end of the file. If this character is found in the middle of a text file (for example, enclosed by quotation marks), the file may not load correctly. In this instance, we can modify how Qlik Sense and QlikView read the data (see the example script below).
Qlik Sense:
QlikView:
The cause is a problem with quotes or special characters in the file. See the help site: File Wizard: Type
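A minimal load-script sketch, assuming a comma-delimited UTF-8 file with embedded labels; the library path and format options are placeholders for your own file. The no eof flag tells Qlik to disregard end-of-file characters embedded in the data:
Data:
LOAD *
FROM [lib://DataFiles/example.csv]
(txt, utf8, embedded labels, delimiter is ',', no eof);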
Qlik Sense all supported versions, including Qlik Cloud
QlikView all supported versions
After an upgrade of Qlik Replicate to May 2023 (2023.5.0.322), Databricks tasks fail with the following error:
[METADATA_MANAGE ]E: The host configured in the proxy settings is not valid [1020401] (cloud_imp.c:2138)
[METADATA_MANAGE ]E: Failed while preparing stream component 'Databricks_AdageAZ'. [1020401] (metadatamanager.c:855)
[METADATA_MANAGE ]E: Cannot create the target utility component [1020401] (metadatamanager.c:760)
[TASK_MANAGER ]E: Creating Metadata Manager's utility components failed [1020401] (replicationtask.c:3896)
[TASK_MANAGER ]W: Task 'Adage_AZ_Apply_Changes_All' encountered a fatal error (repository.c:6064)
An upgrade of Qlik Replicate may set the proxy settings in the Databricks Lakehouse (Delta) endpoint, even if no proxy was previously used. Disable the proxy to resolve the issue.
In the data integration process from a Salesforce source to a Snowflake target, it's crucial to ensure all records are accurately captured. However, sometimes gaps can occur in Change Data Capture (CDC) operations, resulting in missing records.
In this article, we aim to explore such an issue and guide you through the required troubleshooting steps and eventual solution.
One of the source tables in Salesforce is 'WorkOrder', and discrepancies in data capture were observed in the logs. The root of the problem becomes evident when examining two consecutive SELECT queries:
Timestamp: 2023-07-17 13:41:04.251
Query: SELECT count() FROM WorkOrder WHERE SystemModstamp > 2023-07-17T17:24:28.000Z and SystemModstamp < 2023-07-17T17:40:04Z
Timestamp: 2023-07-17 13:56:31.411
Query: SELECT count() FROM WorkOrder WHERE SystemModstamp > 2023-07-17T17:41:04.000Z and SystemModstamp < 2023-07-17T17:55:31Z
Upon closer examination, it is evident that the lower bound on the tracking column 'SystemModstamp' in Query 2 ("> 2023-07-17T17:41:04.000Z") does not align with the upper bound of the previous query ("< 2023-07-17T17:40:04Z").
Consequently, any changes within this window (SystemModstamp between 2023-07-17T17:40:04Z and 2023-07-17T17:41:04.000Z) were not captured, leading to missing records.
After a thorough analysis, the root cause of this issue was identified: the transaction consistency timeout of 60 seconds was inadequate. To address this problem effectively, increase the transaction consistency timeout from the default 60 seconds to a value large enough to cover your longest-running transactions.
By following this guidance, you can enhance the accuracy and completeness of your data capture process when loading data from a Salesforce source to a Snowflake target. This proactive approach will help prevent future instances of missing records and maintain the integrity of your data integration pipeline.
The Qlik Compose Monitor UI does not show Inserted Rows or Updated Rows for the data mart even though the rows are inserted and updated in the database.
Upgrade to 2022.5.1032 (2022, May – sp13).
RECOB-7493
Information provided on this defect is given as is at the time of documentation. For up-to-date information, please review the most recent Release Notes with RECOB-7493 for reference.
Qlik Compose 2021.05 - sp11 (2022.5.0.958)
Creating and adding a new target endpoint to Azure SQL DB and testing the connection fails with:
SYS-E-HTTPFAIL, Failed to load dynamic endpoint properties.
SYS,GENERAL_EXCEPTION,Failed to load dynamic endpoint properties,Failed to get 'SQLServer_New' syntax
Review the active Internal Parameters set in the endpoint.
Due to an issue with MySQL ODBC Driver versions 8.0.027 through 8.0.033, empty TEXT columns may not be replicated correctly during Full Load.
When running a full load on a table, the data alignment appears to be off: the target .csv file contains null values even though the source field is defined as NOT NULL.
Qlik Replicate
Aurora Postgres
Verify the source table DDL:
Examining the source table DDL will show a data type with no length specified (varchar NOT NULL).
Example:
CREATE TABLE tran.category (
    id integer NOT NULL,
    name varchar NOT NULL,
    description varchar NOT NULL,
    category varchar NOT NULL,
    is_active boolean NOT NULL DEFAULT true,
    create_employee integer NOT NULL,
    create_datetime timestamp NOT NULL,
    PRIMARY KEY(id)
)
This is a documented limitation. See VARCHAR(n).
VARCHAR without a length (n) is not recognized as a valid data type by target endpoints. Consequently, if a source column data type is set to VARCHAR without an explicit length, Replicate will set a default length of 8000 bytes.
You can change the default by deploying an internal parameter, unboundedVarcharMaxSize, and setting a custom value.
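For example, setting unboundedVarcharMaxSize to 16000 would make Replicate default such columns to 16000 bytes instead of 8000; the value 16000 here is purely illustrative.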
This task needs to capture soft deletes and allow the source table to reuse primary key values without a PK violation on the target.
Setting up this task requires four steps:
Adding the two extra fields to the target-side primary key, together with the NOW timestamp, ensures that each target record is unique even though the source table has a single-field primary key.
This allows the source to reuse a deleted record's primary key without throwing a PK violation on the target. The transformations on the fields set unique values into the additional key segments.
Setting error handling and the apply conflicts handling policy to Upsert Mode (No record found for applying an UPDATE: INSERT the missing target record) normally works by converting the source statement (whether Insert, Update, or Delete) into a Delete followed by an Update. The unique three-segment primary key cannot be found during the delete, which is why the records persist on the target.
Example source table structure:
Note the single field Primary Key MyPK.
In our example, we have added Operation and OperationDateTime as the additional fields.
Example source table records:
In the Table Settings for your task, go to Transform and mark your newly added fields as target side primary keys.
In our example, we have marked Operation and OperationDateTime.
Transformations are added through the Expression Builder, located in the Table Settings. You will modify the fields marked as keys in the previous step, Operation and OperationDateTime.
Operation: set the expression to INSERT, UPDATE or DELETE based on the source transaction.
Transformation function:
CASE
WHEN $AR_H_STREAM_POSITION = ''
THEN "INSERTED"
ELSE operation_indicator("DELETED", "UPDATED", "INSERTED" )
END
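For context on the expression above: $AR_H_STREAM_POSITION is a Replicate header column that is empty during Full Load, so full-load rows are tagged INSERTED; during CDC, operation_indicator() returns DELETED, UPDATED, or INSERTED according to the source operation.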
OperationDateTime: set the expression to the NOW date time.
Transformation function:
Datetime('now')
Show the results of updates and deletes on the target table.
Results in the target table after updates, after deleting source record #6, and after reusing its primary key on the source.
Target table after full load:
Target table after updates and a delete on the source:
Target table after the source primary key is reused and a new record is inserted:
Results show that the target table is acting like an Audit table; all source transactions are stored.
The target table can have multiple records that have the same primary key on the source (MyPk 6).
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
Global expressions are not propagated to all tables.
Defining a soft delete transformation can be done on each individual table as needed. However, if you want to do it for all tables in the task, you will want to use a Global Transformation.
Issue
The issue manifested when defining a Global Expression to perform a soft delete.
The field is supposed to be added automatically to all tables in the task, but it was not: the results appeared random, with some tables receiving the field and others not.
When reviewing the process of defining the expression, we noticed there are two screens that allow you to enter the expression.
When we used both the Transformation Scope / Advanced Options screen and the Transformation Action / Value screen, not all tables got the field.
Example: Transformation Scope, with Advanced Options expanded:
Resolution
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.
Transformation - Operation Indicator - Archive User - Soft Delete
MySQL allows the zero date (and/or zero month) in data rows, which is a specific behaviour of MySQL that does not exist in most other RDBMSs. This behaviour causes invalid date errors when replicating data to heterogeneous databases.
In this article, we will explain the solutions in both Full Load and CDC stages.
To handle zero date in Qlik Replicate for both Full Load and CDC stages:
MySQL provides an ODBC parameter, ZERO_DATE_TO_MIN, to control this behaviour. Please note that MySQL adds one day to zero date values.
Based on this setting, the Full Load stage gets a non-zero date. For example, the source value 2022-12-00 in MySQL will become 2022-12-01 in the target endpoint after the Full Load is finished.
The Full Load steps have no impact on the CDC stage: the source value 2022-12-00 will still be 2022-12-00 in the CDC stage. To mitigate this, we need to create a transformation that replaces the zero date 00 with 01, as we do in the Full Load stage.
The simple global transformation expression can be:
date(substr($AR_M_SOURCE_COLUMN_DATA,1,8)||replace(substr($AR_M_SOURCE_COLUMN_DATA,9,2),'00','01'))
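For example, for a source value of 2022-12-00, substr($AR_M_SOURCE_COLUMN_DATA,1,8) returns '2022-12-' and substr($AR_M_SOURCE_COLUMN_DATA,9,2) returns '00', which replace() turns into '01'; the expression therefore yields 2022-12-01.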
For more information on how to set transformations, see: Starting the Global Transformation Rules wizard.
Transformation: Date Time - Invalid Value - MySQL
Qlik Replicate all versions
MySQL all versions
Support case #00107317
An Automation fails with the below error when trying to run:
The following error occurred: Connector *********:****** not found (: Requested endpoint could not be provisioned due to failure to acquire a load slot: The user does not have permission to the gateway (DirectAccess-0002) LIB CONNECT TO '*********:******') The error occurred here: LIB CONNECT TO '*********:******'
The user needs at least "CanConsume" on both spaces:
Note that those spaces might be one and the same, but they can also be separate.
Error "DirectAccess-0002" indicates that the User does not have permission to the gateway.
In some scenarios you need to stop and resume a task. However, for a SAP HANA trigger-based CDC task with the option "Use log table" enabled, resuming the task, or starting it from a given timestamp, may produce the following message (with SOURCE_CAPTURE set to Trace):
2021-09-09T22:12:34:881897 [SOURCE_CAPTURE ]T: Resume from 'timestamp:2021-09-09T20:30:00'
2021-09-09T22:12:34:881897 [SOURCE_CAPTURE ]T: Start from timestamp is not supported in the Log-Table mode
The option "Use log table" is enabled by an Internal Parameter in versions 6.6/7.0. In version 2021.5 it's in GUI property like below:
Start the task by source change position, rather than the timestamp. Here is a query to prepare the stream position in the correct format by a timestamp:
;Get the stream position by timestamp
;Before using the query, replace $SCHEMA$ with the artifacts schema where the table "attrep_cdc_log" is located.
;The timestamp format is 'YYYY-MM-DD HH:MM:SS' (Note: the timestamp should be the one on the HANA Server side, not the Replicate server's)
;The range will be from the first (not included) event that happened after the time specified to the last (included) event in the table
SELECT 'V1:'||MIN("$rowid$")||':'||MAX("$rowid$")||':0:'||MIN("$rowid$") AS "STREAM POSITION" from "$SCHEMA$"."attrep_cdc_log" WHERE "CHANGE_EVENT_TIME" >= 'YYYY-MM-DD HH:MM:SS'
A single row will be returned:
V1:670636:686112:0:670333
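For instance, with an artifacts schema named MYSCHEMA (a hypothetical name) and the resume timestamp from the trace above, the query would read:
SELECT 'V1:'||MIN("$rowid$")||':'||MAX("$rowid$")||':0:'||MIN("$rowid$") AS "STREAM POSITION" from "MYSCHEMA"."attrep_cdc_log" WHERE "CHANGE_EVENT_TIME" >= '2021-09-09 20:30:00'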
Then in the Replicate UI, start the task by source change stream position:
Salesforce support case #02260694.
"C:\Program Files\NPrintingServer\Settings\SenseCertificates"
NOTE: Reminder that the NPrinting Engine service domain user account MUST be ROOTADMIN on each Qlik Sense server to which NPrinting connects.
The Qlik NPrinting server target folder for exported Qlik Sense certificates:
"C:\Program Files\NPrintingServer\Settings\SenseCertificates"
The information in this article is provided as-is and to be used at own discretion. Depending on tool(s) used, customization(s), and/or other factors ongoing support on the solution below may not be provided by Qlik Support.