The purpose of this post is to help you install the database drivers necessary to allow your Qlik Data Gateway to communicate with your company's servers once you have completed the Qlik Data Gateway installation itself.
If you are anything like me, perhaps you panicked a bit at the thought of installing the Qlik Data Gateway in a Linux environment. I have a lot of experience with .EXE installations in Windows environments. You know "Next – Next – Next – Finish." But an .RPM file? I had never even seen that extension type before. If you were a Linux connoisseur beforehand, you probably guessed that my image for this post is an homage to the Fedora flavor of Linux. Otherwise, you just thought it was an advertisement for the new "Raiders of the Lost Data" movie.
In any event, by now you have created your first Data Gateway, applied the registration key, completed the setup instructions, and thankfully the command to check your Data Gateway service shows that it is running.
When you go back to the Data Gateway section of the Management Console and do a refresh, your eyes fill with happiness because your brand spanking new Data Movement Gateway shows "Connected."
A lesser person would go celebrate right now. But you've decided to try and connect to a source before doing your happy dance. So, you create a new Data Integration project to the destination of your choice. While you will ultimately have many different data sources, let's imagine that you decide to start with a "SQL Server (Log Based)" connection, as your first source test.
You input the server connection details, but your SQL Server doesn't use the standard port, for security reasons. Eventually, you find information online telling you to enter your server IP followed by a comma and the port number. For example, if your server's IP is 39.30.3.1 and your security port is 12345, you would input "39.30.3.1,12345". Next you input the user and password credentials. Your last step is to choose the database. Easy peasy, lemon squeezy. Right?
You press the "Load databases" button but suddenly a dialog comes up telling you that the Data Gateway can't connect because it can't find a SQL Server driver.
Your heart starts beating quickly, but naturally, as a pro, you remain calm on the outside. Eventually you realize that whether on Windows or Linux, applications have always required drivers to communicate with servers. This is nothing new; we just got excited when we saw that "Connected" message and thought we were done. Upon going back to the setup guide, you realize that there is in fact a link labeled "Setting up Data Gateway – Data Movement source connections."
So, you go ahead and click the link and it takes you to:
Wow, so many sources, and so many additional links to click to ensure the required drivers are in place for the sources your company will need. All the documentation is there, but I know firsthand that it can get a bit overwhelming, especially if Linux isn't your native language, which is the reason for this post.
Obviously every one of you reading this works in an environment that may require different data source connections than the others. Thus, there is no way for me to predict and help with your exact configuration. However, odds are strong that most of you likely require at least: SQL Server, Databricks, Snowflake, Postgres or MySQL, various combinations of them, or perhaps all of them.
As tedious or imposing as it may be, I highly recommend you walk through the documentation for each data source you will need. But thanks to my buddy John Neal, I have attached a Linux shell script that can be executed to configure all five of those data sources for you. Given the many flavors, versions, and configurations of Linux, I can't ensure that it will work for everyone, but at least it is a start for those who want to press an easy button, and for those who, like me, are somewhat or brand new to Linux.
If you choose to take advantage of it, understand that it is only being offered as a help, and is not meant to replace the documentation. To utilize it you will need to do the following (please note that in my examples I have changed to the root user; if you are logged in as a normal user account, you may need to use sudo, "super user do"):
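For reference, here is a minimal sketch of what running it can look like on the gateway machine. The file name install_drivers.sh is a placeholder, not the actual name of the attachment; substitute whatever you saved the script as:

# Make the downloaded script executable (placeholder file name).
chmod +x ./install_drivers.sh

# Run it as root, or prefix with sudo if you are logged in as a normal user.
sudo ./install_drivers.sh

# Review the output and confirm each driver section completed without errors.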
If all went well with the installation, your output should look similar to the following image from my run of the file:
It's almost time to do our happy dance, but let's hold off until we test. In my starting example, I asked you to assume we wanted to test against a "SQL Server (Log Based)" connection. When we left off, it was because we got an error message saying we had no driver while trying to load the list of databases. I will try that again.
Oh no, the heart rate is going up again.
We have successfully installed the Qlik Data Gateway. We have successfully installed the required drivers. Yet, we are getting this new error message. Let's focus on our breathing and try and digest the situation. What could cause our attempt to connect to our data source to timeout? I got it.
It's likely network security. We know what we want to talk to. We know the location. We know the credentials. But our networks aren't always wide open to do the talking. Resolving connectivity/firewall issues may or may not be within your abilities, and if you are like me, you may need to seek the help of your IT/networking team.
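If you want to rule out the basics yourself before opening that ticket, a quick reachability test from the gateway machine can help. This is only a generic sketch; the IP and port are the made-up values from the earlier example, so substitute your own:

# Test whether the database server's port is reachable from the gateway host.
nc -zv 39.30.3.1 12345

# If nc is not installed, bash's built-in /dev/tcp test works as an alternative.
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/39.30.3.1/12345' && echo "port reachable" || echo "port blocked or unreachable"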
When I reached out to my friendly IT guru, here within Qlik, he was able to help me get everything in place so that my Linux server could speak with my database servers, including all of the needed ports.
Once those changes were complete, I was able to test, and sure enough, my data connection succeeded.
Whether or not you do a happy dance, as I did, I hope that this post has helped you get to that sweet smell of success. After all, someone has to be known as the amazing person who got your Qlik Data Gateway going so that others on the Data Engineering team could create all of those lights-out Qlik Cloud Data Integration projects feeding data in near real time to all of those wonderful analytics use cases. Hopefully, with the help of the documentation and this post, that person is you, my friends.
Challenge
One of the things I've long admired about the Qlik Community is its willingness to help each other through this Community site. If you are a Linux guru and are so inclined, I would love to see you share other versions of the shell script that I have started. Maybe your organization is using another flavor/version of Linux and you needed to make a few tweaks to my file. Maybe your organization needed Oracle added and you can tweak my file. Whatever the reason, I sure hope you will give back to the community by sharing all of those tweaks here. Who knows, your tweaks might help someone else do their happy dance. And we all know the world is a better place when more people do their happy dance.
Related Content
Qlik Data Gateway - Data Movement prerequisites and Limitations - https://help.qlik.com/en-US/cloud-services/Subsystems/Hub/Content/Sense_Hub/Gateways/dm-gateway-prerequisites.htm
Setting up the Data Movement gateway - https://help.qlik.com/en-US/cloud-services/Subsystems/Hub/Content/Sense_Hub/Gateways/dm-gateway-setting-up.htm
PS - I created both of the images here using a generative AI solution called MidJourney. I hope they've added to the fun of this post.
Qlik Talend Studio job fails connecting to the Data Stewardship Application in Qlik Talend Cloud with the error below:
tDataStewardshipTaskInput_1 Unable to connect to Talend Data Stewardship.
java.io.IOException: Unable to connect to Talend Data Stewardship.
Caused by: java.net.UnknownHostException: No such host is known (tds.eu.cloud.talend.com)
This error indicates that, at that instant, the Qlik Talend Studio job was unable to resolve the hostname to an IP address; in this case, the host is the Qlik Talend Cloud Data Stewardship endpoint (tds.eu.cloud.talend.com).
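A quick way to check whether name resolution (or general reachability) is the culprit is to test it from the machine running the job. This is a generic sketch; the hostname is taken from the error above, and a corporate proxy may still need to be configured separately:

# Check DNS resolution of the Data Stewardship endpoint from the job server.
nslookup tds.eu.cloud.talend.com

# If resolution succeeds, confirm that HTTPS traffic can reach the endpoint.
curl -sv https://tds.eu.cloud.talend.com -o /dev/null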
The Data Load Editor in Qlik Sense Enterprise on Windows 2025 experiences noticeable performance issues.
Qlik Sense May 2025 SR 6 and higher releases.
A workaround is available. It is viable as long as the Qlik SAP Connector is not in use.
No service restart is required.
SUPPORT-6006
Beginning with Qlik Replicate version 2024.05, a new checkbox was added to Log Stream Staging tasks: Retrieve all source columns on UPDATE.
The option is available under Task Settings (A) > Change Processing > Change Processing Tuning (B).
It is enabled (C) by default.
When Retrieve all source columns on UPDATE is enabled and the source database is Oracle, Qlik Replicate issues an ALTER TABLE statement on any table added to the task to enable supplemental logging on all columns.
For high-transaction tables, enable Supplemental Logging on all columns during off-peak hours manually before adding them to the Qlik Replicate task.
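As a rough illustration of that manual step (the schema, table name, and connection method are placeholders; your DBA may prefer their own tooling), enabling supplemental logging on all columns for a single Oracle table can look like this:

# Connect to the source Oracle database with a suitably privileged account.
# MYSCHEMA.MYTABLE is a placeholder; substitute the real schema and table name.
sqlplus / as sysdba <<'EOF'
ALTER TABLE MYSCHEMA.MYTABLE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
EXIT;
EOF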
In previous Qlik Replicate versions, supplemental logging was not required on all columns and was enabled only on primary key columns. With the new checkbox enabled, supplemental logging must be enabled on all columns.
When any new table is added to the Log Stream Staging task, Qlik Replicate issues an ALTER TABLE command to enable Supplemental Logging on all columns. This command can fail on high-transaction or busy tables in the source Oracle DB.
The information in this article is provided as-is and is to be used at your own discretion. Depending on the tool(s) used, customization(s), and/or other factors, ongoing support for the solution below may not be provided by Qlik Support.
When using Key Pair authentication and creating a new Snowflake connection, you might encounter the following error:
Illegal Argument
The provided private key file (.p8) is not in the correct format, or the key file password is invalid.
To get the actual error, install the SnowSQL utility on the Linux machine where the Qlik Data Movement gateway is installed and try to connect to the same account:
snowsql -a <account> -u <username> --private-key-path <path to file/rsa_key.p8>
This will provide the exact error on why the connectivity is failing and assist in identifying which root cause applies.
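Before (or alongside) the SnowSQL test, it can also help to confirm that the key file itself parses. A minimal sketch, assuming the key sits at ./rsa_key.p8 as in the command above:

# The file should start with a PKCS#8 header such as
# "-----BEGIN ENCRYPTED PRIVATE KEY-----" or "-----BEGIN PRIVATE KEY-----".
head -1 ./rsa_key.p8

# Ask OpenSSL to parse the key; you are prompted for the passphrase if the key
# is encrypted. A clean exit means the file format and password are valid.
openssl pkey -in ./rsa_key.p8 -noout && echo "private key parses OK"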
Starting from Qlik Replicate versions 2024.5 and 2024.11, Microsoft SQL Server 2012 and 2014 are no longer supported. Supported SQL Server versions include 2016, 2017, 2019, and 2022. For up-to-date information, see Support Source Endpoints for your respective version.
Attempting to connect to unsupported versions, both on-premises and cloud, can result in various errors.
Examples of reported errors:
The system view sys.column_encryption_keys is only available starting from SQL Server 2016. Attempting to query this view on earlier versions results in errors.
Reference: sys.column_encryption_keys (Microsoft Docs)
Upgrade your SQL Server instances to a supported version (2016 or later) to ensure compatibility with Qlik Replicate 2024.5 and above.
00375940, 00376089
Qlik Sense connectors are missing from the list of data sources, except for a few REST connectors.
Repair Qlik Sense with the Qlik Sense Setup file (identical version).
Encryption keys:
Encryption keys will be stored in either "C:\Users\{sense service user}\AppData\Roaming\Qlik\QwcKeys\" or "C:\Users\{sense service user}\AppData\Roaming\Qlik\Keys\"
An error occurred / Failed to load connection error message in Qlik Sense - Server Has No Internet
After upgrading Qlik Talend Studio to patch R2025-08 or later, jobs using the tS3Connection or tS3List components fail with the error:
Exception in component tS3List_1
java.lang.IllegalStateException: Connection pool shut down
An additional change with the new SDK version is how the AWS "region" is handled. In Qlik Talend Studio R2025-07 and earlier, the Region and Endpoint field is shown as a dropdown below.
Using the DEFAULT value works for most cases:
With the new SDK 2.x, the region field is less flexible: the region must be explicitly defined for the connection to work. To resolve connectivity issues after upgrading, explicitly define the AWS region in use under the new Region text field:
With the release of R2025-08, Qlik Talend Studio migrated AWS component dependencies from Amazon Web Services SDK version 1.x to SDK version 2.x. This move was prompted by SDK 1.x having reached end of life as of December 31, 2025.
The Amazon DynamoDB, Amazon SQS, and Amazon S3 components were all updated. For the full release notes, see R2025-08 Talend Studio 8.0 - New Features.
When using IBM DB2 for iSeries as a source in Qlik Replicate, the task may report a warning if journal receiver numbers are not continuous.
A typical warning message looks like:
[SOURCE_CAPTURE ]W: Journal entry sequence '2026' was read from journal receiver 'APSUPDB.QSQJRN0118'. The previous entry was read from receiver 'APSUPDB.QSQJRN0116'. Check if a receiver has been detached. (db2i_endpoint_capture.c:1836)
Qlik Replicate reports this condition as a warning only. There is no impact on task execution or data integrity.
This warning can be safely ignored unless accompanied by other errors or abnormal task behavior.
On the IBM DB2 for iSeries side, 'Check if a receiver has been detached' can occur if, for example, the process is holding or locking the journal. This temporarily prevents the system from creating or attaching the next journal receiver. In such cases, a receiver number may be allocated but never successfully created, resulting in a gap in the receiver numbering.
This behavior is normal on IBM i and does not indicate a defect. The system assigns journal receiver numbers, but sequential continuity is not guaranteed. IBM i only guarantees that receiver numbers increase monotonically, not that every number will exist.
00420963, 00423959
When using DB2 LUW (ODBC) as a target and replicating a text column, the following error may be encountered:
[TARGET_APPLY ]T: RetCode: SQL_ERROR SqlState: 42846 NativeError: -461 Message: [IBM][CLI Driver][DB2/AIX64] SQL0461N A value with data type "SYSIBM.LONG VARGRAPHIC" cannot be CAST to type "SYSIBM.TIMESTAMP". SQLSTATE=42846 [1022502] (ar_odbc_stmt.c:2864)
The May 2026 version of Qlik Replicate contains a native DB2 LUW endpoint capable of processing the datatype correctly to prevent the error.
Remove any tables that have multiple Text columns.
SUPPORT-8188
A partial reload fails when Applymap() is used in a load statement that is not part of the partial reload itself.
This affects any partial reloads.
This limitation can block the distribution list import in In-application reporting. When a distribution list is added by uploading a source file, a new section (Distribution List) is automatically generated in the application's load script, and a partial reload for this section starts automatically.
If Applymap() is used anywhere in the application script, the partial reload fails, and the recipient list can't be imported.
This is currently considered expected behavior in Qlik Sense Enterprise on Windows and Qlik Cloud Analytics. There are possible workarounds to address partial reload failures.
If the problem is limited to In-application reporting, it is possible to run a full reload of the application from the hub once the Distribution List section is generated in the app script. Note that this may consume resources that are charged to the tenant.
A more general workaround is to conditionally ensure that certain operations are limited to the partial reload only, using a pattern like this in the script:
if IsPartialReload() then
*****
here the script involving mapping and Applymap()
*****
else
***
the rest of the script
***
endif;
A similar approach is to use the partial reload prefix on the mapping table, for example "mapping add LOAD". It may be necessary to make operations like "Drop Field" conditional in this case, depending on whether the referenced field exists in the given reload context.
Here are two example scripts showing two possible methods.
Method One:
if IsPartialReload() then
Replace Load 'IsPartial' as Status autogenerate 1;
else
Load 'IsNormal' as Status autogenerate 1;
// Load mapping table of country codes:
map1:
mapping LOAD *
Inline [
CCode, Country
Sw, Sweden
Dk, Denmark
No, Norway];
// Load list of salesmen, mapping country code to country
// If the country code is not in the mapping table, put Rest of the world
Salespersons:
LOAD *,
ApplyMap('map1', CCode,'Rest of the world') As Country
Inline [
CCode, Salesperson
Sw, John
Sw, Mary
Sw, Per
Dk, Preben
Dk, Olle
No, Ole
Sf, Risttu
] ;
// We don't need the CCode anymore
Drop Field 'CCode';
endif;
Partial_reload_Data:
Add only LOAD * inline [
Salesperson, CCode
Pierre, Sw
Viggo, Sw ];
Method Two:
if IsPartialReload() then
Replace Load 'IsPartial' as Status autogenerate 1;
else
Load 'IsNormal' as Status autogenerate 1;
end if;
// Load mapping table of country codes:
map1:
mapping add LOAD *
Inline [
CCode, Country
Sw, Sweden
Dk, Denmark
No, Norway
] ;
// Load list of salesmen, mapping country code to country
// If the country code is not in the mapping table, put Rest of the world
Salespersons:
LOAD *,
ApplyMap('map1', CCode,'Rest of the world') As Country
Inline [
CCode, Salesperson
Sw, John
Sw, Mary
Sw, Per
Dk, Preben
Dk, Olle
No, Ole
Sf, Risttu
] ;
// We don't need the CCode anymore
if not IsPartialReload() then
Drop Field 'CCode';
end if;
Partial_reload_Data:
Add only LOAD * inline [
Salesperson, CCode
Pierre, Sw
Viggo, Sw ];
This behavior is due to a known limitation in Qlik Sense Enterprise on Windows and Qlik Cloud Analytics.
QB-5181
When connecting to the Google Drive Spreadsheet connector, some date values are fetched as text.
For example, there is a date table like the following:
In the Data Load editor:
However, in the fetched results, the first two columns are returned as text:
This is working as designed when using the Google Drive and Spreadsheet connector.
Two possible workarounds exist.
SUPPORT-8842
When using SAP HANA as a source in Qlik Replicate, Qlik Replicate does not fully handle the DECIMAL CS_DECIMAL_FLOAT datatype by default. This can lead to a loss of precision during replication.
For example, the value 345.56 in HANA is replicated as 345 to Google BigQuery, a generic File target, or other target endpoints.
Assume we have a table defined as below:
create column table JOHNW.TESTDEC (
ID integer not null primary key,
name varchar(20),
dec1 DECIMAL(38,4) CS_FIXED,
dec2 DECIMAL CS_DECIMAL_FLOAT);
INSERT INTO johnw.testdec VALUES (1,'test',234.45,345.56);
There are two possible solutions.
The first is to create a view in HANA that casts the floating DECIMAL column to a fixed-precision DECIMAL, and replicate the view instead of the table:
CREATE OR REPLACE VIEW johnw.testdec_view2 AS
SELECT
id,
name,
dec1,
CAST(dec2 AS DECIMAL(30,4)) AS dec2
FROM johnw.testdec;
The second is to use the source_lookup function in a transformation to retrieve the column value directly from the source, for example:
source_lookup('NO_CACHING','JOHNW','TESTDEC','DEC2','ID=:1',$ID)
You may combine this with a CAST to enforce the desired precision or any other formatting.
The DEC2 column is not a standard fixed DECIMAL.
Qlik Replicate cannot handle it correctly by default in the current versions.
Some connectors require an encryption key before you create or edit a connection. Failing to generate a key will result in:
Error retrieving the URL to authenticate: ENCRYPTION_KEY_MISSING - you must manually set an encryption key before creating new connections.
Qlik Sense Desktop February 2022 and onwards
Qlik Sense Enterprise on Windows February 2022 and onwards
all Qlik Web Storage Provider Connectors
Google Drive and Spreadsheets Metadata
PowerShell demo on how to generate a key:
# Generates a 32 character base 64 encoded string based on a random 24 byte encryption key
function Get-Base64EncodedEncryptionKey {
$bytes = new-object 'System.Byte[]' (24)
(new-object System.Security.Cryptography.RNGCryptoServiceProvider).GetBytes($bytes)
[System.Convert]::ToBase64String($bytes)
}
$key = Get-Base64EncodedEncryptionKey
Write-Output "Get-Base64EncodedEncryptionKey: ""${key}"", Length: $($key.Length)"
Example output:
Get-Base64EncodedEncryptionKey: "muICTp4TwWZnQNCmM6CEj4gzASoA+7xB", Length: 32
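If you prefer not to use PowerShell, a key of the same shape (24 random bytes, base64 encoded to 32 characters) can be generated with OpenSSL, for example from Git Bash or WSL. This is offered only as an equivalent alternative to the function above:

# Generate 24 random bytes and base64-encode them into a 32-character key.
openssl rand -base64 24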
This command must be run by the same user that is running the Qlik Sense Engine Service (Engine.exe). For Qlik Sense Desktop, this should be the currently logged-in user.
Do the following:
Open a command prompt and navigate to the directory containing the connector .exe file. For example:
"cd C:\Program Files\Common Files\Qlik\Custom Data\QvWebStorageProviderConnectorPackage"
Run the following command:
QvWebStorageProviderConnectorPackage.exe /key {key}
Where {key} is the key you generated. For example, if you used the OpenSSL command, the full command might look like: QvWebStorageProviderConnectorPackage.exe /key zmn72XnySfDjqUMXa9ScHaeJcaKRZYF9w3P6yYRr
You will receive a confirmation message:
Info: Set key. New key id=qseow_prm_custom.
Info: key set successfully!
The {sense service user} must be the name of the Windows account which is running your Qlik Sense Engine Service. You can see this in the Windows Services manager. In this example, the user is: MYCOMPANY\senseserver.
Do the following:
Open a command prompt and run:
runas /user:{sense service user} cmd. For example: runas /user:MYCOMPANY\senseserver cmd
Run the following two commands to switch to the directory containing the connectors and then set the key:
"cd C:\Program Files\Common Files\Qlik\Custom Data\QvWebStorageProviderConnectorPackage"
QvWebStorageProviderConnectorPackage.exe /key {key}
Where {key} is the key you generated. For example, if you used the OpenSSL command, the full command might look like: QvWebStorageProviderConnectorPackage.exe /key zmn72XnySfDjqUMXa9ScHaeJcaKRZYF9w3P6yYRr
You should repeat this step, using the same key, on each node in the multinode environment.
Encryption keys will be stored in: "C:\Users\{sense service user}\AppData\Roaming\Qlik\QwcKeys\"
For example, encryption keys will be stored in "C:\Users\QvService\AppData\Roaming\Qlik\QwcKeys\"
Always run the command prompt while logged in with the Qlik Sense Service Account which is running your Qlik Sense Engine Service and which has access to all the required folders and files.
This security requirement came into effect in February 2022. Old connections made before then will still work, but you will not be able to edit them. If you try to create or edit a connection that needs a key, you will receive an error message: Error retrieving the URL to authenticate: ENCRYPTION_KEY_MISSING - you must manually set an encryption key before creating new connections.
After upgrading to QlikView September 2025 IR, scheduled Publisher reload tasks fail with the following error:
Error: Connector connect error: Bundled QVConnect not found.
The same .qvw document can be reloaded successfully using QlikView Desktop.
The issue is caused by QCB-33101, which has been resolved in QlikView September 2025 SR1. Upgrade to the latest available version.
See QlikView September 2025 Release Notes for details.
QCB-33101
Is it possible to integrate Salesforce Change Data Capture (CDC) with Qlik Talend Studio or Talend Streaming?
Not directly. Salesforce Change Data Capture (CDC) enables real-time, event-driven integration by publishing data changes (create, update, delete, undelete) from Salesforce objects; consuming these events is something Talend Streaming alone does not natively support.
To consume Salesforce CDC events and integrate them with Talend Streaming, use Talend ESB. It can be deployed as an integration layer to receive CDC events and forward them to downstream streaming or messaging systems.
The typical process flow will be as follows:
This architecture enables near real-time data processing while keeping Talend Streaming decoupled from Salesforce-specific connectivity.
Talend ESB leverages Apache Camel, including the Salesforce Camel component, to support CDC-based integrations.
Key capabilities include:
For detailed documentation, see:
To use Talend ESB/Runtime, you must have a Premium or Enterprise subscription. Talend default licenses, such as Data Integration, do not include Talend ESB/Runtime. See Qlik Talend Cloud® Plans and Pricing for Talend Data Integration and ESB pricing details.
If you are uncertain what your subscription includes, contact your Qlik account representative.
To integrate Salesforce CDC with Talend Streaming:
Need more direct help? Contact your Qlik account representative for technical and architecture guidance.
By default, Qlik Replicate reads primary keys from source tables and creates target tables using those same keys. If you want to use an existing view that doesn’t share the same key columns, you can modify the replication process to define matching key columns and adjust the task settings to prevent it from reloading the target table.
In table transformations, use Set Key Columns > Use transformation definition to ensure the key columns match the target view.
However, using a view as the target (instead of a table) results in the following error, because an index cannot be created on a view that is not schema bound:
[TARGET_LOAD ]E: RetCode: SQL_ERROR SqlState: 42000 NativeError: 1939 Message: [Microsoft][ODBC Driver 18 for SQL Server][SQL Server]Cannot create index on view 'PPHTRAN' because the view is not schema bound. Line: 1 Column: -1 [1022502] (ar_odbc_stmt.c:5083)
Target views behave differently from tables, but an internal parameter can be used to trigger a manual query. To achieve this, add the $info.query_syntax.create_index internal parameter and value to the SQL Server target endpoint.
SUPPORT-9276
Qlik Replicate 2025.11.0.285 could not read transaction logs properly for the SQL Server source endpoint, causing the following error:
[SOURCE_CAPTURE ]E: Bad Envelope : Lsn=00695591:01394baa:0009,operation=5,TxnId=0006:91821bb4,Tablename=COMMIT,PageId=0000:00000000,slotId=3,timeStamp=2026-02-25T06:20:03.890,dataLen=0, LCX=99, >Invalid data context / LCX Code encountered for TXN operation. [1020203] (sqlserver_log_processor.c:350) 00001580: 2026-02-25T07:35:09 [SOURCE_CAPTURE ]E: Internal error (specific information not available) [20014]
Upgrade to Qlik Replicate 2025.11.0.437 to resolve the read issue for the transaction logs.
SUPPORT-8946
ErrorCode.11041 occurs when opening up an App
ErrorCode.11043 occurs when creating a Database connection in the data load editor.
Both symptoms correlate with the Qlik Sense system having restricted or no internet connection.
Qlik connectors are cryptographically signed for authenticity verification. The .NET Framework verification procedure used for this signing includes checking OCSP and Certificate Revocation List information, which is fetched from an online resource if the system doesn't have a cached local copy. In environments with restricted, slow, or no internet connection, these requests time out. Because the authenticity check fails, the connector will not run, and the app reload fails.
Edit the .NET Framework's machine.config file and add the following inside the <runtime> section:
<runtime>
  <generatePublisherEvidence enabled="false"/>
</runtime>
If the <runtime> section already contains other settings, keep them and add the new element, for example:
<runtime>
  <OTHER CONFIGURATION="YOUR VALUES">
  <...>
  <generatePublisherEvidence enabled="false"/>
</runtime>
NOTE 1: Changes to machine.config affect all software using the .NET Framework.
NOTE 2: Third-party connectors might be compiled for 32-bit platforms. In that case, repeat the steps above for the 32-bit version of the machine.config file:
C:\Windows\Microsoft.NET\Framework\v4.0.30319\config\machine.config