This section describes the issues that you may encounter when upgrading/migrating to the new version.
If your current Qlik Replicate version is no longer supported, you need to perform two upgrades. First, upgrade to the latest supported version (excluding this one), and then upgrade to this version. If you are unsure what version you need to upgrade to first, contact Qlik Support.
To prevent errors after upgrading, customers with Log Stream tasks running on Replicate 6.4 for Linux should perform the following procedure.
For each of your Log Stream tasks:
Note: Although unlikely, starting the task from a minute before the time it was stopped may result in duplicate records on the target.
To avoid Control Table Namespace conflicts when running multiple tasks, the Control Table Namespace will now be created without the task name and schema name.
In light of the above, before upgrading, customers who have configured the Kafka target endpoint to Publish data schemas to Hortonworks Schema Registry and who have set the Schema compatibility mode to anything other than None, need to disable the existing Replicate Control Table subjects in the Hortonworks schema registry.
If needed, you can change the default Control Table Namespace as follows:
"task_settings": {
"source_settings": {
},
"target_settings": {
"queue_settings": {
"use_custom_message": true,
"message_shape": {
"control_table_namespace": "MyNameSpace"
}
"use_custom_key": true,
"key_shape": {
Save the JSON file and then import it to Replicate using the Import Task toolbar button.
In addition, from Replicate April 2020, the default schema compatibility mode for all Control Table subjects will be None, regardless of how it is defined in the endpoint settings. Should you wish to use the Schema compatibility mode defined in the Kafka endpoint settings, set the setNonCompatibilityForControlTables internal parameter to false.
After upgrading, customers that are using Replicate's self-signed certificate (i.e. instead of their own certificate) should perform the following procedure:
1. Delete all *.pem files from <replicate_data_folder>/ssl/data.
2. Restart the Qlik Replicate Server service.
This will cause Replicate to generate a new self-signed certificate, thereby resolving any certificate trust issues when connecting to Replicate Console.
Note that if you do not perform the above procedure, the following error will be encountered when connecting to Replicate Console:
SYS,GENERAL_EXCEPTION,The underlying connection was closed: Could not
establish trust relationship for the SSL/TLS secure channel.
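On Linux, steps 1 and 2 above can be scripted as follows. This is a minimal sketch only: the data folder path and the service script location are assumptions based on a default installation and should be adjusted to your environment.

# Remove the existing self-signed certificate files
rm <replicate_data_folder>/ssl/data/*.pem
# Example, assuming the default data folder:
# rm /opt/attunity/replicate/data/ssl/data/*.pem
# Stop and start the Replicate service so that a new self-signed certificate is generated
/opt/attunity/replicate/bin/areplicate stop
/opt/attunity/replicate/bin/areplicate start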
When upgrading a Replicate installation with multiple Data folders, only the default Data folder (<Product_Dir>\Data) will be automatically upgraded. The other Data folders need to be updated manually by running the following command:
repuictl.exe -d <data_folder_path> setup install
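For example, to upgrade an additional data folder located at C:\Replicate\Data2 (an illustrative path; substitute your own):

repuictl.exe -d C:\Replicate\Data2 setup install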
If you are using SAP Application or SAP Application (DB) as a source in a Replicate task, you need to upgrade the SAP transports as follows:
1. Stop all tasks that have a SAP Application or a SAP Application (DB) source endpoint.
2. Upgrade to Replicate May 2021.
3. Upgrade the transports as described in the Replicate Help.
4. Restart the tasks.
From this release, the BYTES and BLOB Replicate data types will be mapped to BYTES (base64) on Google Cloud BigQuery instead of STRING. After upgrading, only new tables will be created with the updated mapping. Existing tables will not be affected unless they are reloaded on the target.
If you wish to continue using STRING instead of BYTES, either define a data type transformation or manually change the data type for the affected target columns post-replication.
In previous versions, when using the Microsoft Azure SQL Database endpoint, duplicate keys would be ignored without issuing an error. Starting from this version, an error will be returned when duplicate keys are encountered.
If you prefer duplicate keys to be ignored (the previous behavior), please contact Qlik Support.
Qlik Replicate May 2021 is compatible with Qlik Enterprise Manager May 2021 only.
In previous versions, the build number format for Replicate installation kits was N.N.N.<build number> (e.g. 7.0.0.604). From this version, the following date-based format will be used: YYYY.MM.<build number> (so, for example, a May 2021 kit is numbered 2021.5.<build number>).
This section describes the new and enhanced features introduced in Replicate May 2021.
Newly Supported Source Endpoints
The following source endpoints are now supported:
Newly Supported Target Endpoints
Replicating to a Microsoft Azure SQL Managed Instance is now supported (via the existing Microsoft SQL Server target endpoint).
SAP HANA Log-based CDC
As an alternative to the existing Trigger-based CDC, the SAP HANA source endpoint's new Log-based CDC option now enables changes to be captured directly from encrypted or unencrypted logs. When using Log-based CDC, SAP HANA can also be used as a backend database with the SAP Application (DB) source endpoint.
Depending on their environment and corporate security policies, customers can either provide the encryption root keys manually (suitable for rarely changing encryption root keys) or instruct Replicate to retrieve them automatically during runtime (suitable for frequently changing encryption root keys, but requires the ENCRYPTION_ROOT_KEY_ADMIN permission).
SAP Application (DB) source endpoint enhancements
Starting from this version, the following endpoints can now be used with the SAP Application (DB) source endpoint as backend databases:
Support for Google Cloud BigQuery clustered tables
A Create tables as clustered by primary key option has been added to the Advanced tab of the Google Cloud BigQuery target endpoint. When this option is selected, the target tables will be created as clustered (according to the first four Primary Key columns that support clustering). Clustered tables usually provide significantly faster query performance and can also reduce billing costs.
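For reference, the clustering Replicate applies is conceptually similar to creating a clustered table with Google's bq command-line tool. The following is an illustration only, with hypothetical dataset, table, and column names; Replicate issues the equivalent DDL itself:

# Create a BigQuery table clustered by two columns (names are hypothetical)
bq mk --table --clustering_fields=cust_id,order_id mydataset.orders cust_id:STRING,order_id:INT64,amount:NUMERIC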
Kafka target endpoint enhancements
Users can now set a Subject Name Strategy when publishing to Confluent Schema Registry, and access the Confluent Schema Registry via a proxy server.
Subject name strategy support
Kafka endpoint users can now select a subject name strategy when publishing to Confluent Schema Registry.
The following subject name strategies are available:
Note: The first strategy (Schema and Table Name Strategy) is a proprietary Qlik strategy, while the other three are standard Confluent subject name strategies.
Proxy support
This version introduces support for accessing the Confluent Schema Registry via a proxy server.
Salesforce source endpoint enhancements
The following options have been added to the Advanced tab of the Salesforce source endpoint:
Microsoft Azure SQL Database - Active Directory authentication
Support for connecting to Microsoft Azure SQL Database using Active Directory authentication has been added.
Support for "Start from timestamp" when using the ODBC with CDC source endpoint
In previous versions, the "Start from timestamp" run option was not supported with the ODBC with CDC source endpoint. From this version, the "Start from timestamp" run option is supported if there is a single context column defined in the Change Processing tab, and its type is TIMESTAMP.
Renaming MemSQL target endpoint to SingleStore
To reflect the change to the company name (MemSQL to SingleStore), the MemSQL target endpoint has been renamed to SingleStore.
Handling of Computed Columns when using Microsoft SQL Server-based sources
In previous versions, replication of computed columns from Microsoft SQL Server-based sources (Microsoft SQL Server, Amazon RDS for SQL Server, and Microsoft Azure SQL Managed Instance) was supported in Full Load tasks only. During change processing, any computed columns would be populated with NULL values on the target. This caused issues when the source table column was defined as non-nullable. Consequently, from this version, during change processing, any tables with computed columns will be suspended. If you need to run an Apply Changes and/or Store Changes task that captures changes from tables with computed columns, you should define a transformation to exclude such columns from the task.
Global rules - transformations and filters
This version introduces significant improvements to the global transformations module. Global transformations allow users to manipulate source data and metadata across multiple tables (in the same task) before it reaches the target. To reflect the new global filtering capabilities, the "Global Transformations" feature has been renamed to "Global Rules". Customers can now define multiple transformations and/or filters that will be executed in their predetermined sequence.
Revamped user interface
The user interface has been redesigned to accommodate the new filtering functionality. In addition to being able to define both transformations and filters, users can also set the rule execution sequence (using the Up-Down arrows) and activate/deactivate rules as required.
New global filtering capability
Users can now use the Global Filter Wizard to filter all source records based on column data and/or record attributes. The following filtering options are available:
New transformation: Replace column value
Use the Replace column value transformation to replace the values in the source columns (set in the Transformation scope) with different values in the corresponding target columns.
New metadata variables
The following metadata variables can now be used in global rules:
The following data variables can now be used in global rules:
New "replaceChars(X,Y,Z)" function in the Expression Builder
The new replaceChars(X,Y,Z) function replaces any character in string X that also exists in string Y (characters to be replaced) with Z (replacement characters) in the same position. This is especially useful for removing non-valid characters from paths and file names.
So, for example, specifying replaceChars("abcde","abcd","123") would return 1231e.
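As another illustration, assuming a hypothetical source column named FILE_NAME, the expression replaceChars($FILE_NAME,"\*?","___") would replace each backslash, asterisk, and question mark in the column value with an underscore, producing a string that is safe to use in a file name.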
Renamed wizard screens
In the Global Transformation Wizard, the following screens have been renamed:
Old Name | New Name |
Which Global Transformation? | Transformation Type |
What to transform? | Transformation Scope |
How to transform? | Transformation Action |
New header columns
The following header columns can now be included in transformations:
* Relevant for the following source endpoints only: Oracle, Microsoft SQL Server, IBM DB2 for z/OS, Microsoft Azure SQL Managed Instance, and Amazon RDS for SQL Server.
* The AR_H_DB_COMMIT_TIMESTAMP header effectively replaces the use_backend_local_time_in_ct_table_timestamp internal parameter, which is no longer supported.
Relevant for the IBM DB2 for iSeries endpoint only
Enhancements to the "Apply changes using SQL MERGE" option
Security hardening
Automatic disabling of the passthrough filter
The passthrough filter allows task designers to control SQL statements executed on source tables during replication. From this version, as part of security hardening, customers will need to explicitly authorize the use of passthrough filters if they wish to continue using them.
After upgrading to this version, any tables with passthrough filters in replication tasks will be suspended and a warning will be issued. If you fully trust the replication task designer, you will then be able to re-enable passthrough filters by setting “enable_passthrough_filter” to “true” in the <product_dir>\bin\repctl.cfg file.
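As a sketch only (repctl.cfg contains additional settings that are not shown here, and the exact placement of the entry may differ in your installation), the added line would look like this:

{
    ...
    "enable_passthrough_filter": "true"
}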
For best security it is recommended to avoid using passthrough filters in replication tasks. If you are unsure about what to do, please contact Qlik Support.
Hiding customer data fragments in verbose logging
This version introduces a new configuration parameter that controls whether customer data fragments are written to the log files when the logging level is set to "Verbose" (the most detailed). By default, customer data fragments are not written to the log files, but may sometimes be required in order to troubleshoot esoteric replication issues. In such a case, Qlik Support will provide you with instructions for enabling the new parameter.
Support for LOB Column Replication in UPSERT Error-Handling Mode
In previous Replicate versions, the Apply Conflicts "No record found for applying an UPDATE: Insert the missing target record" error-handling option did not support replication of LOB columns (even when the task's Replicate LOB columns option was enabled). From this version, replication of LOB columns is fully supported when this option is set.
Table reload information in the Change Data Partitions Control Table
In previous versions, when the Change Data Partitioning and Speed partition mode options were enabled, Replicate would add Full Load partition information to the Change Data Partitions Control Table whenever a table was reloaded. From this version, this information will be added to the Change Data Partitions Control Table whenever the Change Data Partitioning option is enabled, without needing to enable the Speed partition mode option as well.
Enhanced Kerberos support
In previous versions, customers who wanted to use Kerberos authentication needed to perform tedious manual workarounds to resolve conflicts between Kerberos artifacts installed on their machines and the Kerberos artifacts installed with Replicate. Starting from this version, Replicate is provided with fully functioning Kerberos libraries and utilities, thereby eliminating the need for such workarounds.
Support for build-specific or environment-specific features
This version introduces support for setting build-specific or environment-specific features. As these features are environment-specific, they do not appear as standard options in the user interface. Consequently, they should only be set if explicitly instructed by Qlik Support or product documentation.
These features can be set by clicking More Options in the following places:
Non-nulling of before-image values when using CDC headers
In previous versions, when defining transformations for replication tasks that store changes (in Change Tables or Audit Tables), transformations that leveraged CDC headers (such as using the User ID header to prefix a "UID" string to user IDs) would always result in a NULL value in the before-image. Now, by default, before-image values will no longer be NULL in such scenarios.
This section provides information about End of Life versions, End of Support features, and deprecated features.
The following target endpoint versions are no longer supported:
The following source endpoint versions are no longer supported:
To optimize the replication process and minimize connectivity issues, support for old driver versions has been discontinued for several endpoints. For information about which drivers are currently supported for a particular endpoint, refer to the "Prerequisites" section for that endpoint in the Replicate Help.
Endpoints
The following endpoint versions will be deprecated in the Replicate November 2021 release:
Internet Explorer 11
Support for Internet Explorer 11 will end with the release of Replicate November 2021.
The following source endpoint versions are now supported:
The following target endpoint versions are now supported:
Process | Description | Ref # |
Connecting to an endpoint using OpenSSL when Replicate is installed on Linux | When clicking Test Connection in the endpoint settings or attempting to run a task, Replicate might not find the OpenSSL Trusted Certificates. In such a case, the following error will be encountered (excerpt): SSL routines:tls_process_server_certificate:certificate verify failed:unable to get local issuer certificate] Workaround: 1. Stop the Replicate service by running the following command: <REPLICATE_INSTALLATION_DIR>/bin/instancename stop Example: /opt/attunity/replicate/bin/areplicate stop 2. Edit the <REPLICATE_INSTALLATION_DIR>/bin/site_arep_login.sh file and set the OpenSSL environment variable to point to the correct Trusted Certificates location. Example: export SSL_CERT_FILE=/etc/pki/tls/cert.pem 3. Start the Replicate service by running the following command: <REPLICATE_INSTALLATION_DIR>/bin/instancename start Example: /opt/attunity/replicate/bin/areplicate start | RECOB-2675 |
Hadoop High Availability Configuration | In a Hadoop High Availability configuration, when a failover occurs during Full Load, the Replicate task does not resume the interrupted load from the Active node. | RECOB-2132 |
Google Cloud BigQuery Target | When a connection error occurs and Replicate recovers the task automatically, the reported number of records replicated during Full Load might sometimes differ from the actual number. | RECOB- 2322 |
Microsoft Azure SQL (MS-CDC) Source Endpoint | The Microsoft Azure SQL (MS-CDC) source endpoint is not currently supported, even though it appears in the list of selectable endpoint types. | N/A |
Component/Process | Description | Ref # |
IBM DB2 for z/OS Source | When a table was suspended during change processing due to failed parsing, numerous decompression warnings would continue to be reported even though the table was suspended. | N/A |
IBM DB2 for z/OS Source | Changes would sometimes not be processed from SAP on IBM DB2 for z/OS tables and the following error would be encountered: DB2z utility (subtype 83) variation 33 (UNIDENTIFIED) at LSN=00D87522D19FE121CA00 was detected for table 'SAPR3'.'MARA'. Operation is ignored (not suspended) based on the endpoint configuration (db2z_endpoint_capture.c:2598) Additional logging was added to assist in troubleshooting the issue. |
2053198 |
IBM DB2 for z/OS Source | When encountering an error caused by the DB2 session being closed by DB2 Manager, the task would fail with a fatal error instead of failing with a recoverable error. | N/A |
IBM DB2 for z/OS Source | When encountering an ODBC problem during Change Capture, the task would stop with a fatal error instead of recovering. | N/A |
IBM DB2 for z/OS Source | When performing a REORG, the following redundant error would sometimes be encountered (excerpt): DB2z utility (subtype 83) variation 33 (UNIDENTIFIED) at LSN <LSN> was detected for table <name> | N/A |
IBM DB2 for z/OS Source | When the user did not have permission to access the SYSLGRNX table, the task would enter an infinite loop. Now, in such a situation, it will switch to binary search. | N/A |
IBM DB2 for z/OS Source | In rare scenarios, the task would fail to start from a timestamp. | 2126911 |
IBM DB2 for z/OS Source | When a failure occurred with row decompression, Replicate would retry the task numerous times, resulting in a large backlog of events. | N/A |
IBM DB2 for z/OS Source to Teradata Target | When a source table contained binary columns, applying changes to Teradata in Transactional Apply mode would result in errors when parsing changes on IBM DB2 for z/OS. | N/A |
IBM DB2 for z/OS Source | Added an option to change the DB2z UDTF log severity level while the task is running. | 2138502 |
SAP HANA Source | Records copied from the trigger table to the log table would not be rolled back (from the log table) in the event of a failure to delete the record from the trigger table. Although rare, this would sometimes cause duplicate records on the target. | 2069250 |
SAP HANA source | Capturing changes from tables with primary keys of type VARBINARY would consume excessive memory and take a long time to complete. | 2134587 |
Google Cloud BigQuery Target | The MERGE command would fail for tables with multiple Primary Keys. | N/A |
Microsoft SQL Server Source | On rare occasions, when the Alternate backup folder and Replicate has file-level access to the backup log files endpoint settings were configured, Change Processing tasks would enter an infinite loop when capturing changes from a Microsoft SQL Server database that was configured to use FILESTREAM. | N/A |
Microsoft SQL Server Source, Transformations | When using the AR_H_USER header column to filter transactions based on User ID, the User ID would not be propagated to the column. As this issue only occurs in the customer's environment, additional logging was added to assist in troubleshooting the cause. | 2047815 |
Microsoft SQL Server Source | In rare situations, when parsing a compressed row with a structure that did not correspond to the current table definition, the task would fail. | 2094674 |
Microsoft SQL Server Source | Added support for device type 9 (Azure storage) when the Select virtual backup device types option is enabled in the endpoint settings. | 2088559 |
Microsoft SQL Server Source | When the Replicate has file-level access to the backup log files option was enabled, attempting to read a transaction log backup that was restored from Commvault would fail with the following message (shown in verbose logging mode): [SOURCE_CAPTURE ]V: SFMB at offset 0x0000000000010000 (sqlserver_drd_mtf_map.c:817) | 2084175 |
Salesforce Source | During Change Processing, changes to tables excluded from the replication task would sometimes prevent the stream position from being updated. This would delay task resumption after recoverable errors. | 2070129 |
Salesforce Source | The connection to Salesforce would reset after a while due to the buffer size being exceeded. The issue was resolved by increasing the buffer size and adding an internal parameter to allow customers to further increase the buffer size if needed. | 2074357 |
SAP Application (DB) | In rare scenarios, when replicating from the BSEG cluster table, the task would fail with the following recoverable error: [STREAM_COMPONENT ]E: Too many 'After Image' entries in BLOB's sequence | 2034536 |
SAP Application (DB) Source over Oracle 19c | The following error would occasionally be encountered when parsing the online redo logs (excerpt): The field 'MANDT' doesn't exist in the CDC record for table 'R4S - SALES TRANSACTION'.'LIPS' [1023706] (sapdb_endpoint_data_record.c:730) Added logging to try and pinpoint the cause of the error. | 2090257 |
SAP Application (DB) | Full Load of SAP cluster tables without a MANDT column would fail with the following error: Decompress failed for Cluster table BLOB | 2056366 |
SAP Application (DB) | Primary Key columns of the SAP pool table would not be parsed correctly when they contained multibyte characters. | 2032276 |
SAP Application (DB) with an Oracle Backend | In rare scenarios, when using Replicate Log Reader to capture changes from wide Advanced Row compressed tables, and the task was configured to perform transformations, the task would sometimes fail. | 2157247 |
SAP Application (DB) with an Oracle Backend | When a backend database table of one of SAP tables could not be accessed (for instance, due to insufficient permissions), the task would stop unexpectedly during the Full Load process. | 2155526 |
SAP Application, SAP Application (DB), Task Manager | Tasks with a SAP Application (DB) or SAP Application source that were configured as Apply Changes only would start as Full Load and Apply Changes. | 2123964 |
SAP Application (DB) with Oracle backend | When a recoverable error occurred due to an Oracle connection error, changes would sometimes not appear on the target when the task resumed. | 2139031 |
SAP Application (DB) with an Oracle Backend | When using Replicate Log Reader to capture changes from wide Advanced Row compressed tables, and the task was configured to perform transformations, the task would sometimes fail with the following error (excerpt): The field 'MANDT' doesn't exist in the CDC record | 2110294 |
SAP Application (DB) | When a task was stopped and resumed (or after a recoverable error) and the first transaction resent to the target contained cluster table changes, some of the changes in that transaction would sometimes not appear on the target. | 2139031 |
SAP Application (DB) | In rare cases, pool table data columns would be replicated incorrectly. | 2128316 |
Oracle Source - Replicate Log Reader | In rare scenarios, Oracle encrypted columns would be replicated as NULL. Added logging to Oracle Column TDE processing to try and pinpoint the cause of the issue. |
2040894 |
Oracle Source - Replicate Log Reader | In rare situations, DELETE and INSERT operations would not be captured as a result of erroneous parsing of "SPLIT UNDO" operations. | 2086367 |
Oracle Source - Replicate Log Reader | In Oracle 12.1 only, when an OLTP compressed chained row was updated, the affected table would be suspended. | 2074822 |
Oracle Source - Replicate Log Reader | In rare situations, tasks using transformations would sometimes fail on the target due to an issue with the Oracle Deferred Constructor. | 2092514 |
Oracle Source - Replicate Log Reader | In a rarely encountered sequence of redo log events, the task would fail if a transformation was defined for a table with advanced compression. | 2103160 |
Oracle Source - Replicate Log Reader | In rare scenarios, several transactions with the same partial transaction ID would be captured as one transaction, resulting in missing UPDATEs on the target. | 2107559 |
Oracle Source - Replicate Log Reader | In very rare situations, redo log events would be captured in the wrong order, resulting in inconsistent data on the target. | 2135481 |
Oracle Source | DATE columns would be truncated during Full Load when the bindDateAsBinary internal parameter was set to false. | 2118265 |
Oracle Source | The task would sometimes crash during change capture from a STANDBY environment when the Primary Oracle environment used RAC. | 2066583 |
Security | When using Verbose logging, information about DDLs would sometimes contain user passwords. The problem was resolved by excluding such information from the log. | 2096218 |
Log Stream | When starting a replication task from timestamp, Replicate would sometimes fail to close the log stream file after searching for the start position. This would prevent the log stream task from continuing. | 2067415 |
Log Stream | When using the Log Stream component, the following error would sometimes be encountered: E: use_backend_local_time_in_ct_table_timestamp is true, but has_source_time_diff is false [1000100] (store_changes.c:1358) The issue was resolved by adding support for the backend commit timestamp when using Log Stream. | 2050262 |
SAP Extractor Source | When installing the SAP Extractor transport on a new SAP environment, a missing object error would be encountered. | N/A |
SAP Extractor Source | When reloading a Full Load only task, the first of the custom extractors would fail to run. | 2103568 |
SAP Extractor Source | Some strings in the target would be missing the last two characters of the original value. In addition, embedded spaces in string values would be stripped on the target. | 2107660 |
SAP Extractor Source | Certain numeric values on the target would be 1/100th of their value on the source. For example, the value of the 'NETPR' field in SAP would be 4823 while the value in Databricks would be 48.23. | 2107657 |
SAP Extractor Source | Due to a failure to clear the delta queue, the same changes would be returned every time the extractor delta process ran. | 2106924 |
SAP Extractor Source | In rare scenarios, locks that were created in the ABAP code would not be released properly. | N/A |
SAP Extractor Source | SAP DEC data type columns mapped to REAL8 data type columns would not retain high precision values. | 2121616 |
MySQL Source | When the SSL Mode option was set to Required in the endpoint settings, Replicate would attempt to establish an unsecured connection if a server certificate was not found. | 2075672 |
Amazon RDS for MySQL Source | When replicating from Amazon RDS for MySQL, the task log would contain the following info message numerous times: 00008500: 2021-01-23T01:21:25 [SOURCE_CAPTURE ]I: >>> Unsupported or comment DDL: '# Dummy event replacing event type 160 that slave cannot handle. ' (mysql_endpoint_capture.c:1703) With the fix, the message will only be shown once at info level. At trace level, the message will continue to be shown as the event occurs. | 2135640 |
Microsoft Azure Synapse Analytics Target | Unicode characters would not be replicated correctly into wide columns in Transactional Apply mode. The issue was fixed using an internal parameter. | 2098897 |
Microsoft Azure Synapse Analytics Target, Graphical User Interface | It was not possible to set a value in the port field in the Advanced tab. | 2141421 |
IBM DB2 for iSeries | The following redundant warning would sometimes be reported for tables that were not included in the replication task (excerpt): DROP/RENAME TABLE commands are not currently supported | 2047709 |
Aurora for PostgreSQL Source | When resuming a task, the following fatal error would sometimes occur due to redundant checks: The first begin LSN '00000CCF/F266E830' is higher of stream position LSN '00000CCF/F266D340'. Tables must be reloaded. | 2113054 |
PostgreSQL-based Sources | The WAL slot would constantly grow resulting in degraded performance. | 2117811 |
PostgreSQL-based Sources | Excessive memory consumption would sometimes be encountered during change processing. | 2114577 |
PostgreSQL-based Sources | In rare scenarios, when a large transaction and a small transaction are captured simultaneously, stopping and resuming the task after the small transaction was applied but before the second (large) transaction was processed, would result in missing changes. | 2117811 |
PostgreSQL-based Sources | In rare scenarios, when two transactions occurred while a task was stopped (one small and one large), when the task was resumed, the task would fail with the following error: The first event LSN '00000005/9AFA3D38' is higher of stream position LSN '0000000 | N/A |
PostgreSQL Source | Resuming a task from LSN would sometimes not work. | N/A |
PostgreSQL Target | When using the internal parameter psqlReadCommandsFromFile, tables or schemas with capital letters would not be replicated. | 1943391 |
ARC Source Endpoints | Starting a task with an ARC-based endpoint on Linux, would sometimes fail with a recoverable error as the process was unable to determine the correct start position in the CSV file. | 2112138 |
AIS Source | Restarting a task from timestamp would sometimes not be performed correctly with some OS platforms. | 2112138 |
Graphical User Interface | Tasks configured with Character Substitution could not be saved. | 2124045 |
Graphical User Interface | Restored the capability to connect to S3 via a proxy server to the endpoint settings. | 2128844 |
Teradata Target | In rare scenarios, the batch optimized apply operation would fail to delete the Replicate net changes table with the following error: NativeError: -2652 Message: [Teradata][ODBC Teradata Driver][Teradata Database](-2652)Operation not allowed | 2086486 |
Microsoft SQL Server Source | The task would sometimes fail when replicating tables with a large number of partitions as a non-sysadmin user. | 2112203 |
Microsoft SQL Server Source | The Microsoft SQL Server source endpoint would incorrectly report very high latency. | N/A |
Tasks | When stopping or starting a task, there would sometimes be a prolonged delay before the task status changed. | 2139808 |
Engine | Importing a task with a transformation would sometimes set an incorrect table status, causing the task to reload the table unnecessarily. | 2128345 |
Engine (CDC) | The following warning would flood the INFO logging, causing increased latency (excerpt): [INFRASTRUCTURE ]W: The transaction timestamp already exists in an earlier partition. | 2065627 |
Engine | When the Metadata only run option was enabled, the parallel load feature would replicate data as well. | 2168808 |
Kafka Target | When using the Confluent Schema Registry with ACL authorization, the following error would be encountered when running the task: error code 40301: 'User is denied operation ReadCompatibility on this server The issue was resolved using an internal parameter that bypasses the configs URL call (thereby eliminating the need for super-user permission). | N/A |
Kafka Target | Target latency would increase when the last operation captured from the source was a DDL. | N/A |
Kafka Target | When processing multiple messages simultaneously, storing the messages in memory while waiting for ACKs from the broker servers would sometimes result in excessive memory consumption on the Replicate Server machine. | N/A |
Microsoft Azure Synapse Analytics Target | Blob storage folder validation would fail with folders containing upper-case letters. | 2155808 |
Google Cloud BigQuery Target | Loading data into Google Cloud BigQuery would sometimes not complete successfully. | 2175799 |
Oracle Target | In rare scenarios, wide CLOB and CHAR values with non-ASCII characters would not be replicated correctly. | 2178291 |
Microsoft Azure Event Hubs Target | The task would sometimes "detach" from the target with a recoverable error, and take a long time to reconnect. | 2155574 |
AIS Source - IMS | The IMS PSB name could not be used as the User Name in the Replicate header field. | RECOB-2325 |
Microsoft Azure MySQL Source | The following redundant message would sometimes be returned when replicating columns of type JSON: Col <name> will be skipped since JSON data type is not supported | 2161492 |
IBM DB2 for z/OS Source | When a failure occurred with row decompression, Replicate would retry the task numerous times, resulting in a large backlog of events. | 2163452 |
Databricks Delta Target | When replicating large tables, Full Load would sometimes fail with a query timeout error. | 2117837 |
These release notes provide details on the resolved issues and/or enhancements included in this patch release. All patch releases are cumulative, meaning that they include the fixes and enhancements provided in previous patch releases.
Jira issue: RECOB-4358 | Description: Fixes critical vulnerabilities (CVE-2021-45105, CVE-2021-45046, CVE-2021-44228) that may allow an attacker to perform remote code execution by exploiting the insecure JNDI lookups feature exposed by the logging library log4j. The fix replaces the vulnerable log4j library with version 2.16. |
Salesforce case: NA | |
Type: Issue | |
Component/Process: Security | |
Jira issue: RECOB-4421 | Description: Importing the SAP Extractor transport into newer versions of S4HANA systems would fail. |
Salesforce case: 16640 | |
Type: Issue | |
Component/Process: SAP Extractor Source | |
Jira issue: RECOB-4382 | Description: Replicating from MySQL 8 with the Allow unlimited LOB size task setting enabled, would corrupt the data types and suspend the tables after CDC. |
Salesforce case: 14781 | |
Type: Issue | |
Component/Process: MySQL Source | |
Jira issue: RECOB-4194 | Description: Timeouts would sometimes occur when reading from SQL Server system tables. |
Salesforce case: 8272 | |
Type: Issue | |
Component/Process: Microsoft SQL Server Source | |
Jira issue: RECOB-4237 | Description: In a MySQL homogenous task, if a source table contained a Unique Index with nullable columns but did not have a Primary Key, the table would be suspended. |
Salesforce case: 9224 | |
Type: Issue | |
Component/Process: MySQL Homogenous |
Jira issue: RECOB-4328 | Description: During CDC, negative Oracle Number values with a precision greater than 38 would be replicated with trailing zeros. |
Salesforce case: 16038 | |
Type: Issue | |
Component/Process: Oracle Source | |
Jira issue: RECOB-4311 | Description: Tasks would abend when the target table contained a column missing from the metadata. |
Salesforce case: 15612 | |
Type: Issue | |
Component/Process: Google Cloud BigQuery Target | |
Jira issue: RECOB-4284 | Description: After resuming a task, the Header timestamp would be incorrectly inserted as '1970-01-01 00:00:00.000'. |
Salesforce case: 14022 | |
Type: Issue | |
Component/Process: PostgreSQL Source | |
Jira issue: RECOB-4295 | Description: An authorization error would occur when trying to run the SAP Extractor endpoint with transport version 900260. |
Salesforce case: 12949 | |
Type: Issue | |
Component/Process: SAP Extractor Source | |
Jira issue: RECOB-4127 | Description: Replicate would fail to parse an event and issue the following warning: "Number of fields retrieved (XX) does not match metadata (YY)". |
Salesforce case: N/A | |
Type: Issue | |
Component/Process: AIS Source | |
Jira issue: RECOB-4249 | Description: When rare UPDATE events occurred, the transaction ID and stream position would not be replicated correctly. |
Salesforce case: 5818 | |
Type: Issue | |
Component/Process: IBM DB2 for LUW Source | |
Jira issue: RECOB-4224 | Description: When working in Batch Optimized Apply mode with the Apply batched changes to multiple tables concurrently option, an error would occur when uploading the attunity_attunity_test.txt file (used to check the connection). |
Salesforce case: 00010938 | |
Type: Issue | |
Component/Process: Amazon Redshift | |
Jira issue: RECOB-4063 | Description: Added extra logging to gather information about the Oracle instance configuration. |
Salesforce case: N/A | |
Type: Enhancement | |
Component/Process: Oracle Source | |
Jira issue: RECOB-4214 | Description: In rare scenarios, different tasks would capture the same events with different stream positions and apply them to the Change Table. |
Salesforce case: N/A | |
Type: Issue | |
Component/Process: SAP Application (DB) | |
Jira issue: RECOB-4187 | Description: Defining the $AR_H_USER header variable (which is not supported with SAP HANA) in a transformation resulted in the table being suspended. |
Salesforce case: 9982 | |
Type: Issue | |
Component/Process: SAP HANA Source | |
Jira issue: RECOB-4111 | Description: The SQL Server safeguard commit would time out. |
Salesforce case: 2260579 | |
Type: Issue | |
Component/Process: Microsoft SQL Server Source |
Jira issue: RECOB-4058 | Description: When exceeding the maximum buffer size in Salesforce, a buffer exceeded error would occur (as expected), but each retry attempt would be considered as if it was the first. This would result in an endless loop of reconnection attempts, all failing with the same error. |
Salesforce case: 4582 | |
Type: Issue | |
Component/Process: Salesforce Source | |
Jira issue: RECOB-4131 | Description: Change capture would fail when a structure with a name exceeding 16 characters was added to the table being captured. |
Salesforce case: 4113 | |
Type: Issue | |
Component/Process: SAP Application (DB) Source | |
Jira issue: RECOB-4135 | Description: When using an Oracle backend database, key date columns would be wrongly replicated when capturing partial DELETE operations. |
Salesforce case: 4702 | |
Type: Issue | |
Component/Process: SAP Application Source | |
Jira issue: RECOB-4076 | Description: Parsing TIMS values with special or non-valid characters would cause the Replicate task to stop with a conversion error. |
Salesforce case: 7139 | |
Type: Issue | |
Component/Process: SAP Extractor Source | |
Jira issue: RECOB-4078 | Description: In rare scenarios, when working in Full Load only replication mode, several copies of the same table would be loaded. |
Salesforce case: 8572 | |
Type: Issue | |
Component/Process: SAP Extractor Source | |
Jira issue: RECOB-4080 | Description: Currency columns without decimal digits would be converted incorrectly, causing the task to stop with an error. |
Salesforce case: 9249 | |
Type: Issue | |
Component/Process: SAP Extractor Source | |
Jira issue: RECOB-4083 | Description: An UPDATE operation on a long data type value would sometimes cause the task to stop abnormally or suspend the associated table. |
Salesforce case: 4707 | |
Type: Issue | |
Component/Process: Oracle Source | |
Jira issue: RECOB-4049 | Description: In Batch Optimized Mode, LOB lookup would not find a row when a DELETE operation and an INSERT operation were combined into a single UPDATE operation. |
Salesforce case: 10044 | |
Type: Issue | |
Component/Process: Batch Optimized Apply |
Jira issue: RECOB-4002 | Description: When setting the Database timezone parameter in the endpoint's Advanced tab, timestamp values around the time of a DST switch would not be migrated correctly. |
Salesforce case: 2230641 | |
Type: Issue | |
Component/Process: MySQL Source | |
Jira issue: RECOB-3872 | Description: If Replicate services were stopped before upgrading to version 21.5, the upgrade process would exit with an error. |
Salesforce case: 2230521 | |
Type: Issue | |
Component/Process: Replicate Installation | |
Jira issue: RECOB-3935 | Description: Added an internal parameter to prevent connections from being reset during Full Load. When the parameter is set to “true”, Replicate will write Salesforce batches to intermediate files before sending them to the target endpoint. |
Salesforce case: 2255408 | |
Type: Feature | |
Component/Process: Salesforce Source | |
Jira issue: RECOB-3945 | Description: When at least one of the captured tables contained CLOB columns, ambiguous double quotes would be added to some of the data. This would result in character columns being truncated if they were too long. |
Salesforce case: 2235560 | |
Type: Issue | |
Component/Process: Snowflake Target | |
Jira issue: RECOB-3874 | Description: A warning about duplicate tables would be shown even though the parameter to suppress such warnings was set. |
Salesforce case: 2264084 | |
Type: Issue | |
Component/Process: Task Manager |
Jira issue: RECOB-3930 | Description: The SAP Extractor endpoint would fail to process the SAP INT1 data type with values over 127. |
Salesforce case: 2257690 |
Type: Issue | |
Component/Process: SAP Extractor Source | |
Jira issue: RECOB-3883 | Description: When a CSV file could not be loaded, events would go missing instead of the file being reattached. |
Salesforce case: 2255503 | |
Type: Issue | |
Component/Process: Google Cloud BigQuery Target | |
Jira issue: RECOB-3886 | Description: Capture of changes to tables with long names was enabled using an internal property. |
Salesforce case: 00007622 | |
Type: Enhancement | |
Component/Process: Oracle Source | |
Jira issue: RECOB-3881 | Description: Improved the DO4GRANT JCL. |
Salesforce case: | |
Type: Enhancement | |
Component/Process: IBM DB2 for z/OS Source | |
Jira issue: RECOB-3823 | Description: Improved Replicate SAP Extractors ABAP filtering. |
Salesforce case: 2169999 | |
Type: Enhancement | |
Component/Process: SAP Extractor Source | |
Jira issue: RECOB-3857 | Description: Replication of SAP Catalog tables DD02L and DD03L would fail. |
Salesforce case: 2267543 | |
Type: Issue | |
Component/Process: SAP Application (DB) Source | |
Jira issue: RECOB-3861 | Description: The DROP command would fail for inactive tables. |
Salesforce case: 2264657 | |
Type: Issue | |
Component/Process: SAP Extractors Source | |
Jira issue: RECOB-3833 | Description: UTF16 character conversions would not work for codes ending in 3F (e.g. 2A3F, 1E3F, etc.). |
Salesforce case: 2241338 | |
Type: Issue | |
Component/Process: Character Substitution | |
Jira issue: RECOB-3537 | Description: The header__timestamp would sometimes contain an invalid date (1969-12-31 23:59:59.00000), resulting in failed inserts on the target. Logging was added to assist with troubleshooting in the event that the fix does not resolve the issue. |
Salesforce case: 2199292 | |
Type: Issue | |
Component/Process: Microsoft SQL Server Source |
Jira issue: RECOB-3797 | Description: Added the option to create multiple tables with the same name on target endpoints that do not support schemas. The option is enabled using a common setting property. |
Salesforce case: 2264084 | |
Type: Issue | |
Component/Process: Metadata Manager | |
Jira issue: RECOB-3777 | Description: After upgrading the database to version 19c, tasks defined with a transformation would crash when capturing an UPDATE on an advanced compressed table. |
Salesforce case: 2239919 | |
Type: Issue | |
Component/Process: Oracle Source - Replicate Log Reader | |
Jira issue: RECOB-3781 | Description: Added logging to assist with troubleshooting CDC parsing errors. |
Salesforce case: 2251636 | |
Type: Issue | |
Component/Process: MySQL Source | |
Jira issue: RECOB-3414 | Description: An online status report was added for alerting about database lock issues. |
Salesforce case: 2206746 | |
Type: Enhancement | |
Component/Process: IBM DB2 for z/OS Source |
Jira issue: RECOB-3786 | Description: The timestamp of a captured DDL would be 1970-01-01 00:00:00. |
Salesforce case: 2260088 | |
Type: Issue | |
Component/Process: Amazon RDS for PostgreSQL Source |
Jira issue: RECOB-3744 | Description: Added an internal property for exceeding the 10MB message size limitation when loading AVRO-formatted messages into Kafka. |
Salesforce case: 2240034 | |
Type: Enhancement | |
Component/Process: Kafka | |
Jira issue: RECOB-3679 | Description: In a Hybrid Data Delivery task, the change_sequence in the CT table would be different from the task_audit. |
Salesforce case: N/A | |
Type: Issue | |
Component/Process: Server | |
Jira issue: RECOB-3678 | Description: Replicate would fail to capture some of the changes after a Hybrid Data Delivery task recovered from a crash. |
Salesforce case: N/A | |
Type: Issue | |
Component/Process: Engine | |
Jira issue: RECOB-3738 | Description: When working in Transactional Apply mode or when switching to a one-by-one process, Oracle SQL statements exceeding 32K would wrongly cause ORA-01460 to be interpreted as a recoverable error instead of a data error. |
Salesforce case: N/A | |
Type: Issue | |
Component/Process: Oracle Target | |
Jira issue: RECOB-3645 | Description: When replicating from an Oracle Standby database via a Log Stream staging task, the internal parameter for determining how long to wait for changes to be applied to the standby database would not work properly. This would result in missing records on the target. |
Salesforce case: 2232270 | |
Type: Issue | |
Component/Process: Oracle Source, Log Stream |
Jira issue: RECOB-3056 | Description: Added support for providing additional FTS (File Transfer Service) connection string properties. |
Salesforce case: 2115294 | |
Type: Enhancement | |
Component/Process: File Channel | |
Jira issue: RECOB-3651 | Description: After a recoverable error, the task would stop with a fatal error when reattaching the target component. |
Salesforce case: 2227947 | |
Type: Issue | |
Component/Process: Microsoft Azure Synapse Analytics Target | |
Jira issue: RECOB-3629 | Description: UPDATEs to the target would fail when replicating a table with advanced compression. |
Salesforce case: 2243866 | |
Type: Issue | |
Component/Process: Replicate Log Reader | |
Jira issue: RECOB-3641 | Description: When Use direct path full load was enabled in the endpoint settings, columns of DATE type would not be loaded to the target. |
Salesforce case: 2245555 | |
Type: Issue | |
Component/Process: Oracle Target | |
Jira issue: RECOB-3273 | Description: When capturing changes from wide tables (tables with a large number of columns), the data of all target columns except for the Primary Key /Unique Index segment would be NULL. |
Salesforce case: N/A | |
Type: Issue | |
Component/Process: Amazon RDS for SQL Server Source |
Jira issue: RECOB-3360 | Description: When working with the Qlik Cloud Landing profile and a Snowflake target endpoint, Replicate will create an audit event report every 60 seconds about the last change sequence applied to the Changes Tables, as well as a list of tables that were changed. Hybrid Data Delivery (HDD) then applies those changes to the storage app. |
Salesforce case: N/A | |
Type: Enhancement | |
Component/Process: Snowflake target + Hybrid Data Delivery (HDD) |
Jira issue: RECOB-3624 | Description: The AR_H_USER header column was missing from the Transformation dialog's header list. |
Salesforce case: N/A | |
Type: Issue | |
Component/Process: SAP Application (DB), SAP Application | |
Jira issue: RECOB-3491 | Description: The max row id selection performance has been improved. |
Salesforce case: 2207449 | |
Type: Enhancement | |
Component/Process: SAP HANA Trigger-Based Source |
Jira issue: RECOB-3588, RECOB-3590, RECOB-3592, RECOB-3596, RECOB-3597, RECOB-3594, RECOB-3591, RECOB-3595 | Description: A memory leak would occur when using the listed source endpoints. |
Salesforce case: 2237164 | |
Type: Issue | |
Component/Process: Oracle Source, DB2Z Source, DB2 LUW Source, Microsoft SQL Server Source, Sybase ASE Source, File source, IBM for DB2 iSeries Source, Informix Source | |
Jira issue: RECOB-3532 | Description: When updating a Pool table or when changing a column, a default Before Image value would be replicated as an After Image value. |
Salesforce case: N/A | |
Type: Issue | |
Component/Process: SAP Application (DB) | |
Jira issue: RECOB-3547 | Description: When parallel transactions were executed with a different CORRELATION ID, the AR_H_USER header value for some of the data events would sometimes contain the CORRELATION ID of one of the parallel transactions. |
Salesforce case: 2240572 | |
Type: Issue | |
Component/Process: IBM DB2 for z/OS Source | |
Jira issue: RECOB-3525 | Description: When capturing GAP events, the following error would occur: Error "Operation 'send BEGIN' and 'send COMMIT' exception message: hexBinary needs to be even-length" |
Salesforce case: 2237092 | |
Type: Issue | |
Component/Process: Salesforce Source |
Jira issue: RECOB-3541 | Description: The warning message "Partial INSERT event is ignored" would be reported when a table contained a LONG column and an UPDATE operation was performed. The warning was reported only when ALTER TABLE was used to add columns to the table. In such a case, the added columns would not be updated. |
Salesforce case: 2229387 |
Type: Issue | |
Component/Process: Oracle Source - Replicate Log Reader | |
Jira issue: RECOB-3523 | Description: Added support for the AWS GovCloud (US-East) and AWS GovCloud (US-West) regions. |
Salesforce case: 2233793 | |
Type: Enhancement | |
Component/Process: Databricks on AWS | |
Jira issue: RECOB-3246 | Description: When Full Load failed with a recoverable error, the Salesforce bulk job would remain open. |
Salesforce case: 2179781 | |
Type: Issue | |
Component/Process: Salesforce Source | |
Jira issue: RECOB-3438 | Description: Replicate will now identify itself via the User-Agent property when connecting to all Databricks endpoints. |
Salesforce case: N/A | |
Type: Enhancement | |
Component/Process: Databricks endpoints | |
Jira issue: RECOB-3305 | Description: When using old SAP versions, SAP Extractor would fail with the following error: "The field 'SRSC_C_PROGCLASS_SEGM' is unknown". |
Salesforce case: 2203763 | |
Type: Issue | |
Component/Process: SAP Extractor | |
Jira issue: RECOB-3371 | Description: After removing a table with a DDL change from a task, stopping the task, adding the table back to the task, and then resuming the task, the table would still not be replicated. |
Salesforce case: 2187829 | |
Type: Issue | |
Component/Process: Java SDK Based Endpoints | |
Jira issue: RECOB-3450 | Description: The Full Load passthrough filter would create an incorrect statement when the client column was nested in the SAP table. |
Salesforce case: 2233328 | |
Type: Issue | |
Component/Process: SAP Application (DB) Source |
Jira issue: RECOB-3372 | Description: The exposed size of character columns would be three times larger than the actual size. |
Salesforce case: 2184987 | |
Type: Issue | |
Component/Process: SAP Application (DB) Source |
Jira issue: RECOB-3339 | Description: After capturing a DDL, Apply Changes or Store Changes tasks would fail to capture any more changes when the number of pages in the database exceeded 2147483647. |
Salesforce case: 2228237 | |
Type: Issue | |
Component/Process: Sybase ASE Source | |
Jira issue: RECOB-2873 | Description: After a table's metadata had been altered, Replicate would make redundant calls to retrieve other tables' metadata, resulting in degraded performance and latency. |
Salesforce case: 2196532 | |
Type: Issue | |
Component/Process: Microsoft SQL Server Source | |
Jira issue: RECOB-3058 | Description: When the endpoint's Replicate has file-level access to the backup log files option was enabled, Replicate would fail to decompress the transaction log. |
Salesforce case: 2149163 | |
Type: Issue | |
Component/Process: Microsoft SQL Server Source | |
Jira issue: RECOB-3269 | Description: When capturing changes from wide tables (tables with a large number of columns), the data of all target columns except for the Primary Key /Unique Index segment would be NULL. |
Salesforce case: N/A | |
Type: Issue | |
Component/Process: Microsoft SQL Server Source |
Jira issue: RECOB-3427 | Description: The Full Load passthrough filter would create an incorrect statement when the client column was nested in the SAP table. |
Salesforce case: 2233328 | |
Type: Issue | |
Component/Process: SAP Application (DB) |
Jira issue: RECOB-3410 | Description: An OCI timeout would occur when execution of a statement exceeded 10 seconds. |
Salesforce case: 2227982 | |
Type: Issue | |
Component/Process: Oracle Target | |
Jira issue: RECOB-3385 | Description: DELETE and UPDATE operations would not be replicated to the target when the WHERE clause contained a UNISTR function and the internal parameter useUnistrForWideChars was set. |
Salesforce case: 2231710 | |
Type: Issue | |
Component/Process: Oracle Target | |
Jira issue: RECOB-3182 | Description: "Unable to stop HTTP transport" messages would be reported every few hours when the logging was set to "Error". As this is an EMP (Enterprise Messaging Platform) issue and Replicate successfully reconnects immediately after the error, the message will now only be reported when the logging is set to Trace/Verbose. |
Salesforce case: 2206758 | |
Type: Enhancement |
Component/Process: Salesforce Source |
Jira issue: RECOB-3237 | Description: Excessive memory consumption would sometimes be encountered in the case of repeated connection attempts. |
Salesforce case: 2180466 | |
Type: Issue | |
Component/Process: Replicate Engine | |
Jira issue: RECOB-3125 | Description: Change capture would sometimes result in a NULL value being replicated for LOB columns instead of the actual data. |
Salesforce case: 2209255 | |
Type: Issue | |
Component/Process: Amazon RDS for SQL Server Source | |
Jira issue: RECOB-3227 | Description: The SAP extractor delta job would fail when executed by Replicate. |
Salesforce case: 2213864 | |
Type: Issue | |
Component/Process: SAP Extractor Source | |
Jira issue: RECOB-3230 | Description: Added support for the $AR_H_PROGRAM_NAME custom header column. The column contains the name of the iSeries program that changed the table data. |
Salesforce case: N/A | |
Type: Enhancement | |
Component/Process: IBM for DB2 iSeries Source | |
Jira issue: RECOB-3198 | Description: Added support for the AWS GovCloud (US-East) region. |
Salesforce case: 2219130 | |
Type: Enhancement | |
Component/Process: Amazon Redshift and Amazon S3 Targets | |
Jira issue: RECOB-3082 | Description: If the IBM DB2 for iSeries endpoint was configured to use a UDTF and the journal events included CREATE TABLE (CT) entries, some changes would not be captured. |
Salesforce case: 2149866 | |
Type: Issue | |
Component/Process: IBM DB2 for iSeries Source | |
Jira issue: RECOB-3224 | Description: Capturing changes from tables with the BIGINT data type would fail. |
Salesforce case: 2207371 | |
Type: Issue | |
Component/Process: IBM Informix Source | |
Jira issue: RECOB-3156 | Description: The following Redshift error would be wrongly identified by Replicate as a data error, which would cause the task to switch to transactional apply mode: "error occurred while trying to execute a query: [SQLState 00000] SSL SYSCALL error: Connection timed out" |
Salesforce case: 02217068 | |
Type: Issue | |
Component/Process: Amazon Redshift Target | |
Jira issue: RECOB-3191 | Description: Missing Active Directory read permissions on the Computers container would sometimes cause the installation to fail. The issue was resolved with a CLI parameter that prevents PrincipalContext from attempting to access the Computers container. |
Salesforce case: N/A | |
Type: Enhancement | |
Component/Process: Replicate Installation | |
Jira issue: RECOB-3210 | Description: When replicating to/from unsupported PostgreSQL versions, a warning would be returned instead of an error. |
Salesforce case: N/A | |
Type: Issue | |
Component/Process: PostgreSQL Source/Target |
Jira issue: RECOB-3006 | Description: Replicate Server would crash after upgrade due to a compatibility issue between the Scheduler and the automatic log cleanup option. |
Salesforce case: N/A | |
Type: Issue | |
Component/Process: Upgrade | |
Jira issue: RECOB-3183 | Description: Full Load would sometimes fail with a timeout error. |
Salesforce case: 2211434 | |
Type: Issue | |
Component/Process: Salesforce Source | |
Jira issue: RECOB-3181 | Description: After upgrade, the following error would sometimes be encountered: Buffering capacity 1048576 exceeded |
Salesforce case: 2217080 | |
Type: Issue | |
Component/Process: Salesforce Source | |
Jira issue: RECOB-3017 | Description: Added support for Databricks on Google Cloud as a target endpoint. |
Salesforce case: N/A | |
Type: Enhancement | |
Component/Process: Databricks on Google Cloud endpoint | |
Jira issue: RECOB-3038 | Description: Extractors with names starting with a slash (e.g. /CRMBW/ACT_CAT_TXT) could not be activated. |
Salesforce case: 2203757 | |
Type: Issue | |
Component/Process: SAP Extractor Source | |
Jira issue: RECOB-3151 | Description: Added an option to select 15 Minutes from the Partition every drop-down list. |
Salesforce case: N/A | |
Type: Enhancement | |
Component/Process: UI - Change Data Partitioning | |
Jira issue: RECOB-2737 | Description: The password replace functionality would not work as expected. |
Salesforce case: 2190539 | |
Type: Issue | |
Component/Process: Oracle Source, Microsoft SQL Server Source | |
Jira issue: RECOB-3084 | Description: Tasks would fail to work with a table containing computed columns. |
Salesforce case: 2204554 | |
Type: Issue | |
Component/Process: Microsoft SQL Server Source | |
Jira issue: RECOB-3048 | Description: When the internal parameter to ignore case was set in Snowflake, Replicate would not be able to retrieve the Primary Key information. This resulted in all DML operations being applied one-by-one. |
Salesforce case: 2201519 | |
Type: Issue | |
Component/Process: Snowflake on Azure Target | |
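Note on RECOB-3048: Primary Key metadata is what allows changes to be batched into a single statement; without it, every DML operation must be applied individually. A rough, purely illustrative Python sketch of that fallback (both branches are hypothetical stand-ins for the real apply logic):

# Illustrative sketch: why missing Primary Key metadata degrades performance.
def apply_changes(changes, primary_key_columns):
    if primary_key_columns:
        # Batch-optimized apply: one MERGE keyed on the PK columns.
        print(f"MERGE {len(changes)} changes keyed on {primary_key_columns}")
    else:
        # No PK information: each DML operation is applied one-by-one.
        for change in changes:
            print(f"Applying single statement: {change}")

apply_changes(["UPDATE t SET c=1", "DELETE FROM t"], primary_key_columns=[])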
Jira issue: RECOB-2631 | Description: Changing a transformation on a SAP table and resuming the task would impact task performance. |
Salesforce case: 2146786 | |
Type: Issue | |
Component/Process: SAP Application (DB) Source | |
Jira issue: RECOB-3080 | Description: Cluster events would not be generated for a cluster object when the next change in the same transaction was performed on an object in another cluster. |
Salesforce case: 2206730 | |
Type: Issue | |
Component/Process: SAP Application (DB) Source | |
Jira issue: RECOB-2978 | Description: In rare scenarios, when encountering a non-standard UPDATE operation, the task would stop abnormally (without any errors). |
Salesforce case: 2185703 | |
Type: Issue | |
Component/Process: Microsoft SQL Server Source | |
Jira issue: RECOB-2735 | Description: Performance issues would be encountered when the backend database tables had a large number of version definitions, which would be inserted into the dynamic metadata. After installing this patch release, it is strongly recommended to resume SAP Application (DB) Source tasks using the "Tables are already loaded. Start processing changes from" Run option, which clears the dynamic metadata. |
Salesforce case: 2193906 | |
Type: Issue | |
Component/Process: SAP Application (DB) Source | |
Jira issue: RECOB-2980 | Description: When resuming a task from the "Source change position", the task would start from the subsequent interval instead of from the specified interval. |
Salesforce case: 2160944 | |
Type: Issue | |
Component/Process: SAP HANA Source - Trigger-based CDC | |
Jira issue: RECOB-3029 | Description: Partial DELETE operations would be performed on cluster tables without converting primary key column values from the backend database data type to the corresponding SAP data type. When using SAP Application with IBM DB2 for iSeries, all cluster table primary key columns are defined as UTF-16. This would result in the DELETE operation not being performed on the target. |
Salesforce case: 2131345 | |
Type: Issue | |
Component/Process: SAP Application Source | |
Jira issue: RECOB-2856 | Description: Apply Log debug messages were changed from trace to verbose severity. |
Salesforce case: N/A | |
Type: Issue | |
Component/Process: Google Cloud BigQuery Target | |
Jira issue: RECOB-2911 | Description: Queries for retrieving table metadata would sometimes take a long time when the schema contained a large number of tables. The issue was resolved by optimizing the query. The modified query can be enabled using an internal parameter. |
Salesforce case: 2199568 | |
Type: Enhancement | |
Component/Process: Microsoft SQL Server Source | |
Jira issue: RECOB-2826 | Description: Replicate would create future partitions (header__timestamp) after a server restart. |
Salesforce case: 2188762 | |
Type: Issue | |
Component/Process: Microsoft SQL Server Source | |
Jira issue: RECOB-2958 | Description: Uninstalling R4SAP would not remove all installed SAP objects. |
Salesforce case: 2203870 | |
Type: Issue | |
Component/Process: SAP Application (DB) Source, SAP Application Source | |
Jira issue: RECOB-2881 | Description: Changes would not be captured when the event page number was bigger than 2147483647. |
Salesforce case: 2200644 | |
Type: Issue | |
Component/Process: Sybase ASE Source | |
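Note on RECOB-2881: 2147483647 is 2^31 - 1, the largest value a signed 32-bit integer can hold, suggesting the page number wrapped around when stored in a 32-bit field (the same limit appears in RECOB-2744 below). A short Python demonstration of the wraparound:

import struct

page_number = 2147483648                    # one past the signed 32-bit maximum
packed = struct.pack("<q", page_number)     # store as 64-bit little-endian
# Reinterpreting only the low 32 bits as a signed int shows the wraparound:
wrapped, = struct.unpack("<i", packed[:4])
print(wrapped)                              # -2147483648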
Jira issue: RECOB-2909 | Description: In very rare situations, a "Failed to build 'where' statement" error would be encountered when reading the redo logs. |
Salesforce case: 2189154 | |
Type: Issue | |
Component/Process: Oracle Source - Replicate Log Reader | |
Jira issue: RECOB-2654 | Description: Sorter warning messages would flood the task log, for example (excerpt): "The second record ID is 1971294 at stream position '001798bb:0010b4a0:0074'. The first record will be discarded (sorter_transaction.c:1365)". This is a test patch that extends the verbose logging to assist in troubleshooting the issue. |
Salesforce case: 2152705 | |
Type: Issue | |
Component/Process: Microsoft SQL Server Source | |
Jira issue: RECOB-2801 | Description: When encountering a connection issue on startup, the task would fail with a fatal error, instead of attempting to recover. |
Salesforce case: 2164113 | |
Type: Issue | |
Component/Process: IBM DB2 for z/OS Source | |
Jira issue: RECOB-2744 | Description: Apply Changes or Store Changes tasks would fail to start (either from the beginning or from a timestamp) when the number of pages in the database exceeded 2147483647. |
Salesforce case: 2192758 | |
Type: Issue | |
Component/Process: Sybase ASE Source | |
Jira issue: RECOB-2832 | Description: The task would fail when trying to resume from a timestamp that preceded the first event in the log. Now, when a timestamp that precedes the first event in the log is specified, the task will resume from the first event in the log. |
Salesforce case: 2196747 | |
Type: Issue | |
Component/Process: Sybase ASE Source | |
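Note on RECOB-2832: in other words, the requested resume position is now clamped to the earliest event available in the log instead of failing the task. A one-function Python sketch of the new behavior (names are illustrative):

# Illustrative sketch of the corrected resume logic: a requested timestamp
# earlier than the log's first event is clamped to that first event.
def effective_resume_position(requested_ts, first_event_ts):
    return max(requested_ts, first_event_ts)

print(effective_resume_position("2021-01-01 00:00", "2021-03-15 09:30"))
# -> "2021-03-15 09:30" (resumes from the first event instead of failing)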
Jira issue: RECOB-2322 | Description: When a connection error occurred and Replicate recovered the task automatically, the reported number of records replicated during Full Load would sometimes differ from the actual number. |
Salesforce case: 2148578 | |
Type: Issue | |
Component/Process: Google Cloud BigQuery Target | |
Jira issue: RECOB-2672 | Description: The task would fail if a column of type ACCP, DATS, or TIMS contained an invalid value. Now, instead of failing the task, Replicate will replace the invalid value with a default placeholder: 1970-01-01 for DATS and ACCP, and 00:00:00 for TIMS. |
Salesforce case: 2188728 | |
Type: Issue | |
Component/Process: SAP Extractor Source | |
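Note on RECOB-2672: ACCP (YYYYMM), DATS (YYYYMMDD), and TIMS (HHMMSS) are character-based SAP date/time types, so an invalid value cannot be converted. A hedged Python sketch of the substitution described above; the helper and format table are assumptions for illustration, not Replicate's implementation:

from datetime import datetime

# Illustrative sketch of the placeholder substitution described above.
FORMATS = {"DATS": "%Y%m%d", "ACCP": "%Y%m", "TIMS": "%H%M%S"}
DEFAULTS = {"DATS": "1970-01-01", "ACCP": "1970-01-01", "TIMS": "00:00:00"}

def value_or_placeholder(value: str, sap_type: str) -> str:
    """Keep a valid SAP value; substitute the default placeholder otherwise."""
    try:
        datetime.strptime(value, FORMATS[sap_type])  # validation only
        return value
    except ValueError:
        return DEFAULTS[sap_type]

print(value_or_placeholder("209913", "ACCP"))  # invalid month -> "1970-01-01"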
Jira issue: RECOB-2701 | Description: All Primary Key columns in Pool tables would be exposed by the endpoint as nullable, resulting in failure to create the Primary Key on some targets. |
Salesforce case: 2190057 | |
Type: Issue | |
Component/Process: SAP Application (DB) Source | |
Jira issue: RECOB-2767 | Description: When Unique Index columns containing NULL values were located near the end of the table, changes would sometimes not be captured. |
Salesforce case: 2189273 | |
Type: Issue | |
Component/Process: Oracle Source - Replicate Log Reader | |
Jira issue: RECOB-2291 | Description: When capturing a SHRINK SPACE operation on Advanced Compressed tables, a DELETE operation that should have been ignored would be sent to the target. |
Salesforce case: 2158576 | |
Type: Issue | |
Component/Process: Oracle Source - Replicate Log Reader | |
Jira issue: RECOB-2392 |
Description: In rare scenarios, when a bulk delete operation was performed on a source table, incorrect data might have been replicated to the target. |
Salesforce case: 2145519 | |
Type: Issue | |
Component/Process: Oracle Source - Replicate Log Reader | |
Jira issue: RECOB-2427 | Description: In rare scenarios, the task would crash when capturing changes from an Advanced Compression table that contained a LONG_RAW column. |
Salesforce case: 2171158 | |
Type: Issue | |
Component/Process: Oracle Source - Replicate Log Reader | |
Jira issue: RECOB-2639 | Description: In rare scenarios, UPDATE operations would sometimes not be captured after Oracle Split events. |
Salesforce case: 2164535 | |
Type: Issue | |
Component/Process: Oracle Source - Replicate Log Reader | |
Jira issue: RECOB-2284 | Description: When capturing an UPDATE operation on an HCC COLUMN WISE table row, Replicate Log Reader would interpret it as an INSERT, resulting in the task stopping unexpectedly. |
Salesforce case: 2156140 | |
Type: Issue | |
Component/Process: Oracle Source - Replicate Log Reader | |
Jira issue: RECOB-2576 | Description: In rare scenarios (involving a transformation being defined for the table), capturing UPDATE operations on wide Advanced Compression tables would cause the task to stop unexpectedly. |
Salesforce case: 2180071 | |
Type: Issue | |
Component/Process: Oracle Source - Replicate Log Reader | |
Jira issue: RECOB-2669 | Description: In rare scenarios (involving a transformation being defined for the table), capturing UPDATE operations on an Advanced Compressed record would cause the task to stop unexpectedly. |
Salesforce case: 2187973 | |
Type: Issue | |
Component/Process: Oracle Source - Replicate Log Reader | |
Jira issue: RECOB-2673 | Description: Replicate would fail to capture changes when the journal library and journal receiver library names contained special characters. The issue was resolved using an internal property. |
Salesforce case: 2185127 | |
Type: Issue | |
Component/Process: IBM DB2 for iSeries Source | |
Jira issue: RECOB-2688 | Description: When failover occurred during Full Load, the table would be suspended. Now, Replicate will retry uploading the table using the newly updated NameNode. |
Salesforce case: 2125169 | |
Type: Issue | |
Component/Process: Hadoop Target | |
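Note on RECOB-2688: rather than suspending the table when the active NameNode changes mid-load, the loader now re-resolves the active node and retries. A rough Python sketch under assumed helper names (not Replicate's actual code):

# Illustrative sketch: retry a Full Load upload after an HDFS failover by
# re-resolving the active NameNode instead of suspending the table.
def upload_with_failover(upload, resolve_active_namenode, retries=1):
    namenode = resolve_active_namenode()
    for attempt in range(retries + 1):
        try:
            return upload(namenode)
        except ConnectionError:
            if attempt == retries:
                raise                                 # out of retries: surface the error
            namenode = resolve_active_namenode()      # pick up the new active node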
Jira issue: RECOB-2711 | Description: When a row's trailing column(s) were changed to NULL, the NULL value would sometimes not be replicated to the target. |
Salesforce case: 2189593 | |
Type: Issue | |
Component/Process: Oracle Source - Replicate Log Reader | |
Jira issue: RECOB-2805 | Description: Added the Qlik Cloud Landing replication profile for use with Qlik Cloud Hybrid Data Delivery. |
Salesforce case: N/A | |
Type: Enhancement | |
Component/Process: UI - Adding tasks | |
About Qlik
Qlik’s vision is a data-literate world, where everyone can use data and analytics to improve decision-making and solve their most challenging problems. A private SaaS company, Qlik offers an Active Intelligence platform, delivering end-to-end, real-time data integration and analytics cloud solutions to close the gaps between data, insights and action. By transforming data into Active Intelligence, businesses can drive better decisions, improve revenue and profitability, and optimize customer relationships. Qlik does business in more than 100 countries and serves over 50,000 customers around the world.