FrancoHR
Partner - Contributor III

Getting null on S3 Target

A customer has configured replication tasks using Log Stream to two destinations: S3 and Amazon Aurora (Replicate May 2021).

The issue occurs only in CDC mode and only with the S3 target.

The problem table is POLICY, which has the POLICY_ID field as its PK. This table has supplemental logging enabled on the PK, as the documentation indicates, but NULL values are still being generated.

Is there a technical explanation for why this problem occurs only with S3 and not with Aurora? It is worth mentioning that the S3 task runs in Store Changes mode and the Aurora task in Apply Changes mode.

What would be the solution to avoid the NULL values? Should supplemental logging be enabled for all columns and not just for the PK?

Attached are the task logs and the DDL of the table.

 

Regards,


7 Replies
sureshkumar
Support

Hello Team,

There is a limitation on the Oracle source side:

"During Change Processing, columns without supplemental logging that are not updated will be inserted as NULL in the Change Table." In other words, supplemental logging should be enabled for all columns.

Limitations and considerations | Qlik Replicate Help

Refer to the article below:
Why Oracle ALL Column Supplemental Logging is required.
Oracle ALL Column Supplemental Logging - Qlik Community - 1799267

 

Regards,

Suresh

SachinB
Support

Hello @FrancoHR ,

Thanks for reaching out to us. Just adding some more information to @sureshkumar's post:

Please add supplemental logging on ALL columns for this particular table on the Oracle source, and you will see all the values populated in the __ct table. Note that the "Automatically add supplemental logging" option only turns on supplemental logging for the PK columns. If you need all columns to be written to the redo log, you need to enable supplemental logging for all columns.

Below is a link on how to enable supplemental logging:


http://oracle-help.com/oracle-database/types-supplemental-logging-database-level/#:~:text=To%20enabl....

To enable supplemental log data for ALL columns:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Database altered.


To check whether it is enabled:

SQL> SELECT SUPPLEMENTAL_LOG_DATA_ALL FROM V$DATABASE;

SUP
---
YES
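
If you prefer to scope this to the single table rather than the whole database, a table-level alternative is also possible. This is just a sketch; the SCOTT schema below is a placeholder for the actual table owner:

SQL> ALTER TABLE SCOTT.POLICY ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Table altered.

To check the table-level log groups:

SQL> SELECT LOG_GROUP_NAME, LOG_GROUP_TYPE FROM DBA_LOG_GROUPS WHERE TABLE_NAME = 'POLICY';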

====

Useful community links:

https://community.qlik.com/t5/Knowledge/Elaboration-on-supplemental-logging-requirements/ta-p/187632...

https://help.qlik.com/en-US/replicate/May2022/Content/Global_Common/Content/SharedReplicateHDD/Oracl...

Regards,

Sachin B

 

 

FrancoHR
Partner - Contributor III
Author

Hello @SachinB ,

Thanks for the reply.


Could you tell me why this only occurs with S3 and not with Aurora? Could it be due to the CDC replication mode (Store Changes vs. Apply Changes)?

Regards,

Heinvandenheuvel
Specialist III

>> Could you tell me why this only occurs with S3 and not with Aurora? Could it be due to the CDC replication mode (Store Changes vs. Apply Changes)?

I'm sure this is related to store vs. apply. Surely you suspect as much yourself, so why not confirm it by swapping the endpoints as a test? Trust, but verify.

In the Apply Changes configuration you are probably seeing the old values, from before the update, for the fields that haven't changed and that would be NULL in the Store Changes task. This is fine: the columns that changed are updated, and those that did not change keep their values. Perfect.

Hein.

 

john_wang
Support

Hello @FrancoHR ,

Could you please let us know the final status now? I hope we helped you move this forward.

BTW,

1. Replicate 2021.5 has reached its end-of-support (EOS) date; please plan to upgrade to a supported version.

2. There is a known limitation in Replicate 2021.5: when the target is S3, a source value of SPACE (' ') is ignored and replicated to S3 as NULL. This was fixed in 2021.11, and the fix is included in later versions as well. Replicate 2022.11 is highly recommended.

Hope this helps.

Regards,

John.

 

Help users find answers! Do not forget to mark a solution that worked for you! If already marked, give it a thumbs up!
SwathiPulagam
Support

Hi @FrancoHR ,

 

Since supplemental logging has not been enabled for all the columns in the table, Replicate can only capture the values of the updated columns plus the PK value.

For an RDBMS target with Apply Changes, Replicate updates the modified columns in the target table by matching on the primary key (PK).

However, for a file target like S3, all columns in the Change Table (CT) have to be filled, so Replicate writes the PK and the updated column values to the CT table and sets all the remaining columns to NULL.
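
As a quick illustration (a hypothetical example; the STATUS column and the values are made up for this sketch), suppose the source update is:

UPDATE POLICY SET STATUS = 'ACTIVE' WHERE POLICY_ID = 1001;

-- With PK-only supplemental logging, the redo record carries only POLICY_ID and STATUS.
-- Aurora (Apply Changes): Replicate can apply something like
--   UPDATE POLICY SET STATUS = 'ACTIVE' WHERE POLICY_ID = 1001;
-- so the untouched columns keep their existing values on the target.
-- S3 (Store Changes): the change record must contain every column, so it is written as
--   POLICY_ID = 1001, STATUS = 'ACTIVE', all other columns = NULL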

Hope this is clear. Let me know if you have any questions.

 

Thanks,

Swathi

FrancoHR
Partner - Contributor III
Author


Hi team,


The issue is now clear. In the end it was not a Qlik error; rather, the client was misusing the logic used to handle the updates.
To solve this problem, they should either enable supplemental logging for all columns or change their programming logic.


Thanks for the help.