gkaur
Contributor III

Store before and after image not working for Postgres data source

Does the "Store before and after image" feature work for a Postgres data source?

Unfortunately, for me it does not. We have made sure that wal_level is set to logical and that the Replicate user is a superuser. Not sure what else is missing?
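For reference, this is roughly how we verified those prerequisites (the role name is ours; yours will differ):

    SHOW wal_level;                           -- expect: logical
    SELECT rolname, rolsuper FROM pg_roles
    WHERE rolname = 'replicate_user';         -- expect: rolsuper = t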

6 Replies
lyka
Support

Hello,

When using PostgreSQL as a source, replication of the before image is not supported.

https://help.qlik.com/en-US/replicate/May2023/Content/Global_Common/Content/SharedReplicateHDD/Postg...

Thanks

Lyka

gkaur
Contributor III
Author

Oh no! Does this mean we cannot convert physical deletes on Postgres to logical deletes on the target side?

john_wang
Support

Hello @gkaur ,

If you mean the operation_indicator function, then it works fine (even when the PK is changed), for example with the expression: operation_indicator('DELETE', 'UPDATE', 'INSERT')
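A minimal sketch of the pattern (the table and column names below are only examples): add a computed column, e.g. OP_IND, in Table Settings > Transform with the expression above. Replicate then converts a source DELETE into a target UPDATE that records the indicator value, so live rows can still be selected on the target:

    -- table/column names assumed for illustration
    SELECT * FROM orders WHERE op_ind <> 'DELETE';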

Hope this helps.

Regards,

John.

 

gkaur
Contributor III
Author

No John, I tried this option as well, but it still deletes the row from the target table in Snowflake. The soft delete first triggers a physical delete on the record and then, using the attrep_changes table, triggers an insert statement. Since the before image is not captured in Postgres, all columns except the primary keys become NULL, and the insert fails because it cannot push NULL into NOT NULL columns.
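In other words, the insert that arrives on the target looks roughly like this (table and column names assumed for illustration):

    -- the before image is missing, so non-key columns arrive as NULL
    INSERT INTO orders (order_id, customer_name, op_ind)
    VALUES (42, NULL, 'DELETE');
    -- fails when customer_name is declared NOT NULL on the Snowflake target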

Is there any way to avoid the DELETE operation altogether on the Apply/final table and have it captured only in the __ct table?

I feel I am stuck 😞 

john_wang
Support

Hello @gkaur ,

Thanks for your patience.

The PostgreSQL source endpoint has some limitations that other relational databases do not, including:

In PostgreSQL, REPLICA IDENTITY can be {DEFAULT | USING INDEX index_name | FULL | NOTHING}. When it is set to DEFAULT, PostgreSQL records the old values of the primary key columns in the write-ahead log; the old values of the other columns are not recorded. This is why those column values are set to NULL in Qlik Replicate. Setting it to FULL would capture the old values of all columns, but unfortunately that is not officially supported in current Qlik Replicate versions. Please open a Feature Request if necessary.
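For reference, REPLICA IDENTITY can be inspected and changed with standard PostgreSQL commands (table name assumed; as noted above, FULL is not officially supported by Replicate, so treat this as informational only):

    SELECT relname, relreplident    -- 'd' = DEFAULT, 'f' = FULL, 'n' = NOTHING, 'i' = INDEX
    FROM pg_class
    WHERE relname = 'orders';

    ALTER TABLE orders REPLICA IDENTITY FULL;   -- logs old values of all columns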

BTW, how about changing the apply mode from "Batch optimized apply" to "Transactional apply"? Certainly, in transactional apply mode the performance is not as good.

Correction: please do not try transactional apply mode, as it is not supported by the Snowflake target endpoint.

Hope this helps.

Regards,

John.

 

Heinvandenheuvel
Specialist III

gkaur>> Is there any way to avoid the DELETE operation altogether on the Apply/final table and have it captured only in the __ct table?

@gkaur - avoiding deletes is relatively easy to do with a FILTER expression: $AR_H_OPERATION != 'DELETE'

However, this will skip the row entirely, for both APPLY and STORE. With logging for TRANSFORMATION set to Trace, you'll get the relatively meaningless/contextless log line: [TRANSFORMATION ]T: Filter expression is false, skip the row

I suspect your only choice is the above filter plus an additional Store Changes only task for the tables needing a soft delete, with a filter passing only deletes. Then create an independent VIEW over the main table(s) that does not return rows which appear in the __CT table with operation = D, or have a separate process running on the Snowflake target DB pick up the D rows and update the target rows to indicate the (soft) delete. The fixup could run daily/hourly, or you could just have the __CT table(s) serve as external soft-delete flags through that view.
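A rough sketch of such a view (table, key, and header column names assumed; adjust to your schema):

    CREATE OR REPLACE VIEW orders_live AS
    SELECT t.*
    FROM orders t
    WHERE NOT EXISTS (
        SELECT 1
        FROM orders__ct c
        WHERE c.order_id = t.order_id          -- join on your PK columns
          AND c.header__change_oper = 'D'      -- rows flagged deleted at the source
    );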

fwiw,

Hein