Hello,
We are facing an issue when transferring data from Oracle to Kafka with Qlik Replicate.
We are using Avro logical data types. On the Oracle side the timestamp information is in the CET timezone, but when we check the timestamp in Kafka, which is a long value because Qlik converts it to Unix epoch time, it is in UTC. Either this is because our Kafka cluster runs on servers set to the UTC timezone, or Qlik Replicate generally converts it to UTC.
Here is an example:
Oracle timestamp field value: 04/01/2022 11:21:03.087262000 (DB timezone is Zurich/CET)
Kafka timestamp (long) field value: 1641295263087262
When decoding the Kafka long value back, we get:
Tue Jan 04 2022 11:21:03 GMT+0000 or:
Tue Jan 04 2022 12:21:03 GMT+0100
The GMT+0100 (= CET) value obviously does not correspond to the original value; it is off by one hour.
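For reference, here is a minimal Python sketch of the decoding (assuming the long is epoch time in microseconds, which the number of digits suggests); the timezone names used here are our assumptions:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

kafka_value = 1641295263087262  # the long received from Kafka, assumed epoch time in microseconds

# Split into whole seconds and microseconds to avoid float precision issues
secs, micros = divmod(kafka_value, 1_000_000)
decoded = datetime.fromtimestamp(secs, tz=timezone.utc) + timedelta(microseconds=micros)
print(decoded)                                        # 2022-01-04 11:21:03.087262+00:00
print(decoded.astimezone(ZoneInfo("Europe/Zurich")))  # 2022-01-04 12:21:03.087262+01:00

# What the long would be if 11:21:03 were interpreted as Zurich/CET wall-clock time:
original = datetime(2022, 1, 4, 11, 21, 3, 87262, tzinfo=ZoneInfo("Europe/Zurich"))
print(int(original.timestamp()) * 1_000_000 + original.microsecond)  # 1641291663087262 (one hour less)
```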
Is there an option to tell Qlik Replicate which timezone the Oracle DB is in?
Or is there any other way to ingest the correct timestamp (long) value into Kafka with Qlik?
Thanks for any help on this.
Best Regards,
Andreas
Hi @john_wang,
We use Oracle Server and Client version 19.11. The following client package was installed on the Qlik server (Linux):
oracle-instantclient19.11-basic-19.11.0.0.0-1.x86_64.rpm
Sorry, what do you mean by FL?
The replication/CDC works fine so far; we have no issue bringing the data to Kafka.
Regards,
Andreas
Oh sorry, the DBA replied that the DB version is Oracle Database 19.8.1.0.
The Oracle client version is 19.11, as mentioned above.
Hello @AndreasFenske,
Thank you for your update.
In the past we experienced an Oracle client/server version incompatibility issue that caused the session timezone to be set incorrectly; in that case the client version was Oracle client 12.1.0. Obviously this is not your scenario.
Regards,
John.
Hello @AndreasFenske,
We are working with the internal team on this topic and will keep you posted.
Until we get an enhancement or fix, there are two workarounds (WA):
1- Turn off the option "Use logical data types for specific data types" in the endpoint settings, or
2- Keep it turned on but manually change the data type from DATETIME(n) to STRING(30) in a table transformation. In this scenario you will get a STRING that represents the datetime in local time (see the sketch after this list).
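As a rough, untested sketch of how a consumer could handle the STRING value from the second workaround: the exact string format depends on the source column and the transformation, and the "%Y-%m-%d %H:%M:%S.%f" pattern and the Europe/Zurich zone below are assumptions, not something Qlik guarantees.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical string received from Kafka after the DATETIME(n) -> STRING(30) transformation
raw = "2022-01-04 11:21:03.087262"

# Parse the local wall-clock string and attach the known DB timezone (assumed Europe/Zurich)
local_dt = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=ZoneInfo("Europe/Zurich"))
print(local_dt)                           # 2022-01-04 11:21:03.087262+01:00
print(local_dt.astimezone(timezone.utc))  # 2022-01-04 10:21:03.087262+00:00
```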
Hope this helps.
Regards,
John.