AndreasFenske
Contributor III

Timezone Conversion - Oracle CET to Kafka (Avro) UTC

Hello,

We are facing an issue when transferring data from Oracle to Kafka with Qlik Replicate.

We are using Avro logical data types. On the Oracle side the timestamp information is in the CET timezone, but when we check the timestamp in Kafka, which is a long value due to Qlik's Unix epoch time conversion, it is in UTC. Either this is because our Kafka cluster runs on servers set to the UTC timezone, or Qlik Replicate generally converts to UTC.

Here is an example:

Oracle timestamp field value: 04/01/2022 11:21:03.087262000  (DB timezone is Zurich/CET)

Kafka timestamp (long) field value: 1641295263087262

When decoding the Kafka long value back, we get:

Tue Jan 04 2022 11:21:03 GMT+0000  or:

Tue Jan 04 2022 12:21:03 GMT+0100
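For reference, the decoding can be reproduced in Python (a sketch; Avro's timestamp-micros logical type counts microseconds since the Unix epoch):

```python
from datetime import datetime, timedelta, timezone

micros = 1641295263087262  # long value read from Kafka

# timestamp-micros counts microseconds since 1970-01-01 00:00:00 UTC;
# adding a timedelta keeps full microsecond precision (no float rounding).
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
dt_utc = epoch + timedelta(microseconds=micros)
print(dt_utc)  # 2022-01-04 11:21:03.087262+00:00
```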

The GMT+0100 (= CET) value obviously does not correspond to the original value; it is off by one hour.

Is there an option to tell Qlik Replicate which timezone the Oracle DB is in?

Or is there any other way to ingest the correct timestamp (long) value into Kafka with Qlik?

Thanks for any help on this.

Best Regards,
Andreas

13 Replies
Steve_Nguyen
Support

For your Oracle source, what is the Oracle data type of this time column?

In Replicate, when you click on the table transformations, what does it show as the data type for this column?

Help users find answers! Don't forget to mark a solution that worked for you! If already marked, give it a thumbs up!
AndreasFenske
Contributor III
Author

The source column in Oracle is defined as:

TIMESTAMP(9), Length: 11, Scale: 9, Nullable: No

In Qlik Replicate's table transformations it is shown as DATETIME(9).

In Kafka, it is defined as logical Avro data type:

{
  "default": null,
  "name": "CREATED",
  "type": [
    "null",
    {
      "logicalType": "timestamp-micros",
      "type": "long"
    }
  ]
},

From my understanding, in Oracle there is no timezone information attached directly to a plain TIMESTAMP field. However, the database itself has a timezone.

For example this query:

SELECT DBTIMEZONE FROM DUAL;

Returns: +01:00

The servers behind the Kafka cluster are running UTC.

So we have in total this setup:

  • Oracle - Zurich timezone (CET)
  • Qlik Replicate Linux Servers - Zurich timezone (CET)
  • Kafka Cluster - UTC
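Given that setup, the epoch value one would expect for the original Zurich-local timestamp can be worked out (a sketch using Python's zoneinfo; the tz name "Europe/Zurich" is an assumption based on the DB timezone described above):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# Oracle value 04/01/2022 11:21:03.087262 interpreted as Zurich local time
local = datetime(2022, 1, 4, 11, 21, 3, 87262, tzinfo=ZoneInfo("Europe/Zurich"))

# exact integer microseconds since the Unix epoch
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
expected_micros = (local - epoch) // timedelta(microseconds=1)

print(expected_micros)                     # 1641291663087262
print(1641295263087262 - expected_micros)  # 3600000000 -> the observed value is exactly 1 hour ahead
```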

 

Steve_Nguyen
Support

I am not sure what your Kafka endpoint settings are, but have you checked this:

If you selected Avro, optionally select the "Use logical data types for specific data types" check box to map some of the number-based Qlik Replicate data types to Avro logical data types. When this option is not selected (the default), all Qlik Replicate data types will be mapped to Avro primitive data types. For more information on Qlik Replicate to Avro data type mapping, see Mapping from Qlik Replicate Data Types to Avro.

https://help.qlik.com/en-US/replicate/November2021/Content/Replicate/Main/Kafka/kafka_setting_genera...

AndreasFenske
Contributor III
Author

The option "Use logical data types for specific data types" is selected. When it is not selected, all fields become string fields, and timestamp fields are then not converted to Unix epoch times either; presumably all timestamp values are written 1:1 as strings, though I would have to verify that. However, we want the logical Avro data types, where timestamps are converted to Unix epoch times (long values).

Heinvandenheuvel
Specialist II

The problem is that when the timestamp value is retrieved from the source, there is no timezone actively associated with it.

One could perhaps argue that this is sloppy DB design work. They should have used TIMESTAMP WITH TIME ZONE and all would be well (probably - I did not verify), so let them solve it for Kafka.

Within Replicate you could define a transformation on the column, for example: datetime($CREATED || '+01:00')

Google for 'sqlite datetime function' --- > https://www.sqlite.org/lang_datefunc.html

You can test this easily in the designer with :  datetime (datetime('now','localtime') || '+01:00') ... parse ... test.

The likely problem with that is the hard-coded timezone offset - it is possibly summertime (DST) dependent.

You can use source_lookup with a long time-to-live to fetch DBTIMEZONE, as long as the DB provides the timezone as a numeric offset, not by name:

datetime ($CREATED || source_lookup(10000,'SYS','DUAL','DBTIMEZONE','1+1=2'))

If the source DB has a timezone by name, then you would have to map it in the task with an IF-THEN-ELSE transformation, or provide a private lookup table in the source DB, or just add a timezone column to the target and let them figure it out!

Another thought (to avoid going back to the source DB) is to use an IF-THEN-ELSE to map (part of) the Replicate-provided variable $AR_V_SOURCE_NAME to a timezone, but that's not very robust, as source DB endpoint names change when going from dev to QA to prod. Still....

You may also want to look at using a transformation to switch to epoch directly: (julianday($CREATED) - 2440587.5)*86400.0

You'd have to add or subtract the right number of seconds to get the timezone in place, but you'd be in full control.
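Since Replicate column transformations are SQLite expressions, the julianday approach can be sanity-checked outside Replicate with Python's sqlite3 module (a sketch; $CREATED is stood in for by a bound sample literal, and note that julianday's float math only keeps roughly millisecond precision, so sub-second digits of the original value are approximate):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Sample value standing in for $CREATED; the '+01:00' suffix is the
# hard-coded CET offset from the datetime(...) suggestion above.
created = "2022-01-04 11:21:03"
(epoch_secs,) = con.execute(
    "SELECT (julianday(? || '+01:00') - 2440587.5) * 86400.0", (created,)
).fetchone()

print(round(epoch_secs))  # 1641291663 -> 10:21:03 UTC, the corrected instant
```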

hth,

Hein.

AndreasFenske
Contributor III
Author

Thank you for your detailed answer.

Yes, TIMESTAMP WITH TIME ZONE might solve the problem; however, in most companies I know it is not used. So mostly we have plain TIMESTAMP fields plus the DB timezone.

I could probably solve the issue with field-level transformations like the examples you provided, but we have around 230 tables, and changing all timestamp fields for all of them that way would be quite cumbersome. From my perspective, it should already work by default: the DB timezone should be considered when processing/casting timestamp fields into Unix epoch time. Unfortunately, I did not see any option in the Oracle endpoint connection to configure the DB timezone.

As reference I found this entry in the community:

https://community.qlik.com/t5/Qlik-Replicate/Convert-timezone/td-p/1874853

There is a screenshot of the MySQL Database Endpoint Configuration and there you can set the DB timezone.

So, really, when taking a timestamp from any source database and converting it into a Unix epoch time, the timezone should be taken into account.

Currently, for the replication "Oracle -> Kafka" with logical (Avro) data types, Qlik Replicate always converts the source timestamp value 1:1 into a UTC Unix epoch time, regardless of the source DB timezone.

Can you confirm that?

Would it be a feature request to change that behavior and consider the DB source timezone when converting to Unix Epoch time?

AndreasFenske
Contributor III
Author

Another idea: could it be related to the created connection?

In the session with the database, timezone information is provided by the client so that the client can receive timestamps in the desired timezone. Our Qlik server has CET as its local time, so when Qlik Replicate creates a connection to the Oracle database, maybe CET is configured in the connection session.

Could we somehow (e.g. Qlik internal parameter) overwrite this timezone information in the session when Qlik Replicate creates a connection to the database?

Steve_Nguyen
Support

1. Could we somehow (e.g. Qlik internal parameter) overwrite this timezone information:

Answer: if the Oracle ODBC connection had such a parameter, then we could use it in Replicate, but I did a quick search and do not see a timezone connection string parameter for Oracle ODBC.

2. Would it be a feature request to change that behavior and consider the DB source timezone when converting to Unix Epoch time?

Answer: yes, this would be a feature request.

https://community.qlik.com/t5/Ideation/ct-p/qlik-product-insight

john_wang
Support

Hello @AndreasFenske , copy @Heinvandenheuvel , @Steve_Nguyen ,

May I know the Oracle server and client versions?

If I'm correct, you are running Oracle client 12.1.0?

BTW, the problem occurs on Full Load (FL) only; CDC works fine. Is that correct?

Regards,

John.
