How does Qlik Replicate convert DB2 commit timestamps to Kafka message payload, and why are we seeing a lag of several hours?
When Qlik Replicate reads change events from the DB2 iSeries journal, each journal entry includes both:
Entry timestamp (JOENTTST): when the individual operation was logged. Qlik Replicate does not use this value.
Commit timestamp (JOCTIM): when the transaction was committed. Qlik Replicate uses this value as the payload timestamp field.
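The distinction between the two fields can be sketched as follows. This is an illustrative stand-in, not Qlik Replicate's internal API; the field names JOENTTST and JOCTIM come from the DB2 iSeries journal layout, but the dict shape and helper function are assumptions.

```python
# Hypothetical sketch: which journal field feeds the payload timestamp.
# The dict below is an illustrative stand-in, not Qlik Replicate's API.
def payload_timestamp(journal_entry: dict) -> str:
    # JOENTTST (entry timestamp) is ignored by Qlik Replicate.
    # JOCTIM (commit timestamp) is what ends up in the payload.
    return journal_entry["JOCTIM"]

entry = {
    "JOENTTST": "2024-03-01-14.05.31.123456",  # when the operation was logged
    "JOCTIM":   "2024-03-01-14.07.02.654321",  # when the transaction committed
}
print(payload_timestamp(entry))  # prints the commit timestamp, not the entry timestamp
```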
Qlik Replicate then converts the DB2 iSeries journal commit timestamp to UTC. All internal event timestamps are normalized to UTC before the payload is serialized, regardless of the source system's local timezone. This guarantees that downstream consumers (Kafka / Schema Registry / Avro) work from a single, consistent time base.
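The normalization step amounts to a timezone conversion like the one below. The source timezone "America/Chicago" is an assumed example for illustration only.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Illustrative sketch of the normalization step: a commit timestamp recorded
# in the source system's local timezone (assumed here to be America/Chicago)
# is converted to UTC before serialization.
local_commit = datetime(2024, 3, 1, 14, 7, 2, tzinfo=ZoneInfo("America/Chicago"))
utc_commit = local_commit.astimezone(ZoneInfo("UTC"))
print(utc_commit.isoformat())  # 2024-03-01T20:07:02+00:00 (CST is UTC-6)
```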
Qlik Replicate then populates the data.timestamp field in the Kafka message payload. This field carries the transaction's commit timestamp as recorded in the DB2i journal, expressed in UTC; it does not reflect the Kafka broker's or the source DB2 i system's local timezone.
An apparent offset of several hours between data.timestamp and the source system's local clock is therefore the result of timezone normalization, not replication lag.
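A simple consumer-side check can help tell the two apart. This heuristic is not part of Qlik Replicate; the function and the five-minute tolerance are assumptions for illustration. If the gap between the current UTC time and data.timestamp is close to the source system's UTC offset, the "lag" is likely a timezone misreading rather than real replication delay.

```python
from datetime import datetime, timezone, timedelta

# Hedged consumer-side heuristic (not part of Qlik Replicate): a gap that
# roughly equals the source system's UTC offset points to timezone skew,
# not replication lag. The 5-minute tolerance is an arbitrary choice.
def looks_like_timezone_skew(payload_ts: datetime, now: datetime,
                             source_utc_offset_hours: int) -> bool:
    gap = now - payload_ts
    expected = timedelta(hours=abs(source_utc_offset_hours))
    return abs(gap - expected) < timedelta(minutes=5)

now = datetime(2024, 3, 1, 20, 8, 0, tzinfo=timezone.utc)
ts  = datetime(2024, 3, 1, 14, 7, 2, tzinfo=timezone.utc)  # UTC commit time misread as local
print(looks_like_timezone_skew(ts, now, -6))  # True: gap of ~6 h matches the UTC-6 offset
```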
Note that some operations (DDL, full load, or deletes) may omit data.timestamp because no valid source commit time exists. This is also expected behavior:
DDL: Not tied to a commit
Full Load: No CDC commit time yet
Delete: DB2 journals do not always include a valid commit timestamp for the before-image
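Because of these cases, consumers should read data.timestamp defensively. The message shape below is an assumed example, not a documented Qlik Replicate payload schema:

```python
import json

# Illustrative consumer sketch: data.timestamp may be absent for DDL,
# full-load, or delete events, so read it defensively rather than
# assuming it is always present. The JSON shape is an assumption.
def extract_commit_ts(message_value: bytes):
    payload = json.loads(message_value)
    return payload.get("data", {}).get("timestamp")  # None when omitted

msg = b'{"data": {"operation": "DELETE"}}'
print(extract_commit_ts(msg))  # None: no commit timestamp for this event
```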