stucas
Contributor

How do I preserve a timestamp(6): Oracle to PostgreSQL

Scenario:

I have an incredibly simple flow: tDbOracleInput --> tMap --> tDbPostgresqlOutput

TLDR:

How do I get an Oracle timestamp(6) into a PostgreSQL timestamp(6) without losing any precision?

Issue 

Reading from the source table: a mask of "yyyy-MM-dd HH:mm:ss.SSS" transfers only the first three fractional digits. A mask of "yyyy-MM-dd HH:mm:ss.SSSSSS" is no better: only the first three digits are parsed and the rest are zero-padded, so "2023-11-12 15:45:00.123456" becomes either "2023-11-12 15:45:00.123" or "2023-11-12 15:45:00.000123"

I attempted to ameliorate this by using the Oracle to_char() function when selecting the field, i.e. to_char(GG_TSTAMP, 'YYYY-MM-DD HH24:MI:SS.FF6'), which brings back the string form correctly. But this leads to the second problem: the tMap also uses a java.util.Date in its output schema and again I'm losing digits. I assume this is happening because java.util.Date does not support formatting finer than milliseconds, and maybe Talend is not using the Java 8 java.time classes internally?
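For what it's worth, the ".000123" artefact above is exactly what the legacy java.util.Date / SimpleDateFormat combination produces: in SimpleDateFormat the pattern letter 'S' means milliseconds, not fraction-of-second, so six S's just zero-pad the three digits that survive. A minimal sketch outside Talend (my own test harness, not anything from the job) reproducing it:

```java
import java.sql.Timestamp;
import java.text.SimpleDateFormat;

public class MaskDemo {
    public static void main(String[] args) {
        // Timestamp.valueOf understands fractions up to nanoseconds
        Timestamp ts = Timestamp.valueOf("2023-11-12 15:45:00.123456");
        System.out.println(ts);            // 2023-11-12 15:45:00.123456
        System.out.println(ts.getNanos()); // 123456000

        // In SimpleDateFormat, 'S' is milliseconds: formatting through the
        // Date view keeps only .123 and zero-pads it to six digits.
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSSSSS");
        System.out.println(sdf.format(ts)); // 2023-11-12 15:45:00.000123
    }
}
```

java.sql.Timestamp, by contrast, keeps a separate nanosecond field, so the precision survives as long as nothing reduces the value to its millisecond epoch.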

Within the tMap I have converted the given timestamp string into a Date via the following function, so I'm almost convinced that this is Talend and not me!

// Requires: java.time.*, java.time.format.*, java.util.Date, java.util.Locale
public static Date fromTimestamp(final String timestamp)
{
    // Note: timezones are not relevant here; need a verbatim copy

    DateTimeFormatter parser = new DateTimeFormatterBuilder()
        .append(DateTimeFormatter.ISO_LOCAL_DATE)
        .appendLiteral(' ')
        .append(DateTimeFormatter.ISO_LOCAL_TIME)
        .toFormatter(Locale.ENGLISH);

    LocalDateTime localDateTime = LocalDateTime.from(parser.parse(timestamp));
    Long epoch = ZonedDateTime.of(localDateTime, ZoneId.systemDefault()).toInstant().toEpochMilli();

    // java.sql.Timestamp extends java.util.Date and keeps the nanoseconds
    Date epochDate = java.sql.Timestamp.valueOf(localDateTime);

    System.out.println("----------------------------------------------------------------------");
    System.out.println("----> Request for Date from: " + timestamp);
    System.out.println("    localDateTime: " + localDateTime.format(parser));
    System.out.println("            epoch: " + epoch.toString());
    System.out.println("             Date: " + epochDate.toString());
    System.out.println("     Date instant: " + epochDate.toInstant().toString());

    return epochDate;
}
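Condensing that helper into a standalone sketch shows the parsing is not where the digits go: the returned java.sql.Timestamp still holds all six, but any consumer that rebuilds the value from the millisecond epoch (Date.getTime()) can only ever see three. This is my own cut-down version for testing, not the job code itself:

```java
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.util.Date;
import java.util.Locale;

public class FromTimestampCheck {
    // condensed version of the helper above
    static Date fromTimestamp(String timestamp) {
        DateTimeFormatter parser = new DateTimeFormatterBuilder()
            .append(DateTimeFormatter.ISO_LOCAL_DATE)
            .appendLiteral(' ')
            .append(DateTimeFormatter.ISO_LOCAL_TIME)
            .toFormatter(Locale.ENGLISH);
        return Timestamp.valueOf(LocalDateTime.parse(timestamp, parser));
    }

    public static void main(String[] args) {
        Date d = fromTimestamp("2023-11-12 15:45:00.123456");
        // The Timestamp subclass still holds all six digits...
        System.out.println(d); // 2023-11-12 15:45:00.123456
        // ...but anything that goes back through the millisecond epoch
        // is limited to three:
        System.out.println(new Timestamp(d.getTime())); // 2023-11-12 15:45:00.123
    }
}
```

So the truncation has to be happening somewhere downstream of the tMap, wherever the component maps the Date back onto the JDBC parameter.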

All ideas gratefully received; this column is a synchronisation timestamp and is an important field for us.

1 Reply
stucas
Contributor
Author

So, I have found a workaround, but to me this is something that should not need to be done:

 

1. Read the timestamp from the source DB as a string (formatted as required)

2. Pass this through the tMap (again as a string)

3. In the tDBOutput, use the "Additional Columns" table (under Advanced settings) to apply field replacement, converting the string back into a timestamp
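One reason the string route is safe on the Java side: the java.time formatters (unlike the legacy SimpleDateFormat masks) treat 'S' as a true fraction-of-second, so a mask of SSSSSS really does mean six fractional digits, and a string carried through the tMap loses nothing. A small sketch, independent of Talend:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class JavaTimeMask {
    public static void main(String[] args) {
        // In java.time, 'S' is fraction-of-second, so SSSSSS parses and
        // formats all six microsecond digits faithfully.
        DateTimeFormatter f = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSSSSS");
        LocalDateTime t = LocalDateTime.parse("2023-11-12 15:45:00.123456", f);
        System.out.println(t.getNano()); // 123456000
        System.out.println(t.format(f)); // 2023-11-12 15:45:00.123456
    }
}
```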

Why I think it's a problem

  • Firstly, this means that you cannot use the metadata created from reading a database table's definition - the timestamps (or other such columns) have to become strings, which means saving an almost-duplicate version of the table as a "generic schema". This makes the job confusing and increases maintenance overhead.
  • It seems to me that a "timestamp" field in an RDBMS should not be interpreted as a java.util.Date; it should use something more akin to a java.sql.Timestamp object (which is derived from Date).
  • If I do change the inbound/outbound type to "TIMESTAMP", the tDBInput gives a warning: "The schema's DB type is not correct for this component".
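To illustrate the second bullet: because java.sql.Timestamp extends java.util.Date, a Timestamp could flow through a Date-typed schema column without loss; the precision only dies if something rebuilds the value from the millisecond epoch along the way. A quick demonstration (plain Java, outside Talend):

```java
import java.sql.Timestamp;
import java.util.Date;

public class SubclassDemo {
    public static void main(String[] args) {
        // A Timestamp *is a* Date, so a Date-typed column can carry it
        // intact, nanosecond field and all.
        Date asDate = Timestamp.valueOf("2023-11-12 15:45:00.123456");
        System.out.println(asDate instanceof Timestamp); // true
        System.out.println(asDate); // 2023-11-12 15:45:00.123456

        // Rebuilding from the millisecond epoch is where the micros die:
        Date rebuilt = new Date(asDate.getTime());
        System.out.println(new Timestamp(rebuilt.getTime())); // 2023-11-12 15:45:00.123
    }
}
```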

 

I'm searching for a neater "solution" ... but this will get me