guilherme-matte
Partner - Creator

Replicate issue: ORA-01801: date format is too long for internal buffer

Hello guys!

I'm replicating an Oracle source to an Azure target, and I'm getting the following error for one of the four date columns in the table:

ORA-01801: date format is too long for internal buffer.

I know which column it is, but I'm not sure how to solve it. I've already read the related article on the forum:

ORA-01801: date format is too long for internal bu... - Qlik Community - 1806805

I've downloaded all the data, but everything seems to be in order as far as my brief analysis goes. There are some blank fields and ONE future date, which I do not think could be the root cause.

One interesting point: part of the table is replicated to Azure before the task fails, and a SELECT COUNT on the Azure table returns exactly 850,000 records. This number seems too round to be just one random row with an error...

Could there be some interaction between Replicate and Oracle regarding the number of rows? Any ideas about possible causes?

Kind regards! 

 


Accepted Solutions
john_wang
Support

Hello @guilherme-matte ,

Thanks for reaching out.

Please try this option:

  1. Open the Oracle source endpoint.
  2. Go to the Advanced tab.
  3. Add an internal parameter with the name: bindDateAsBinary

Enable the radio button and save the setting. A sample:

[screenshot: john_wang_0-1680593634699.png]

Rerun the task and confirm whether the problem is solved and the data is replicated correctly to the target side.

Hope this helps.

Regards,

John.

 

 

Help users find answers! Do not forget to mark a solution that worked for you! If already marked, give it a thumbs up!


7 Replies
Heinvandenheuvel
Specialist III

I hope that @john_wang's suggestion of using bindDateAsBinary works out. Please let us know.

At the same time I suspect you, and others, would also want to go deeper and understand what specific datum caused this.

>>>  I've read the article already 

Good to see you did the research, as one does. The article is not very helpful though. 'Remove long literals from the date format string' - what does that mean? I for one need an example! One example uses TO_CHAR, as in https://stackoverflow.com/questions/55767980/how-to-fix-ora-01801-date-format-is-too-long-for-intern... - But it still raises the question: why did it work 850,000 times and then fail? And was Replicate using TO_CHAR, or was there maybe a special NLS setting?

So what _does_ the input date look like? What data type is used? Show an example, perhaps even a DUMP example.

>>>  exactly 850.000 records. This number seems to be too exactly 

That number is a multiple of 10,000, which is the default commit rate. In the UI: Task Settings - Full Load Settings - Full Load Tuning - "Commit rate during full load". In the task JSON: "task_settings": { "target_settings": { "max_transaction_size": ....

If you do want to get to the bottom of this, reduce the commit rate to 100 and try again. When it fails, you'll know within 100 rows where the issue is; just select the id + dates for the next 100 rows beyond the last row on the source (or trust the ordinal number and select based on that). Next, study or refine further.
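The narrowing trick above can be sketched in a few lines. This is just an illustration of the arithmetic, not part of Replicate; the function name is made up for the example.

```python
# Sketch: with a commit rate of c rows per commit, a full load that commits
# n rows before failing leaves the offending row somewhere in the next,
# uncommitted batch - i.e. rows n+1 .. n+c in load order.

def suspect_window(rows_committed: int, commit_rate: int) -> tuple[int, int]:
    """Return the (first, last) ordinal positions of the suspect batch."""
    return rows_committed + 1, rows_committed + commit_rate

# With the default rate of 10,000, a failure after 850,000 committed rows
# leaves a 10,000-row haystack; rerunning at a rate of 100 shrinks it.
print(suspect_window(850_000, 10_000))  # (850001, 860000)
print(suspect_window(848_100, 100))     # (848101, 848200)
```

So the smaller the commit rate on the rerun, the tighter the window around the bad row.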

I'm kinda surprised this was a source error, suggesting that just the SELECT <columns> FROM action failed on the Oracle side. You may want to study (LOGGING - SOURCE_UNLOAD - TRACE (verbose?)) that SELECT with and without the internal parameter, and possibly repeat it outside Replicate in SQL*Plus or SQL Developer, although most tools would not use binds the way Replicate does.

Good luck, and let us know?

 

guilherme-matte
Partner - Creator
Author

Hello guys! 

Thank you all for the input; it's been a busy week, so only now was I able to try out the solutions.

Using the internal parameter "bindDateAsBinary" solved the issue, thanks @john_wang !

I'm sending a dump of the column that was causing the issue. I do not have direct access to the Oracle DB, but I asked for the data to check for inconsistencies. The SEQNO column is there just for indexing; the error was with the datetime column (sorry if this was not what you meant by dump).

I will try to get a better understanding of what was happening using the suggestions given by you @Heinvandenheuvel.

Thanks guys!

 

 

 

guilherme-matte
Partner - Creator
Author

Hey guys, I'm also sending both logs (Verbose), with and without the internal parameter attached.

[screenshot: guilhermematte_0-1681188071051.png]

I've reduced the commit rate to 100, and the number of rows replicated to the target went from 850,000 to 848,100.

Kind Regards,

 

Heinvandenheuvel
Specialist III

The log is useful - it shows "Orig Db Type 12", which in turn tells us, from https://docs.oracle.com/database/121/SQLRF/sql_elements001.htm#SQLRF30020:

"Valid date range from January 1, 4712 BC, to December 31, 9999 AD. The default format is determined explicitly by the NLS_DATE_FORMAT parameter or implicitly by the NLS_TERRITORY parameter. The size is fixed at 7 bytes. This data type contains the datetime fields YEAR, MONTH, DAY, HOUR, MINUTE, and SECOND. It does not have fractional seconds or a time zone."

The attached 'dump' is less than useful.

- Is that SEQNO the PK? The dump is NOT in order: row 5 jumps from 7100048395 to 7100024261 (and there are many more).

- You did great 'drilling down' to 100 rows for the trouble zone (848,100 - 848,200), so just show us those 100 rows, not a million! Our coffee is getting cold. Also, check the MAX PK on the target after the failed load; that came down too, right? Next, select by PK range from that max to max + 100; do not use the ordinal number unless it is a subset of the sorted source.

- That date format is the HORRIBLE Oracle default format; please use YYYY-MM-DD. What is the row "7110008725,27/OCT/31" - 1931 or 2031? These feel like event (birth?) dates, so that's probably wrong, as is "7110685353,17/MAR/24".
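The century ambiguity above comes from Oracle's documented RR format model for two-digit years. A minimal sketch of that rule, assuming the behaviour described in Oracle's datetime format docs (the function name and the pinned "today" are illustrative only):

```python
from datetime import date

def rr_year(two_digit_year: int, today: date = date(2023, 4, 9)) -> int:
    """Resolve a two-digit year the way Oracle's RR format model does (sketch).

    Documented rule:
      - yy 00-49, current yy 00-49 -> current century
      - yy 00-49, current yy 50-99 -> next century
      - yy 50-99, current yy 00-49 -> previous century
      - yy 50-99, current yy 50-99 -> current century
    """
    cc, cy = divmod(today.year, 100)
    if two_digit_year < 50:
        century = cc if cy < 50 else cc + 1
    else:
        century = cc if cy >= 50 else cc - 1
    return century * 100 + two_digit_year

# Seen from 2023, '27/OCT/31' resolves to 2031, not 1931 -
# plausible for an appointment date, wrong for a birth date.
print(rr_year(31))  # 2031
print(rr_year(99))  # 1999
```

Which is exactly why an unambiguous YYYY-MM-DD format is the safer choice when eyeballing data like this.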

- With DUMP, I meant the Oracle DUMP function. Here is an example

 

SQL> column dump format a40;
SQL> select first_time, to_char(first_time, 'YYYY-MM-DD') "yyyy-mm-dd", dump(first_time) "dump" from v$log where rownum < 2;

FIRST_TIM yyyy-mm-dd dump
--------- ---------- ----------------------------------------
09-APR-23 2023-04-09 Typ=13 Len=8: 231,7,4,9,18,0,16,0

 

Note that in this example the "Len=" value is 8, not 7. Type 13 is not part of the published 3GL interfaces for Oracle and is used for date calculations, mainly within PL/SQL operations.
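For anyone staring at such a DUMP, the byte list can be decoded by hand. A sketch, assuming the commonly documented encodings (Typ=12 DATE: excess-100 century and year, then month, day, and excess-1 hour/minute/second; Typ=13: little-endian year, then unbiased fields) - the function name is made up for this example:

```python
from datetime import datetime
import re

def decode_oracle_date_dump(dump_text: str) -> datetime:
    """Decode the byte list printed by Oracle's DUMP() for a date value (sketch)."""
    m = re.match(r"Typ=(\d+) Len=\d+: ([\d,]+)", dump_text)
    typ, raw = int(m.group(1)), [int(b) for b in m.group(2).split(",")]
    if typ == 12:
        # DATE, 7 bytes: century+100, year-in-century+100, month, day,
        # hour+1, minute+1, second+1.
        c, y, mo, d, h, mi, s = raw
        return datetime((c - 100) * 100 + (y - 100), mo, d, h - 1, mi - 1, s - 1)
    if typ == 13:
        # Internal 8-byte form: year as two little-endian bytes, then month,
        # day, hour, minute, second, and a trailing byte.
        ylo, yhi, mo, d, h, mi, s, _ = raw
        return datetime(ylo + yhi * 256, mo, d, h, mi, s)
    raise ValueError(f"unhandled Oracle dump type {typ}")

# The v$log example above decodes to 2023-04-09 18:00:16:
print(decode_oracle_date_dump("Typ=13 Len=8: 231,7,4,9,18,0,16,0"))
```

Decoding the suspect column's DUMP output this way would show immediately whether any row carries bytes outside the valid ranges.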

- You may just want to save as .TXT instead of .CSV, as too many spreadsheet tools, notably Excel, try to 'help' by distorting the data in the process.

Cheers,

Hein.

 

NakulanR
Partner - Contributor III

Hi Qlik Support,

 

We are seeing the same ORA-01801 error in Replicate. Before implementing the bindDateAsBinary internal parameter, could you provide a summary of what the parameter achieves? And does enabling it have any effect on the data flowing to the target endpoint? e.g. would timestamp values lose precision?

Unfortunately, we are unable to recreate the issue in our environment and would like to know what the parameter does before we recommend it to the customer. Any insight is greatly appreciated.

 

Regards,

Nak

Heinvandenheuvel
Specialist III

Basically, that internal parameter tells the Oracle source to perform the date conversion internally, versus Replicate taking text and interpreting it, sometimes providing too small a buffer for that text.

There must be a reason the buffer wasn't big enough. I suspect a funky time zone, but the OP never drilled down to the root cause. Maybe you can, on the Oracle side, by selecting the suspect date column with your NLS settings and perhaps adding a DUMP for the gory details.

Personally, 'date as binary' feels like the desirable method for dates, always.

Hein.