
Qlik Replicate

Discussion board for collaboration on Qlik Replicate.

fj40wdh
Contributor II

Temporary Change tables

We are replicating from Oracle to SQL Server, using Replicate version November 2021 (2021.11.0.349).

We are receiving this error:

00009396: 2023-01-25T14:58:27 [TARGET_APPLY ]I: Net Changes table name for the task is 'attrep_changes02CD0BDAB15C8B2E' (bulk_apply.c:3720)
00009396: 2023-01-25T14:58:27 [TARGET_APPLY ]I: Error in bulk, bulk state: bulk confirmed record id - '0', bulk last record id - '0', confirmed record id - '86857', sorter confirmed record id - '86857' (bulk_apply.c:2468)
00009396: 2023-01-25T14:58:27 [INFRASTRUCTURE ]E: Cannot allocate memory (apr status = 12) [1000104] (at_memory.c:428)
00009396: 2023-01-25T14:58:27 [TARGET_APPLY ]E: Failed to allocate array for parameter 'Param#010' in statement 'INSERT INTO [dbo].[attrep_changes02CD0BDAB15C8B2E]([seq],[col1],[col2],[col3],[col4],[col5],[col6],[col7],[col8],[col9],[col10],[col11],[col12],[col13],[col14],[col15],[col16],[col17],[col18],[col19],[col20],[col21],[col22],[col23],[col24],[col25],[col26],[col27],[col28],[col29],[col30],[col31],[col32],[col33],[col34],[col35],[col36],[col37],[col38],[col39],[col40],[col41],[col42],[col43],[col44],[col45],[col46],[col47],[col48],[col49],[col50],[col51],[col52],[col53],[col54],[col55],[col56],[col57],[col58],[col59],[col60],[col61],[col62],[col63],[col64],[col65],[col66],[col67],[col68],[col69],[col70],[col71],[col72],[col73],[col74],[col75],[col76],[col77],[col78],[col79],[col80],[col81],[col82],[col83],[col84],[col85],[col86],[col87],[col88],[col89],[col90],[col91],[col92],[col93],[col94],[col95],[col96],[col97],[col98],[col99],[col100],[col101],[col102],[col103],[col104],[col105],[col106],[col107],[col108],[col109],[col110],[col111],[col112],[col113],[col114],[col115],[col116],[col117],[col118],[col119],[col120],[col121],[col122],[col123],[col124],[col125],[col126],[col127],[col128],[col129],[col130],[col131],[col132],[col133],[col134],[col135],[col136],[col137],[col138],[col139],[col140],[col141],[col142],[col143],[col144],[col145],[col146],[col147],[col148],[col149],[col150],[col151],[col152],[col153],[col154],[col155],[col156],[col157],[col158],[col159],[col160],[col161],[col162],[col163],[col164],[col165],[col166],[col167],[col168],[col169],[col170],[col171],[col172],[col173],[col174],[col175],[col176],[seg1],[seg2],[seg3],[seg4]) values 
(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)' (size: 2048002000 bytes) [1020100] (ar_odbc_stmt.c:4399)

It indicates a memory problem. What could be causing this? 

If it is a data problem, how do I find out the data key columns so I can investigate?

 

3 Replies
SwathiPulagam
Support

Hi @fj40wdh ,

 

What is the memory utilization on the Replicate server and the SQL Server at the time of the issue?

You might need to increase the memory depending on the utilization.

 

Thanks,

Swathi

fj40wdh
Contributor II
Author

The SQL database/server looks OK.

It appears to be a problem with the app server memory. Going to have the memory extended and see if that helps.

Thanks.

Heinvandenheuvel
Specialist

>>>  00009396: 2023-01-25T14:58:27 [INFRASTRUCTURE ]E: Cannot allocate memory (apr status = 12) [1000104] (at_memory.c:428)

This is always about VIRTUAL memory, memory pools and such.

@fj40wdh >> Going to have the memory extended.

That's 99.9% sure to be a waste of time and effort. It's not about the amount of physical memory in the server.

[EDIT: I only just noticed the '2,048,002,000 bytes' in the error message. That's over 2 GIGABYTES. How large are the LOB settings? Something is 'confused': maybe the task designer, maybe the code.]
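That number is suspiciously round. Here's a back-of-envelope sketch of how such an allocation can arise; the batch size and column width below are assumptions for illustration, not values read from your task:

```python
# Hypothetical arithmetic (assumed numbers, not from the actual task):
# the bulk apply binds, per parameter, an array of batch_size slots, each
# wide enough to hold the column's maximum size. A ~2 MB column bound for
# a 1000-row batch needs ~2 GB for that one parameter alone.
batch_rows = 1000                # assumed batch size for batched apply
column_max_bytes = 2_048_002     # assumed max byte size of 'Param#010'

array_bytes = batch_rows * column_max_bytes
print(array_bytes)               # 2048002000, the size in the error message
```

If the arithmetic holds, shrinking either factor (smaller apply batches, or a tighter LOB size limit) shrinks the allocation.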

Let's hear some basic troubleshooting information.

  • Did it ever work?
    • Yes? What changed? (It could simply be changes over time to a particular source table.)
    • No? Drill down to the table causing the issue, isolate it, and try again. Still an issue?
  • Does it work for a while?
    • Use REPCTL GETSNAPSHOT a few times: just after starting, while running, and as close as possible to the time when the error is expected.
      • The output from getsnapshot is a file named <task>_pools_report.txt.
      • That output may be hard to interpret. Attached is a Perl script to help focus on potential trouble zones. Note: it'll still be difficult to interpret. Give it a few minutes.
    • Is it happening when that table with 176 columns has its first changes?
      • Can you identify that table?
      • Can you isolate that table?
      • What is special about the table? How 'wide' is it in characters? Many transformations?
      • Can you, just for testing, drop a few (or half) of the extra-wide columns for that table?
      • Can you share the source definition for the table, maybe with some column name obfuscation, replacing the names with C1, C2 and so on?
  • Share a (redacted) task JSON to see if 'odd' settings were chosen, for example for "Commit rate during full load". Maybe drop the table list in favor of a line saying 'nnn included tables expunged for brevity'.
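If you want a quick way to eyeball the pools report yourself, here's a small sketch. The report's layout is an assumption on my part (it just pulls the largest integer off each line), so adapt the parsing to the actual file:

```python
# Rough filter for <task>_pools_report.txt: print the lines carrying the
# largest numbers so the biggest memory pools stand out. The file format is
# assumed, not documented here -- we only grab the largest integer per line.
import re
import sys

def biggest_pools(path, top=10):
    rows = []
    with open(path) as f:
        for line in f:
            nums = [int(n) for n in re.findall(r"\d+", line)]
            if nums:
                rows.append((max(nums), line.rstrip()))
    # Largest byte counts first
    return sorted(rows, reverse=True)[:top]

if __name__ == "__main__" and len(sys.argv) > 1:
    for size, line in biggest_pools(sys.argv[1]):
        print(f"{size:>15,}  {line}")
```

Run it against the snapshot file and compare runs over time: a pool whose number keeps climbing between snapshots is the one to chase.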

hth,

Hein