Hello Team
Recently, after adding a second logstream task, we have been receiving the following error:
*Task 'LogStream_Frag_Staging' was suspended due to 6 successive unexpected failures*
Both the staging and replication tasks started to present this same issue, failing critically after 6 restart attempts.
Nothing is being written to the logs, even on verbose; the only notification is this one, on the side of the monitor screen:
On the logs, this is the last recorded section:
Nothing else is given to help us find the source of the issue. We are replicating from on-premises SQL Server to Azure SQL Database.
The Replicate version is May 2023 (2023.5.0.322).
We have about 15 tasks running, two of which are Log Stream staging tasks that stream to two different databases.
Any ideas on how to get more info, or how to solve it? Could this be a memory-related issue? I noticed this started once I did a new full load of some tables. Not exactly sure how this could be related, though...
Kind Regards!
Hello @guilherme-matte
Kindly check how much RAM is occupied; I would suggest increasing the server's RAM.
At Server Level --> Resource Control:
High Memory Utilization Threshold
Report when system memory utilization reaches (%): 80 [Default]; you can increase up to 90.
Critical Memory Utilization Threshold
Report and start stopping tasks when system memory utilization reaches (%): 90 [Default]; you can increase up to 95.
Again, if system memory reaches the threshold, tasks start to crash, so it is better to consider increasing the system memory (RAM).
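The Resource Control behavior described above can be sketched as a simple classifier. This is an illustrative sketch using the default percentages mentioned, not Replicate's actual implementation:

```python
# Illustrative sketch of the server-level Resource Control thresholds
# described above (defaults 80% / 90%); not Replicate's internal code.

def memory_alert(utilization_pct: float,
                 high_pct: float = 80.0,       # "High" default; can be raised to 90
                 critical_pct: float = 90.0):  # "Critical" default; can be raised to 95
    """Classify system memory utilization the way the two thresholds do."""
    if utilization_pct >= critical_pct:
        return "critical: report and start stopping tasks"
    if utilization_pct >= high_pct:
        return "high: report only"
    return "ok"

print(memory_alert(85))  # high: report only
print(memory_alert(92))  # critical: report and start stopping tasks
```

Raising the thresholds only delays the alerts; if the server genuinely runs out of memory, tasks will still crash, which is why adding RAM is the real fix.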
Regards,
Suresh
Hello @guilherme-matte
It looks like the task is crashing.
Please check whether you have enough disk space available; if not, kindly consider adding more disk capacity.
Also, setting the SOURCE_CAPTURE and TARGET_APPLY logging components to VERBOSE will give more information on why the task is crashing.
If the server is running Windows, kindly check the Windows Event Logs for information at the time of the crash.
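As a quick way to rule out the disk-space angle before restarting the task, something like the following can be run against the Replicate data directory. The path is an assumption; substitute your actual "data" folder:

```python
# Quick free-disk-space check; the path passed in is an assumption --
# point it at your Replicate "data" directory (or the log drive).
import shutil

def free_gb(path: str) -> float:
    """Free space on the volume holding `path`, in GiB."""
    usage = shutil.disk_usage(path)
    return usage.free / (1024 ** 3)

print(f"Free space: {free_gb('.'):.1f} GiB")
```

If this reports plenty of free space, the crash is more likely memory-related than disk-related.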
Regards,
Suresh
Hello @sureshkumar !
I've started the Log Stream again, and now other tasks are starting to fail.
This time I did get a log, though; the following errors popped up:
and this one:
Stream component 'st_0_AZURE- db data_lake' terminated
Stream component failed at subtask 0, component st_0_AZURE- db data_lake
Error executing command
Failed to send table 'GENTRACK.ACCOUNTS' (1) events to changes table
Failed to get statement, func: insert row handler
Failed to prepare statement 'INSERT INTO [attrep_changes247D481DC488200A]([seq],[col1],[col2],[col3],[col4],[col5],[col6],[col7],[col8],[col9],[col10],[col11],[col12],[col13],[col14],[col15],[col16],[col17],[col18],[col19],[col20],[col21],[col22],[col23],[col24],[col25],[col26],[col27],[col28],[col29],[col30],[col31],[col32],[col33],[col34],[col35],[col36],[col37],[col38],[col39],[col40],[col41],[col42],[col43],[col44],[col45],[col46],[col47],[col48],[col49],[col50],[col51],[col52],[col53],[col54],[col55],[col56],[col57],[col58],[col59],[col60],[col61],[col62],[col63],[col64],[col65],[col66],[col67],[col68],[col69],[col70],[col71],[col72],[col73],[col74],[col75],[col76],[col77],[col78],[col79],[col80],[col81],[col82],[col83],[col84],[col85],[col86],[col87],[seg1],[seg2]) values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)'
Failed to allocate array for parameter 'Param#074' in statement 'INSERT INTO [attrep_changes247D481DC488200A]([seq],[col1],[col2],[col3],[col4],[col5],[col6],[col7],[col8],[col9],[col10],[col11],[col12],[col13],[col14],[col15],[col16],[col17],[col18],[col19],[col20],[col21],[col22],[col23],[col24],[col25],[col26],[col27],[col28],[col29],[col30],[col31],[col32],[col33],[col34],[col35],[col36],[col37],[col38],[col39],[col40],[col41],[col42],[col43],[col44],[col45],[col46],[col47],[col48],[col49],[col50],[col51],[col52],[col53],[col54],[col55],[col56],[col57],[col58],[col59],[col60],[col61],[col62],[col63],[col64],[col65],[col66],[col67],[col68],[col69],[col70],[col71],[col72],[col73],[col74],[col75],[col76],[col77],[col78],[col79],[col80],[col81],[col82],[col83],[col84],[col85],[col86],[col87],[seg1],[seg2]) values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)' (size: 82000 bytes)
Not enough memory resources are available to process this command. (apr status = 720008)
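To see why this error points at RAM rather than disk: the failing INSERT binds 90 parameters (seq, col1..col87, seg1, seg2), and each parameter is bound as an array sized to the apply batch. The 82,000-byte allocation that failed is for a single parameter's array; the whole statement needs that order of allocation 90 times over. A rough back-of-the-envelope sketch, with the batch size and average column size as illustrative assumptions:

```python
# Rough estimate of memory for a bulk-apply statement's parameter arrays.
# batch_size and avg_column_size are illustrative assumptions, not
# Replicate internals; 90 matches the parameter count in the failing INSERT.

def param_array_bytes(batch_size: int, column_size: int) -> int:
    """Bytes to bind one parameter as an array of batch_size values."""
    return batch_size * column_size

def statement_bytes(num_params: int, batch_size: int, avg_column_size: int) -> int:
    """Approximate total allocation for all parameter arrays of one statement."""
    return num_params * param_array_bytes(batch_size, avg_column_size)

# e.g. a batch of 1000 rows at ~82 bytes/column gives the 82,000-byte
# single-parameter array from the error, and per statement:
total = statement_bytes(num_params=90, batch_size=1000, avg_column_size=82)
print(f"~{total / 1024 / 1024:.1f} MiB per statement")  # ~7.0 MiB with these assumptions
```

Multiply that by every table being applied concurrently across 15 tasks and it is easy to exhaust RAM on a modest server, which matches the "Not enough memory resources" failure.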
In this case, does this concern the Replicate server's available RAM, or still disk space? Apparently, disk space is not an issue at the moment, since there is plenty available.
EDIT:
Also, I have now started receiving alerts regarding the system memory utilization threshold... In these cases, what are the recommended corrective steps?
Cheers!
Hello team,
To add to the TSE comments: such environment-related issues (disk space, memory consumption) require analysis of the entire setup, as each task consumes memory based on its task settings.
We would need information such as the memory allocated to the server, the memory used by the current task, and what kind of data you are pushing to which endpoint.
The shared error indicates that allocated memory or space is the issue. However, such issues are handled by PS (Professional Services);
you can either reach out to them via your CSE/AM, or raise a support case and ask us to initiate a collaboration to engage PS.
Regards,
Sushil Kumar