desmondchew
Creator III

bdpattunity server experiencing high memory

We are running the Attunity server on Windows Server 2019 with 32 GB of RAM. We noticed in Task Manager that the Attunity Replicate processes have chewed up most of the memory. Memory utilization has been holding at a peak of 99% for almost 10 hours.

There is a task where we attempted a full table reload + CDC. It got stuck in the full table reload: the progress bar sits at 0%, not moving at all.

Is there a way to limit the memory used by Attunity Replicate? I suppose the OS needs some breathing room to work with.

Thank you.

Des

2 Solutions

Accepted Solutions
john_wang
Support

Hello Desmond, @desmondchew,

First of all, I'd suggest you remove the log file immediately, as it contains sensitive information, e.g. host name, port number, etc. (unless it's dummy information).

Secondly, there are 12 or more tasks and they occupy 20+ GB of memory. The biggest one is 2.4 GB, which looks reasonable to me, though I'm not sure how the tasks' memory is configured. If there are more tasks (not visible in your picture), then more memory is needed, of course.

Please open a support ticket and attach the tasks' Diag Packages. The support team can help you check the task settings and suggest how to configure the memory properly.

BTW, if you need to run many tasks in parallel on a single Replicate server, you need a more powerful machine; see the User Guide's recommended hardware configuration.

Hope this helps.

Regards,
John.

 

Help users find answers! Do not forget to mark a solution that worked for you! If already marked, give it a thumbs up!

View solution in original post

Heinvandenheuvel
Specialist III

While tasks of 2 GB or more are not uncommon, some tasks are happy with 200 MB.

IMHO a 32 GB main-memory system is not a 'serious' server (I have that much on my laptop), but it could still accommodate 50+ tasks if they are not tuned too aggressively.

Many factors/settings influence memory usage by a task, but there are three clear and easily accessible knobs under the task settings: the full-load tuning maximum number of tables (default 5), the commit rate (default 10,000, which also influences CDC), and the change-processing tuning 'force apply' setting (defaulting to 500 MB). Some folks tune those way higher than needed, with a single high setting for all tasks. Furthermore, these settings, notably the CDC one, can perhaps be trimmed down on a DEV/QA box and only opened up for Prod, which may have much more memory available. I'd suggest a 'repctl exportrepository' for all tasks, then search the JSON for a quick overview. The relevant settings are:

	"task_settings":	{
		"full_load_sub_tasks":	42,
		"source_settings":	{
		},
		"target_settings":	{
			"max_transaction_size":	1234,
			...
		"common_settings":	{
			"batch_apply_memory_limit":	1234,
			...

 

BTW, if you wonder why the chosen values are '1234': that's just a trick to easily find a setting name in the JSON. Pick a 'strange' number and search for it.
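That 'export and search' step can also be scripted. Below is a minimal Python sketch: since the exact layout of an exported repository JSON can vary between Replicate versions, it walks the whole tree recursively looking for the three setting names instead of assuming a fixed path. The inline JSON fragment is dummy data shaped like the snippet above, not a real export.

```python
import json

# Setting names taken from the JSON fragment above; the surrounding layout
# of a repository export may differ between versions, so we search the
# whole tree recursively instead of hard-coding a path to each key.
MEMORY_KEYS = {"full_load_sub_tasks", "max_transaction_size",
               "batch_apply_memory_limit"}

def find_settings(node, path=""):
    """Yield (json_path, value) for every memory-related key in the tree."""
    if isinstance(node, dict):
        for key, value in node.items():
            child = f"{path}.{key}" if path else key
            if key in MEMORY_KEYS:
                yield child, value
            yield from find_settings(value, child)
    elif isinstance(node, list):
        for i, item in enumerate(node):
            yield from find_settings(item, f"{path}[{i}]")

# Dummy export fragment; in practice you would json.load() the file
# produced by 'repctl exportrepository'.
export = json.loads("""
{"tasks": [{"task": {"name": "demo"},
            "task_settings": {
              "full_load_sub_tasks": 42,
              "target_settings": {"max_transaction_size": 1234},
              "common_settings": {"batch_apply_memory_limit": 1234}}}]}
""")

for path, value in find_settings(export):
    print(f"{path} = {value}")
```

Running this against a real export gives a one-line-per-setting overview across all tasks, which makes the over-tuned ones easy to spot.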

The information you provided only shows the 'end': how the system responded. You need to focus on the beginning: what did you ask the system to do, and why were those choices made? 'Dunno' is not an acceptable reason.

Hein.

View solution in original post

3 Replies
desmondchew
Creator III
Author

Hi John,

I have already masked out the info in the log files. OK, thanks. It looks like we have too many tasks and the server might be unable to cope.

 

Thanks
Desmond

 
