RobertBARRAS
Contributor II

Qlik Replicate task folder: sorter

I am using Qlik Replicate.

I have tasks that do CDC from a DB2 database to Kafka, with one topic per table.

The task uses a Global Transformation to rename the topic from TABLENAME to SCHEMA.TABLENAME.

On the server I found a folder named sorter under the task folder, which contains very large files (30 GB).

Those huge files are filling up my disk space, and the Qlik interface has become blocked.

What are the files under the sorter folder?

Can I delete them?

 

4 Replies
Maria_Halley
Support

@RobertBARRAS 

I will move this to the Qlik Replicate board instead, so you reach the right audience for this post.

SwathiPulagam
Support

Hi @RobertBARRAS ,

 

The sorter stores the transactions that arrive from the source database until they are committed, and then sends them to the target in the correct order (i.e. by commit time).
If you delete the files in the sorter folder, you will lose that data.
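
Conceptually, the idea is similar to the sketch below (an illustration only, not Qlik Replicate's actual implementation): buffer each transaction's changes until its commit arrives, then forward whole transactions in commit order.

```python
# Illustration only -- not Qlik Replicate's internals. A sorter-like component
# buffers each transaction's changes until the commit arrives, then delivers
# whole transactions to the target in commit order.
from collections import defaultdict


class SorterSketch:
    def __init__(self):
        self.open_tx = defaultdict(list)  # tx_id -> changes not yet committed
        self.committed = []               # (commit_time, changes) ready to deliver

    def on_change(self, tx_id, change):
        # Changes from different transactions arrive interleaved; hold them.
        self.open_tx[tx_id].append(change)

    def on_commit(self, tx_id, commit_time):
        # Only committed work is eligible for delivery to the target.
        self.committed.append((commit_time, self.open_tx.pop(tx_id, [])))

    def drain(self):
        # Deliver in commit-time order, which is what the target must see.
        for commit_time, changes in sorted(self.committed, key=lambda item: item[0]):
            yield commit_time, changes
        self.committed.clear()
```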

For more information on the sorter, please refer to the user guide link below:

https://help.qlik.com/en-US/replicate/November2021/Content/Replicate/Main/Replicate%20Loggers/Logger...

 

Thanks,

Swathi

SwathiPulagam
Support

Hi @RobertBARRAS ,

 

Please refer to the community article below on how to prevent Qlik Replicate services from going down when disk space utilization is high:

 

https://community.qlik.com/t5/Knowledge/How-to-prevent-Qlik-Replicate-services-to-go-down-when-the-d...

 

Thanks,

Swathi

Heinvandenheuvel
Specialist III

As @SwathiPulagam indicates, a large collection of large sorter files could be the holding area for a large source transaction. With 30 GB and, say, 1 KB per transaction, that would suggest 30 million records. Alternatively, the output side could be too slow, or completely 'stuck' trying to reconnect for a long time. It's easy to figure out which situation applies, or whether there was simply a failure to clean up, by going to the GUI - Task - Monitor - Change Processing and clicking on 'Incoming Changes'. You'll get 4 'elevator' bars indicating whether there is data believed to be staged to disk on the incoming side, the outgoing side, or both.

If a low number of changes (less than a million) is indicated, then the task appears to have left behind some files that can probably be safely deleted. I'd look at the timestamps on those files. Are they from well before the last (re)start? A week old or more? You can probably safely delete those old ones (and only the old ones), even 'on the fly' if the task is currently happy; a small listing sketch follows below.

If there are lots of changes in flight, well, then you need to examine and explain that using what you know about your source, target and expected changes, and by looking for clues in the task log.

It may be best to stop and resume the task to see if that cleans things up.

If not, then stop the task, clean up yourself, and start by timestamp; starting by timestamp does NOT expect or use anything from the sorter area, as it is a 'fresh' start.
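
For the timestamp check mentioned above, a rough listing sketch could look like the following. It assumes a Linux install; the sorter path and cutoff are placeholders you should adjust, and it only lists candidates rather than deleting anything.

```python
# Sketch only: list sorter files older than a cutoff so they can be reviewed
# before anything is deleted. The path is an assumption -- adjust it to your
# data directory and task name; do not delete files the task is still writing.
import os
import time

SORTER_DIR = "/opt/attunity/replicate/data/tasks/MY_TASK/sorter"  # hypothetical path
CUTOFF_DAYS = 7  # "a week old or more"

cutoff = time.time() - CUTOFF_DAYS * 86400
for name in sorted(os.listdir(SORTER_DIR)):
    path = os.path.join(SORTER_DIR, name)
    mtime = os.path.getmtime(path)
    if mtime < cutoff:
        size_gb = os.path.getsize(path) / 1024**3
        print(f"{time.ctime(mtime)}  {size_gb:6.2f} GB  {path}")
```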

Hein

Capture_change_processing_disk.JPG