qos_processor queue files (*.sds) are consuming more disk space



Article ID: 236648


Updated On:


DX Unified Infrastructure Management (Nimsoft / UIM), CA Unified Infrastructure Management SaaS (Nimsoft / UIM), Unified Infrastructure Management for Mainframe


At one hub server, in the directory D:\Nimsoft\hub\q\qos_processor_qos_message, many .sds files are being created automatically and are consuming all the disk space.

What are the steps to delete these files and stop them from being created automatically?


Release : 20.3


qos_processor v20.10


- qos_processor ATTACH queues defined on a remote hub


First, make sure you have enough physical memory to spare and increase the startup options of the probe.

Then set the qos_processor key:

   message-receiver-bulk-size = 60      # 60 is the default value
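For reference, in the probe's Raw Configure view this is a plain key/value entry in qos_processor.cfg. The section name below is an assumption (probe keys commonly live under a setup section); check your own qos_processor.cfg for the actual location:

```
<setup>
   message-receiver-bulk-size = 60
</setup>
```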

Then cold start the probe (Deactivate-Activate).

As a fallback, if none of the above works as expected, you might have a corrupted message in the qos_processor's .sds file. The .sds file is just an on-disk copy of the queued messages, so if need be it can be deleted after stopping the robot. Then activate the robot again.

To empty the qos_processor queue, open the hub probe GUI and click Status.

Then right-click on the qos_processor message queue and choose Empty. You may have to do this a few times to clear it and get it processing again.

In this case, upon further examination of the issue of *.sds files building up on a hub from qos_processor...

As it turned out, qos_processor ATTACH queues were defined on another hub (NOT the Primary), but the qos_processor probe was not installed there! They may have been leftover artifacts from a previous installation.

Of course, the only places the qos_processor probe should be installed are the Primary hub and an HA hub, and on the HA hub the probe remains deactivated until failover to the HA hub occurs.

So local *.sds files were building up over time and using up all the disk space, because messages were being spooled by the qos_processor ATTACH queue on that remote hub even though no local qos_processor probe was deployed there (and none should be anyway).

In the meantime, the qos_processor_qos_message queue on the Primary would process an increasing count of messages, and then the message count would drop unexpectedly to a lower number before building up again.

Once I deleted the qos_processor ATTACH queues on the remote hub, the issue was resolved.
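Queue definitions like these live in the remote hub's hub.cfg. As a quick way to audit for leftover ATTACH queues, the sketch below scans hub.cfg-style text for queue sub-sections whose type is attach. The nested <queues> layout it assumes is the common Nimsoft .cfg structure, but verify against your own file (or simply inspect the queues in the hub GUI):

```python
import re

def find_attach_queues(hub_cfg_text):
    """Return names of ATTACH-type queues defined in hub.cfg-style text.

    Assumes the common layout: a <queues> block containing one <name>
    sub-block per queue, each with a 'type = attach|post|get' key. The
    layout can vary by hub version, so treat this as a starting point.
    """
    attach = []
    # Match each <name> ... </name> block (re.S lets '.' span newlines).
    for m in re.finditer(r"<(\w+)>(.*?)</\1>", hub_cfg_text, re.S):
        name, body = m.group(1), m.group(2)
        if name == "queues":
            # Recurse into the queues container to reach the per-queue blocks.
            attach.extend(find_attach_queues(body))
        elif re.search(r"^\s*type\s*=\s*attach\s*$", body, re.M | re.I):
            attach.append(name)
    return attach
```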