This article provides solutions for the following scenarios related to oi_connector queues:
The axagateway.uimQos and axagateway.alarms queues are yellow, stuck, or not draining, or the queues exhibit disconnections and instability.
The following errors may be seen in oi_connector.log:
Feb 24 12:46:47:600 [QUEUE_MONITOR_THREAD, oi_connector] QOS Subscription is either null or not Ok so Reconnecting to hub queue...
Feb 24 12:46:47:601 [QUEUE_MONITOR_THREAD, oi_connector] QOS subscription is unavailable on Primary Hub
Feb 24 12:46:47:602 [QUEUE_MONITOR_THREAD, oi_connector] Primary Hub is available: /<domain>/<hub>/<robot>/hub
Feb 24 12:46:47:602 [QUEUE_MONITOR_THREAD, oi_connector] subscribe to queue hub address is /<domain>/<hub>/<robot>/hub
Feb 24 12:46:47:602 [QUEUE_MONITOR_THREAD, oi_connector] inside subscribe to queue hub address is /<domain>/<hub>/<robot>/hub
Feb 24 12:46:48:602 [QUEUE_MONITOR_THREAD, oi_connector] Queue is not subscribe, Nass is not available : axagateway.uimQos
Feb 24 12:46:48:602 [QUEUE_MONITOR_THREAD, oi_connector] New subscriber object constructed for : Queue[axagateway.uimQos].
Feb 24 12:46:48:657 [attach_clientsession, oi_connector] Retaining Data in the QoS Queue, as effectiveTaskCount reaches to max limit.
Additional information on the scenario:
Possible Causes:
Possible Resolutions:
ci_cache_update_thread_interval_minutes: change the value from 30 to 1440, so that the CI cache update runs once per day (1440 minutes) instead of every 30 minutes.
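A minimal sketch of how this might look in the probe's configuration file (the key name comes from this article; the file name follows the usual <probe>.cfg convention, and the <setup> section placement is an assumption — in practice, change the value through Raw Configure rather than by editing the file by hand):

   # oi_connector.cfg (excerpt; section placement is an assumption)
   <setup>
      ci_cache_update_thread_interval_minutes = 1440
   </setup>

Restart the probe after the change so the new interval takes effect.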
System resources:
Also note that axagateway.uimQos queue processing can benefit from the addition of more virtual processors on the system in cases where the probe is having difficulty with QoS event processing and/or is throwing errors such as:
[QOS_PROCESSOR_THREAD-337, oi_connector] Error while posting the qos data net.sf.ehcache.CacheException: Faulting from repository failed
Additional questions and answers about queue instability:
High row counts in the cm_configuration_item_definition and cm_configuration_item tables indicate a large-scale/high-end environment.
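To gauge this, you can check the row counts of those tables in the UIM backend database. A minimal sketch, assuming direct SQL access to the UIM database (the table names come from this article; what counts as "large" is environment-specific and not defined here):

   -- Row counts for the CI tables in the UIM database
   SELECT COUNT(*) AS ci_count FROM cm_configuration_item;
   SELECT COUNT(*) AS ci_definition_count FROM cm_configuration_item_definition;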