oi_connector and apm_bridge setup and best practices
Article ID: 412443

Updated On:

Products

  • CA Unified Infrastructure Management On-Premise (Nimsoft / UIM)
  • CA Unified Infrastructure Management SaaS (Nimsoft / UIM)
  • DX Unified Infrastructure Management (Nimsoft / UIM)

Issue/Introduction

This article includes guidance on setting up the oi_connector and apm_bridge for integrating DX UIM into DXOI/DXO2.

Environment

  • DX UIM 23.4.*
  • DXOI / DXO2

The guidance below reflects best practices for the latest versions as of February 2026.

Cause

  • Setup and configuration guidance

Resolution


oi_connector deployment and configuration


  1. Deployment

    The oi_connector must be on a robot connected to/under the Primary hub but ideally not on the Primary hub robot itself. It can also be installed to a secondary hub connected directly to the Primary hub.

    The oi_connector and apm_bridge should be co-located on the same robot.

    If both probes are already running on the Primary hub, establish a dedicated robot for the oi_connector and apm_bridge, then deactivate the oi_connector instance on the Primary hub.

     

Steps required to install the oi_connector on a robot under the Primary hub

For an oi_connector instance deployed on a remote robot machine connected to the primary hub, the following keys must be set via Raw configure:

    • data_engine_address
    • primary_hub_address
    • connect_to_primary_hub

The values of these parameters can be set from Infrastructure Manager -> Setup -> Raw Configure. Deactivate the probe if it is running, set the parameter values, and then activate the probe again.

For example:

data_engine_address = /<domain_name>/<primary_hub_name>/<primary_robotname>/data_engine

primary_hub_address = /<domain_name>/<primary_hub_name>/<primary_robotname>/hub

connect_to_primary_hub = true

     2. Configuration

         Add/adjust the following recommended key and value:

skip_second_pass_ci_fetch = true


The keys listed below should already be present and set in the oi_connector but it's good to double-check and adjust their values as well.

get_ci_details_alive_time_days = 30   (may need tuning; test lower values, e.g., 10)

qos_bulk_size = 5000

qos_payload_bulk_size = 5000

get_ci_details_by_met_id_list = true

task_count = 1000

bulk_size = 2000

db_conn_max_pool_size = 15   (default is 10)

       The following oi_connector parameter should be set to false; in environments where processing is keeping up well, it can be set to true:

         enable_alive_time_batches = false

       Set loglevel = 1 or 2
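Taken together, the recommended keys above would appear in the oi_connector configuration's <setup> section roughly as follows. This is a sketch for orientation only; the values shown are the recommendations from this article and should be tuned per environment:

```
<setup>
   skip_second_pass_ci_fetch = true
   get_ci_details_alive_time_days = 30
   qos_bulk_size = 5000
   qos_payload_bulk_size = 5000
   get_ci_details_by_met_id_list = true
   task_count = 1000
   bulk_size = 2000
   db_conn_max_pool_size = 15
   enable_alive_time_batches = false
   loglevel = 1
</setup>
```

Remember to deactivate the probe before applying Raw Configure changes and activate it again afterwards.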



Verify healthy queue processing on the hub:

  • The Hub Status -> Queued column should remain low when refreshed and should not continue to build.
  • The queue should remain green, not turn yellow.
  • The Sent column should show QOS messages being sent continuously.

      3. Check the log for OutOfMemory errors

Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded 
  • If any Java memory errors exist, add 2 GB or more to both the min and max heap settings, then deactivate the probes.
  • Java best practice: keep the min and max within 2 GB of each other.
  • Wait for the port and PID to drop.
  • Activate the probes.

      4. CI cache refresh interval setting

 Currently, the CI cache refresh runs every 30 minutes. In environments with large metadata volume (1.2M+ entries and growing), this frequent
 refresh gradually consumes:

    • Database connections
    • I/O resources
    • CPU cycles

  Over time, this can lead to resource exhaustion and increase the likelihood of thread starvation.

  As the data volume continues to grow, the refresh interval should be adjusted accordingly.

  We recommend updating: ci_cache_update_thread_interval_minutes

    • Change from 30 to 300 (5 hours)
    • Ideally to 1440 (24 hours) for large stable environments

  Refreshing once every 24 hours significantly reduces DB pressure compared to every 30 minutes.

  Note that there is a file (oi_connector setup parameters) attached to this article which includes a partial extract to serve as a short example
  reference for the <setup> section parameter values used in a very large environment which processes millions of messages.
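As a sketch, the refresh-interval change above would appear in the oi_connector configuration's <setup> section as follows (1440 shown for a large, stable environment; use 300 as an intermediate step if preferred):

```
<setup>
   ci_cache_update_thread_interval_minutes = 1440
</setup>
```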


apm_bridge deployment and configuration
  • When the apm_bridge is deployed on the same robot as the oi_connector, it needs the following parameters updated with the relevant information:

primary_hub_address = /<domain_name>/<primary_hub_name>/<primary_robotname>/hub
connect_to_primary_hub = true



  • In apm_bridge 2.03 and earlier the inventory_alive_time_days parameter is only configurable in the range of 1 to 7.

    The inventory related to certain probes (such as url_response and net_connect) is not published because these probes do not periodically update the alive_time in the cm_computer_system table. As a result, the inventory count in DX UIM may not match the inventory count in DXO2, since those devices would be missing.

    To avoid this, deploy the apm_bridge patch included in this KB: Not all devices / inventory are published from DX UIM to DXO2 

After following the steps above, if the issue persists, attach the oi_connector.log, apm_bridge.log, and data_engine.log to your support case.


Compatibility of Deployment on Secondary Hubs and Robots Connected to the Primary Hub

  • Configure the security certificate on the secondary hub (hub->robot) or robot under the hub

    • The certificate (certificate.pem) is normally stored on the Primary Hub in the ../nimsoft/security folder.
       
  • Copy the certificate FROM the Primary hub TO the robot where the probe is deployed, and add the certificate path to the target robot's controller configuration file, e.g., .../opt/nimsoft/robot/robot.cfg

    For reference, you may take a look at the Primary Hub's controller configuration file.

    Linux robot example:

       cryptkey = /opt/nimsoft/security/certificate.pem
     

    Windows robot example: set cryptkey to the equivalent certificate path under the robot's Nimsoft installation directory.

Upgrade oi_connector and apm_bridge

To upgrade oi_connector and apm_bridge, follow the Best Practices to upgrade oi_connector and apm_bridge

Additional Information


Network
When the oi_connector and apm_bridge are offloaded/deployed on a separate robot under the Primary hub, best practice is to place that robot on the same subnet as the Primary hub.

Java memory settings
Set the apm_bridge Java min/max memory settings under the Startup -> options section to at least 2 GB and 4 GB, respectively. In larger environments with millions of QOS messages being processed through the queue, the Java min/max may be set as high as 14 GB/16 GB, respectively, but it's best to track memory usage and see how much memory the probe actually uses versus what it appears to need. Use Windows Task Manager and/or the top command (Linux/Unix).
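As an illustrative sketch only: the heap sizes set via Startup -> options correspond to standard JVM arguments in the probe configuration. The exact section layout and key name can vary by probe version, so treat this fragment as an assumption to be verified against your probe's actual configuration file:

```
<startup>
   options = -Xms2g -Xmx4g
</startup>
```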

Robot system sizing (oi_connector and apm_bridge)
  • Processor: 3 GHz or higher recommended
  • 12 cores
  • 16 GB RAM (consider increasing the max heap by 4 GB to use the reconciliation feature)
  • SSD: 200 GB or more

oi_connector queue processing
Also note that axaqueuegateway.uimQos queue processing can benefit from additional virtual processors on the system in cases where the probe has difficulty with QOS event processing and/or throws errors such as:

[QOS_PROCESSOR_THREAD-337, oi_connector] Error while posting the qos data net.sf.ehcache.CacheException: Faulting from repository failed


Related KBs/Documents: 

    Attachments

    oi_connector setup parameters for reference