Installation procedure for DX UIM in a Microsoft Windows cluster environment



Article ID: 44270




  • DX Unified Infrastructure Management (Nimsoft / UIM)
  • CA Unified Infrastructure Management On-Premise (Nimsoft / UIM)
  • CA Unified Infrastructure Management SaaS (Nimsoft / UIM)


  • Installation procedure for UIM in a Microsoft Windows cluster environment.



  • DX UIM:  8.3, 8.4, 8.51 or higher
  • Windows Cluster 2008 and 2012
  • SQL Server Cluster 2008, 2012, and 2014


UIM supports two different approaches to fail-over configuration.

  1. HA - UIM is installed on a primary hub and a secondary hub, each running the HA probe, and the HA probe manages failover between them.
  2. Microsoft Windows Cluster (MSCS) - the Windows cluster presents a virtual IP address, so none of the UIM components need to be reconfigured to point to the failover node.

This document explains the failover configuration for Microsoft Windows cluster machines using a virtual IP address.

Using a Microsoft SQL Server cluster is highly recommended for high availability of the database.


Prerequisites:

  1. You must have administrative privileges on all the cluster nodes
  2. A shared drive must be available between the nodes
  3. Reserve one dedicated IP address (the virtual IP) that is not already in use by any service or host
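
Before starting, the reserved virtual IP can be sanity-checked from a command prompt on either node. This is an optional sketch; 10.0.0.50 below is a placeholder for your own reserved address, not a value from this article.

```
:: Placeholder check for the reserved virtual IP (10.0.0.50 is an
:: example address). A timeout or "unreachable" reply suggests the
:: address is free; a normal reply means it is already in use.
ping -n 2 10.0.0.50

:: Confirm no local adapter on this node already owns the address:
ipconfig | findstr "10.0.0.50"
```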

Follow the steps listed below to install DX UIM on both of the nodes:

  1. Run the UIM installer on the active node. (Make sure that the shared drive is accessible from the active node.)
    a) Select the installation folder path as <Shared Drive>:\Nimsoft.
    b) Specify the hostname or IP address as the physical IP address of the system.
    c) Create a new database (by default CA_UIM) on Microsoft SQL Server. Note that the SQL Server user must have administrative privileges equivalent to the 'sa' user (by default, the sa username and password can be used).
    d) Make note of the domain and hub names. You will need this information when you install UIM Server on the secondary node as both cluster nodes share the same hub name.
    e) Wait for the installation to complete. Any warnings raised during installation can be ignored.
  2. Once the primary node installation is completed, reboot the server so that the secondary node becomes active and the shared drive is accessible from it.
  3. Make sure that you are able to access the shared drive on the secondary node.
  4. Start installing UIM on the secondary node. This ensures that all the registry entries and DLL/EXE files are properly referenced on the secondary node as well.
    a) Select the installation folder path as <Shared Drive>:\Nimsoft.
    b) Specify the domain name and hub name as given in the primary node.
    c) Specify the hostname or IP address as the physical IP address of the system.
    d) Make use of the database created on the primary node and provide the username and password for authentication.
    Note: If the UIM installation fails, verify that the shared disk is still accessible, open the robot.cfg file, and make sure that robotip and hubip point to the secondary node's physical IP address. Then run the installer again.
  5. Once the installation is completed on the secondary node, restart the node to make the primary node active.

 Configure the NMS Robot Watcher OR Generic Service for failover nodes:

  • On the active node, Go to "Start"->"Administrative Tools"->"Failover Cluster Manager".
  • Expand the tree and right-click "Services and Applications", click "Configure a Service or Application", and select "Generic Service".
  • In the "Select Service" dialog, please select "Nimsoft Robot Watcher" and click next.
  • In the Client Access Point dialog, specify the virtual IP given in pre-requisites.
  • In the Select Storage dialog, enter or select the shared drive where the UIM Server is installed.
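
The Failover Cluster Manager steps above can alternatively be scripted with the FailoverClusters PowerShell module. This is a hedged sketch: the Windows service name ("NimbusWatcherService"), the role name, the virtual IP, and the disk name below are placeholder assumptions to adjust for your environment, not values from this article.

```
# Scripted equivalent of the Generic Service steps above. All names
# and the IP are placeholders -- verify the actual service name on
# your node first (e.g. Get-Service *nimbus*).
Import-Module FailoverClusters

Add-ClusterGenericServiceRole -ServiceName "NimbusWatcherService" `
    -Name "NMSRobotWatcher" `
    -StaticAddress 10.0.0.50 `
    -Storage "Cluster Disk 2"
```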

Install IM:

  1. Download the installer for Infrastructure Manager (NimBUS Manager) and install it on the active node.
  2. Restart the "Nimsoft Service Controller" service (stop it, then start it).

Deploy and Configure the Robot:

  1. Obtain robot_update 5.70HF1 or 7.91 (or a later version) from the support team or the web archive, as these releases support IP virtualization.
  2. Download robot_update (5.70HF1 or 7.91 or later version)  to your local archive.
  3. Two robots are available on the hub, one for each node of the cluster. Ensure that you deploy the robot to the active node. 
  4. Deploy the new robot package to the existing primary hub robot: go to the archive, right-click in the right pane of the window, click Import, and select the file copied to the local drive. Note: The distribution process can report that the deployment finished with unknown status; this message can be ignored.
  5. Edit the robot configuration:
    1. Open the robot.cfg file present under the <shared drive>:\Nimsoft\robot folder.
    2. Make the following changes:
       hubip = <Nimsoft_Service_virtual_IP> (the virtual IP from the prerequisites)
       robotip = <Nimsoft_Service_virtual_IP>
       strict_ip_binding = no (default)
       local_ip_validation = no (default)
    3. Create the NIMBUS_LOCAL_IP system environment variable on both cluster nodes. Set it to the virtual IP address.
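
After these edits, the relevant robot.cfg fragment would look roughly like the following. This is an illustrative sketch, with 10.0.0.50 standing in for your virtual IP, and it assumes these keys live in the robot's <controller> section:

```
<controller>
   hubip = 10.0.0.50
   robotip = 10.0.0.50
   strict_ip_binding = no
   local_ip_validation = no
</controller>
```

The environment variable can be set machine-wide from an elevated prompt on each node (again with a placeholder IP):

```
setx NIMBUS_LOCAL_IP "10.0.0.50" /M
```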

 Configure the Primary Hub:

  1. Open the hub configuration file hub.cfg present under the <shared drive>:\Nimsoft\hub folder.
  2. Add the line below in the <hub> section.


Restart the Nimsoft Robot Watcher Service:

  1. Open the Failover Cluster Manager on the active node.
  2. Right-click on the service created earlier (NMS Robot Watcher or Generic Service) and select the option to take it offline.
  3. Right-click on the service created earlier and select the option to take it online.
  4. Right-click the Robot Watcher service and click Properties.
  5. Go to "Dependencies" tab, and set the dependencies with cluster shared disk and virtual IP address.
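
The same offline/online cycle and dependency setup can be scripted with the FailoverClusters module. The resource, disk, and IP names below are placeholders for your own cluster (list the real ones with Get-ClusterResource):

```
# Scripted equivalent of the restart-and-dependencies steps above.
# Group/resource names and the IP are placeholders.
Import-Module FailoverClusters

Stop-ClusterGroup  -Name "NMSRobotWatcher"
Start-ClusterGroup -Name "NMSRobotWatcher"

# Make the service depend on the shared disk and the virtual IP:
Set-ClusterResourceDependency -Resource "NMS Robot Watcher" `
    -Dependency "[Cluster Disk 2] and [IP Address 10.0.0.50]"
```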

 Configure Single Robot to Represent the Cluster:

  1. Log in to Infrastructure Manager. You will find two robots, one for each node of the cluster. The robot on the active node should be green, and the robot on the passive node is typically red.
  2. Double-click the controller probe on the active node.
  3. Under Setup Options, click Set Specific Name and specify a unique name for the robot. It is recommended to use the name of the Robot Watcher service rather than the physical hostname. Then click Apply and click Yes to restart the probe.
  4. Right-click the robot that is on the second node and select Remove. The robot is deleted from the list of registered robots for the hub, which prevents the generation of alarms due to its red (passive) state.

Additional Information

  • If any of the probes managed by the robot are red, or display a message for Invalid security, right-click the probe and select Security -> Validate.

  • Finally, you can test/validate the cluster by forcing a failover.
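
One way to force that test failover is to move the clustered role from PowerShell and then confirm the robot stays green in Infrastructure Manager. Role and node names below are placeholders for your environment:

```
# Sketch of a forced failover for validation (placeholder names).
Import-Module FailoverClusters

Move-ClusterGroup -Name "NMSRobotWatcher" -Node "NODE2"
Get-ClusterGroup  -Name "NMSRobotWatcher"   # check OwnerNode and State
```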