Installation procedure for UIM on Microsoft Windows cluster environment

Article ID: 44270

Products

DX Unified Infrastructure Management (Nimsoft / UIM)

Issue/Introduction



Installation procedure for UIM on a Microsoft Windows cluster environment.

 

Environment

UIM:  8.3, 8.4 and 8.51

Windows Cluster 2008 and 2012

SQL Server Cluster 2008, 2012, and 2014

Resolution

UIM supports two different approaches to failover configuration:
  1. Primary and secondary hubs with the HA probe: CA UIM is installed on both a primary and a secondary hub, and the HA probe manages failover between them.
  2. Windows Cluster with a virtual IP address: the cluster presents a single virtual IP address, so none of the CA UIM components need to be reconfigured to point to the failover node.

This document describes the failover configuration for Microsoft Windows cluster machines using a virtual IP address. For high availability of the database, a SQL Server cluster is always recommended.

Prerequisites:



  1. You must have administrative privileges on all cluster nodes.
  2. A shared drive must be available to both nodes.
  3. Reserve one IP address that is not in use by any other service or host. This becomes the virtual IP address for the cluster service.


Follow the steps below to install UIM on both nodes:



  1. Run the CA UIM installer on the active node. (Make sure the shared drive is accessible from the active node.)
    a) Select the installation folder path as <Shared Drive>:\Nimsoft.
    b) Specify the hostname or IP address as the physical IP address of the system.
    c) Create a new database (CA_UIM by default) for Microsoft SQL Server. Note that the SQL Server user must have administrative privileges equivalent to the 'sa' user (by default you can also supply the sa username and password).
    d) Make note of the domain and hub names. You will need this information when you install CA UIM Server on the secondary node, as both cluster nodes share the same hub name.
    e) Wait for the installation to complete. Any warnings can be ignored.
  2. Once the primary node installation is complete, reboot the server so that the secondary node becomes active and the shared drive is accessible from it.
  3. Verify that the shared drive is accessible on the secondary node.
  4. Install UIM on the secondary node. This ensures that all the registry entries and dll/exe files are properly referenced on the secondary node as well.
    a) Select the installation folder path as <Shared Drive>:\Nimsoft.
    b) Specify the same domain name and hub name as on the primary node.
    c) Specify the hostname or IP address as the physical IP address of the system.
    d) Use the database created on the primary node and provide the username and password for authentication.
    Note: If the UIM installation fails, verify that the shared disk is still accessible, then open the robot.cfg file and make sure that robotip and hubip point to the secondary node's physical IP address. Then run the installer again.
  5. Once the installation is complete on the secondary node, restart the node to make the primary node active again.
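For the troubleshooting note in step 4, the robotip and hubip keys live in the <controller> section of robot.cfg. A minimal sketch of the edit is shown below; the IP address 192.168.1.12 is a placeholder for the secondary node's physical IP, and the exact section layout may vary slightly between robot versions:

```
<controller>
   robotip = 192.168.1.12
   hubip = 192.168.1.12
</controller>
```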

 Configure the NMS Robot Watcher or Generic Service for the failover nodes:

  • On the active node, go to "Start" -> "Administrative Tools" -> "Failover Cluster Manager".
  • Expand the tree and select "Services and Applications". Right-click "Services and Applications", click "Configure a Service or Application", and select "Generic Service".
  • In the "Select Service" dialog, select "Nimsoft Robot Watcher" and click Next.
  • In the "Client Access Point" dialog, specify the virtual IP address reserved in the prerequisites.
  • In the "Select Storage" dialog, enter or select the shared drive where the CA UIM Server is installed.


Install IM:



  1. Download the NimBUS Manager (Infrastructure Manager) installer and install it on the active node.
  2. Stop and restart the "Nimsoft Service Controller" service.


Deploy and Configure the Robot:



  1. Obtain robot_update version 5.70HF1, 7.91, or later from the CA support team or the web archive, as these releases support IP virtualization.
  2. Download the robot_update package (5.70HF1, 7.91, or later) to your local archive.
  3. Two robots are available on the hub, one for each node of the cluster. Ensure that you deploy the robot to the active node.
  4. Deploy the new robot package to the existing primary hub robot: go to the archive, right-click in the right frame of the window, click Import, and select the file copied to the local drive. Note: The distribution process can report that the deployment finished with unknown status; this message can be ignored.
  5. Edit the robot configuration:
    1. Open the robot.cfg file under the <shared drive>:\Nimsoft\robot folder.
    2. Make the following changes:
    3. hubip = <Nimsoft_Service_virtual_IP> (the virtual IP from the prerequisites)
    4. robotip = <Nimsoft_Service_virtual_IP>
    5. strict_ip_binding = no (default)
    6. local_ip_validation = no (default)
    7. Create the NIMBUS_LOCAL_IP system environment variable on both cluster nodes and set it to the virtual IP address.
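Taken together, the edited <controller> section of robot.cfg would look roughly like the fragment below. The virtual IP 10.0.0.50 is a placeholder; substitute the address reserved in the prerequisites, and note that the section layout may differ slightly between robot versions:

```
<controller>
   hubip = 10.0.0.50
   robotip = 10.0.0.50
   strict_ip_binding = no
   local_ip_validation = no
</controller>
```

The NIMBUS_LOCAL_IP system variable can be set from an elevated command prompt on each node, for example with `setx /M NIMBUS_LOCAL_IP 10.0.0.50` (the /M switch makes it machine-wide).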

 Configure the Primary Hub:

  1. Open the hub configuration file hub.cfg under the <shared drive>:\Nimsoft\hub folder.
  2. Add the following line in the <hub> section:

        bulk_size_floor=1
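After the edit, the relevant part of hub.cfg would look roughly like this (other keys normally present in the <hub> section are omitted for brevity):

```
<hub>
   bulk_size_floor = 1
</hub>
```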
 Restart Robot Watcher Service:

  1. Open the Failover Cluster Manager on the active node.
  2. Right-click the service created earlier (NMS Robot Watcher or Generic Service) and select the option to take it offline.
  3. Right-click the same service and select the option to bring it online.
  4. Right-click the Robot Watcher service and click Properties.
  5. On the "Dependencies" tab, set dependencies on the cluster shared disk and the virtual IP address.

 Configure Single Robot to Represent the Cluster:

  1. Log in to Infrastructure Manager. You will find two robots, one for each node of the cluster. The robot on the active node should be green; the robot on the passive node is typically red.
  2. Double-click the controller probe on the active node.
  3. Under Setup Options, click Set Specific Name and specify a unique name for the robot. It is recommended to use the name of the Robot Watcher service rather than the physical hostname. Then click Apply and click Yes to restart the probe.
  4. Right-click the robot on the second (passive) node and select Remove. The robot is deleted from the list of registered robots for the hub, which prevents the generation of alarms caused by its red (passive) state.

Additional Information

If any probe managed by the robot is red or shows invalid security, right-click the probe and select Security > Validate.
If any components are using auto-generated licenses, replace them with standard licenses.
Finally, validate the configuration by performing a cluster failover.