Disaster Recovery and RE-IP of VMware Identity Manager

Article ID: 377774

Updated On:

Products

VMware Aria Suite

Issue/Introduction

This article outlines a disaster recovery (DR) plan for both standard and clustered VMware Identity Manager (vIDM) deployments.

Environment

VMware Identity Manager 3.3.x

Resolution

Note:

  1. If vIDM is managed by VMware Aria Suite Lifecycle (vASL), power off vIDM before powering off vASL during DR failover from the source site, and power vIDM back on after vASL on the destination site post DR failover (a sketch follows this note).
  2. If vIDM is the authentication provider for other Aria products such as Aria Automation and Aria Operations, power off vIDM after powering off the dependent Aria products on the source site prior to DR failover, and power vIDM on before powering on the dependent Aria products on the destination site post DR failover.
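
As a hedged illustration of the power ordering in note 1, the sketch below uses govc, the open-source vSphere CLI. The VM names are placeholders for your inventory, and the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables are assumed to be exported already:

      # Source site, before DR failover: shut down vIDM first, then vASL.
      # 'vidm-node' and 'vasl-node' are placeholder VM names.
      govc vm.power -s vidm-node    # -s requests a graceful guest OS shutdown
      govc vm.power -s vasl-node

      # Destination site, after DR failover: power on vASL first, then vIDM.
      govc vm.power -on vasl-node
      govc vm.power -on vidm-node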

DR and RE-IP of vIDM Standard Deployment

Failover of vIDM Standard Deployment:

    1. Recover the vIDM VM.
    2. Update the DNS mapping in the DNS servers so the new IP points to the existing hostname: remove the old IP from the existing name server mapping and add the new IP against the existing product hostname (a hedged nsupdate example follows this list).
    3. SSH into the vIDM node using root credentials.
    4. Update the IP, gateway, DNS, and netmask using vami_config_net:
      /opt/vmware/share/vami/vami_config_net

    5. Update the new DNS addresses with the command (addresses are space-separated):
      /opt/vmware/share/vami/vami_set_dns DNS1 DNS2

    6. Run the script 'updateIP.sh' on the vIDM node. The script is available in the Attachments section of this article.
    7. Wait about 20 minutes for all services to come up, then validate the vIDM UI and SSH access.
    8. After failover of vASL, run a vIDM Inventory Sync in vASL to update the new IP address in the vASL inventory.
      • If a failure occurs at the snapshot update task, skip the task in the request and proceed.
      • Trigger the Inventory Sync again to make sure there are no failures.
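
For step 2 above, re-pointing the DNS record can be scripted. The following is a minimal sketch using nsupdate against a DNS server that accepts dynamic updates; the server, zone, hostname, TTL, and IP are all placeholders, so substitute your own values or use your DNS management tooling:

      # Placeholder values: re-point vidm.example.com to the new IP 10.1.0.10.
      {
        echo 'server dns1.example.com'
        echo 'zone example.com'
        echo 'update delete vidm.example.com. A'
        echo 'update add vidm.example.com. 300 A 10.1.0.10'
        echo 'send'
      } | nsupdate

      # Verify the record returns the new IP before continuing:
      dig +short vidm.example.com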

Failback of vIDM Standard Deployment:

    1. Re-protect (SRM option) or reverse the direction of the failover, then run the disaster recovery of the vIDM instance using SRM or the DR tool of your choice.
    2. Check whether all changes have been reverted to the original configuration. If not, follow the steps below to update the configuration.
      1. Update the DNS mappings in the DNS servers so the original IPs point to the existing hostnames (a verification sketch follows this list).
        • Remove the new IP-to-hostname mappings and add the old IPs back against the hostnames.
      2. SSH into the vIDM node using root credentials and update the network configuration:
        • Update the IP, gateway, DNS, and netmask using vami_config_net:
          /opt/vmware/share/vami/vami_config_net

        • Update the DNS addresses with the command (addresses are space-separated):
          /opt/vmware/share/vami/vami_set_dns DNS1 DNS2

    3. Run the script 'updateIP.sh' on the vIDM node. The script is available in the Attachments section of this article.
    4. Wait about 20 minutes for all services to come up, then validate the vIDM UI and SSH access.
    5. If vIDM is managed by vASL, run a vIDM Inventory Sync in vASL after vASL failback to update the IP address in the vASL inventory.
      • If a failure occurs at the snapshot update task, skip the task in the request and proceed.
      • Trigger the Inventory Sync again to make sure there are no failures.
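
Once failback completes, a quick hedged validation that the original mapping and service are back (the hostname is a placeholder, and the /SAAS health endpoint shown is the commonly used vIDM health check):

      # Confirm the hostname resolves to the original IP again:
      dig +short vidm.example.com

      # Confirm the vIDM web service responds (expect HTTP 200):
      curl -k -s -o /dev/null -w '%{http_code}\n' https://vidm.example.com/SAAS/API/1.0/REST/system/health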

 

DR and RE-IP of vIDM Clustered Deployment

Failover of vIDM Clustered Deployment:

    1. Recover the vIDM cluster.
    2. Gracefully shut down only the vIDM services, without powering off the nodes (KB 78815).
            Follow ONLY Step 7 of the "Clustered VMware Identity Manager" section of that KB.
    3. Update the DNS mappings in the DNS servers so the new IPs point to the existing hostnames.
      • Remove the old IP-to-hostname mappings.
    4. Update the load balancer with the new IPs.
      • The Virtual Server and Pool entries should be updated to the new IPs.
    5. Check the /etc/hosts file for the 'master' and 'delegateIP' values on all nodes and update them to the recovered site's IPs.
      • Edit /etc/hosts and update the DelegateIP and cluster master IP values on all three nodes.
      • All three nodes should have identical values.
    6. Update the hard-coded cluster node IPs to the new IPs in the /usr/local/etc/pgpool.conf file on all nodes (a hedged sed sketch follows this list).
      • The 'master' node from Step 5 is the 'Primary' node below.
      • Update the following properties in the file with the new IPs, starting from the primary node identified in Step 5:

        1. backend_hostname0 - Primary node
           backend_hostname1 - Secondary node
           backend_hostname2 - Secondary node
        2. heartbeat_destination0 - a node other than the one being edited
           heartbeat_destination1 - a node other than the one being edited
        3. other_pgpool_hostname0 - a node other than the one being edited
           other_pgpool_hostname1 - a node other than the one being edited

    7. Update the value of "VIDM_NETMASK" in the /usr/local/etc/failover.sh file to the new netmask on all three nodes.
    8. Update the vIDM IP, gateway, netmask, and DNS in the vCenter OVF properties for all three nodes.
      • If the OVF properties option is not available, update the values using the VAMI tools:
        • Update the new IP address using vami_config_net:
          /opt/vmware/share/vami/vami_config_net

        • Update the new DNS addresses with the command (addresses are space-separated):
          /opt/vmware/share/vami/vami_set_dns DNS1 DNS2
        • Validate the network configuration in /etc/systemd/network/10-eth0.network.

      • Validate that the DNS entries are assigned: run 'resolvectl' on all nodes and check the DNS servers list.

    9. Power off the VMs in vCenter.
    10. Power on the VMs in vCenter.
      • Check that the updated network values persist after the reboot.
    11. Assign the delegate IP on the master node.
      • Run the following command (replace delegateIP and <netmask> with your values):
        ifconfig eth0:0 inet delegateIP netmask <netmask>
      • Restart the horizon-workspace service on all nodes:
        service horizon-workspace restart
      • Identify the pgpool password by running:
        cat /usr/local/etc/pgpool.pwd
      • Run the following command to list all configured nodes with their corresponding Postgres status:
        su root -c "echo -e 'password'|/opt/vmware/vpostgres/current/bin/psql -h localhost -p 9999 -U pgpool postgres -c \"show pool_nodes\""

        Command parameter help:
        -h : the host against which the command is run; here it is 'localhost'
        -p : the port on which Pgpool accepts connections; here it is 9999
        -U : the Pgpool user, which is 'pgpool'
        -c : the command to run, which is 'show pool_nodes'

    12. Run the script 'updateIP.sh' on the master node. The script is available in the Attachments section of this article.
    13. Check the vIDM diagnostics status in the console. If not started on all nodes, start the Elasticsearch service (vIDM 3.3.6 or older) or the OpenSearch service (vIDM 3.3.7).
    14. After failover of vASL, run a vIDM Inventory Sync in vASL to update the new IP address in the vASL inventory.
        • If a failure occurs at the snapshot update task, skip the task in the request and proceed.
        • Trigger the Inventory Sync again to make sure there are no failures.
    15. Re-trust the LB certificate on vIDM using vASL actions (required if the LB certificate was updated).
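
For step 6 above, the pgpool.conf edits can be scripted. The following is a hedged sed sketch with placeholder IPs; back up the file first and adjust the old/new IP pairs to your cluster. A straight substitution preserves the "other node" relationships of the heartbeat_destination and other_pgpool_hostname properties:

      # Placeholder IPs: replace each old node IP with its new counterpart.
      cp /usr/local/etc/pgpool.conf /usr/local/etc/pgpool.conf.bak
      sed -i -e 's/10\.0\.0\.11/10.1.0.11/g' \
             -e 's/10\.0\.0\.12/10.1.0.12/g' \
             -e 's/10\.0\.0\.13/10.1.0.13/g' /usr/local/etc/pgpool.conf

      # Spot-check the properties called out in step 6:
      grep -E 'backend_hostname|heartbeat_destination|other_pgpool_hostname' /usr/local/etc/pgpool.conf

Run the same substitution on all three nodes before moving on to step 7.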

Failback of vIDM Clustered Deployment:

    1. Re-protect (SRM option) or reverse the direction of the failover, then run the disaster recovery of the vIDM cluster nodes using SRM.
    2. Gracefully shut down only the vIDM services, without powering off the nodes (KB 78815).
            Follow ONLY Step 7 of the "Clustered VMware Identity Manager" section of that KB.
    3. Update the DNS mappings in the DNS servers so the original IPs point to the existing hostnames.
      • Remove the new IP-to-hostname mappings.
    4. Update the load balancer with the original IPs.
      • Update the Virtual Server and Server Pool IPs.
    5. Update the hard-coded cluster node IPs in the /usr/local/etc/pgpool.conf file on all nodes back to the original IPs (the sed sketch after the failover procedure above applies here as well, with the original IPs as targets).
      • Update the following properties in the file with the original IPs respectively:
        1. backend_hostname0 - Primary node
           backend_hostname1 - Secondary node
           backend_hostname2 - Secondary node
        2. heartbeat_destination0 - a node other than the one being edited
           heartbeat_destination1 - a node other than the one being edited
        3. other_pgpool_hostname0 - a node other than the one being edited
           other_pgpool_hostname1 - a node other than the one being edited

    6. Update the value of "VIDM_NETMASK" in the /usr/local/etc/failover.sh file to the original netmask on all three nodes.
    7. Edit /etc/hosts and update the DelegateIP and cluster master IP values on all three nodes.
      • All three nodes should have identical values.
    8. Update the gateway, netmask, and DNS in the vCenter OVF properties for all three nodes.
      • Update the DNS entries using the command (addresses are space-separated):
        /opt/vmware/share/vami/vami_set_dns DNS1 DNS2
      • Update the DNS entries in the vApp Options in vCenter.
      • Update the gateway and netmask using vami_config_net:
        /opt/vmware/share/vami/vami_config_net
      • Validate the network configuration in /etc/systemd/network/10-eth0.network.

    9. Power off the VMs in vCenter.
    10. Power on the VMs in vCenter.
      • Check that the updated network values persist after the reboot.
    11. Assign the delegate IP on the master node (a validation sketch follows this list).
      • Use the command (replace delegateIP and <netmask> with your values):
        ifconfig eth0:0 inet delegateIP netmask <netmask>
      • Restart the horizon-workspace service on all nodes:
        service horizon-workspace restart
      • Identify the pgpool password by running:
        cat /usr/local/etc/pgpool.pwd
      • Run the following command to list all configured nodes with their corresponding Postgres status:
        su root -c "echo -e 'password'|/opt/vmware/vpostgres/current/bin/psql -h localhost -p 9999 -U pgpool postgres -c \"show pool_nodes\""

        Command parameter help:
        -h : the host against which the command is run; here it is 'localhost'
        -p : the port on which Pgpool accepts connections; here it is 9999
        -U : the Pgpool user, which is 'pgpool'
        -c : the command to run, which is 'show pool_nodes'

    12. Run the script 'updateIP.sh' on the master node. The script is available in the Attachments section of this article.
    13. Check the vIDM diagnostics status in the console. If not started on all nodes, start the Elasticsearch service (vIDM 3.3.6 or older) or the OpenSearch service (vIDM 3.3.7).
    14. After failback of vASL, run a vIDM Inventory Sync in vASL to update the IP address in the vASL inventory.
      • If a failure occurs at the snapshot update task, skip the task in the request and proceed.
      • Trigger the Inventory Sync again to make sure there are no failures.
    15. Re-trust the LB certificate on vIDM using vASL actions (required if the LB certificate was updated).
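
As a hedged validation after step 11 and the updateIP.sh run, the sketch below checks the delegate IP alias, the application service, and pgpool's view of the cluster on the master node; the paths and port are the ones used earlier in this article, and every backend should report status 'up':

      # Confirm the delegate IP alias is present on eth0:
      ip addr show eth0 | grep 'inet '

      # Confirm the application service is running (repeat on all nodes):
      service horizon-workspace status

      # Re-run the pool status query; all nodes should show status 'up':
      PGPOOL_PWD=$(cat /usr/local/etc/pgpool.pwd)
      su root -c "echo -e '$PGPOOL_PWD'|/opt/vmware/vpostgres/current/bin/psql -h localhost -p 9999 -U pgpool postgres -c \"show pool_nodes\""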

Additional Information

  • Disaster Recovery and RE-IP of VMware Aria Suite Lifecycle
  • Disaster Recovery and RE-IP of VMware Aria Automation

Attachments

updateIP.sh