Disaster Recovery and RE-IP of VMware Identity Manager
Article ID: 377774
Updated On:
Products
VMware Aria Suite
Issue/Introduction
This article outlines a disaster recovery (DR) plan for both standard and clustered VMware Identity Manager (vIDM) deployments.
Environment
VMware Identity Manager 3.3.x
Resolution
Note:
If vIDM is managed by VMware Aria Suite Lifecycle (vASL), power off vIDM before vASL when failing over from the source site, and power vIDM on after vASL once DR failover to the destination site completes.
If vIDM is used as the authentication provider for other Aria products such as Aria Automation, Aria Operations, etc., power off vIDM after the dependent Aria products are powered off on the source site prior to DR failover, and power vIDM on before the dependent Aria products are powered on in the destination site after DR failover.
DR and RE-IP of vIDM Standard Deployment
Failover of vIDM Standard Deployment:
Recover vIDM VM.
Update the DNS mappings in the DNS servers so the existing hostname points to the new IP: remove the old IP from the existing name server mapping and add the new IP against the existing product hostname.
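For example, once the record is updated, resolution of the existing hostname (placeholder name shown) can be verified from any workstation and should now return the new IP:
nslookup vidm.example.com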
SSH into the vIDM node using root credentials.
Update the IP, gateway, DNS, and netmask with vami_config_net using the command: /opt/vmware/share/vami/vami_config_net
Update the new DNS addresses with the command: /opt/vmware/share/vami/vami_set_dns <DNS1> <DNS2>
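For example, with placeholder DNS server addresses 192.0.2.10 and 192.0.2.11:
/opt/vmware/share/vami/vami_set_dns 192.0.2.10 192.0.2.11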
Run the script 'updateIP.sh' on the vIDM node. The script is available in the attachments section of the KB article.
Wait about 20 minutes for all the services to come up, then validate the vIDM UI and SSH access.
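As an additional check (placeholder hostname shown), the vIDM system health REST endpoint can be queried once the services are up and should return the component health status:
curl -k https://vidm.example.com/SAAS/API/1.0/REST/system/health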
Post failover of vASL, run a vIDM Inventory Sync in vASL to update the new IP address in the vASL inventory.
If a failure occurs at the snapshot update task, skip the task in the request and proceed.
Trigger the Inventory Sync again to make sure there are no failures.
Failback of vIDM Standard Deployment:
Re-protect (SRM option) or reverse the direction of the failover, then run the disaster recovery of the vIDM instance using SRM or the DR tool of choice.
Check whether all the changes have reverted to the original configuration. If not, follow the steps below to update the configurations.
Update the DNS mappings in the DNS servers so the existing hostnames point to the original IPs.
Remove the new IP-to-hostname mappings and add the old IPs against the existing hostnames.
SSH into the vIDM node using root credentials to update network configurations:
Update the IP, gateway, DNS, and netmask with vami_config_net using the command: /opt/vmware/share/vami/vami_config_net
Update the new DNS addresses with the command: /opt/vmware/share/vami/vami_set_dns <DNS1> <DNS2>
Run the script 'updateIP.sh' on the vIDM node. The script is available in the attachments section of the KB article.
Wait about 20 minutes for all the services to come up, then validate the vIDM UI and SSH access.
Post failback of vASL, run a vIDM Inventory Sync in vASL (if vIDM is managed by vASL) to update the IP address in the vASL inventory.
If a failure occurs at the snapshot update task, skip the task in the request and proceed.
Trigger the Inventory Sync again to make sure there are no failures.
DR and RE-IP of vIDM Clustered Deployment
Failover of vIDM Clustered Deployment:
Recover the vIDM cluster.
Gracefully shut down only the vIDM services without powering off the nodes (KB78815). Follow ONLY Step 7 of the "Clustered VMware Identity Manager" section of that KB.
Update the DNS mappings in the DNS servers so the existing hostnames point to the new IPs.
Remove the old IP-to-hostname mappings.
Update the Load Balancer with the new IPs.
The Virtual Server and Pool member IP addresses should be updated to the new IPs.
Check the /etc/hosts file for the 'master' and 'delegateIP' values on all the nodes and update them to the recovered-site IPs.
Edit /etc/hosts and update the delegateIP and cluster master IP values on all 3 nodes.
All 3 nodes should have identical values.
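For illustration only (placeholder IPs; the actual alias names follow whatever is already present in the file), the master and delegateIP entries in /etc/hosts would be updated to the recovered-site addresses, for example:
192.0.2.21   master
192.0.2.24   delegateIP
Only the IP addresses change; keep the hostnames/aliases already present, and ensure all 3 nodes contain identical values.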
Change the hard-coded cluster node IPs to the new set of IPs in the /usr/local/etc/pgpool.conf file on all the nodes.
The 'master' node identified in Step 5 is the 'primary' node referred to below.
The following properties in the above file need to be updated with the new IPs; start with the primary node identified in Step 5:
heartbeat_destination0 - IP of one of the other nodes (not the node being edited)
heartbeat_destination1 - IP of the remaining node (not the node being edited)
other_pgpool_hostname0 - IP of one of the other nodes (not the node being edited)
other_pgpool_hostname1 - IP of the remaining node (not the node being edited)
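For illustration, on a node whose peers have the placeholder addresses 192.0.2.22 and 192.0.2.23, the updated pgpool.conf entries would look similar to:
heartbeat_destination0 = '192.0.2.22'
heartbeat_destination1 = '192.0.2.23'
other_pgpool_hostname0 = '192.0.2.22'
other_pgpool_hostname1 = '192.0.2.23'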
Update the value of "VIDM_NETMASK" in the /usr/local/etc/failover.sh file to the new netmask on all 3 nodes.
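For example (placeholder netmask), the entry in /usr/local/etc/failover.sh would read similar to:
VIDM_NETMASK=255.255.255.0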
Update the vIDM IP, gateway, netmask, and DNS in the vCenter OVF properties for all 3 nodes.
If the OVF properties option is not available, update the values using the VAMI configuration commands:
Update the new IP address with vami_config_net using the command: /opt/vmware/share/vami/vami_config_net
Update the new DNS addresses with the command: /opt/vmware/share/vami/vami_set_dns <DNS1> <DNS2>
Validate the network configurations: /etc/systemd/network/10-eth0.network
Validate that the DNS entries are assigned: execute 'resolvectl' on all the nodes and check the DNS servers list.
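For example, to show only the DNS server entries from the resolvectl output:
resolvectl status | grep -i -A 2 'DNS Server'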
Power OFF the VMs in vCenter.
Power ON the VMs in vCenter.
Check that the updated network values persist after the reboot.
Assign the Delegate IP on the Master node.
Run the following command: ifconfig eth0:0 inet <delegateIP> netmask <netmask>
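For example, with a placeholder delegate IP of 192.0.2.24 and a /24 netmask:
ifconfig eth0:0 inet 192.0.2.24 netmask 255.255.255.0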
Restart horizon-workspace service on all nodes: service horizon-workspace restart
Identify the pgpool password by running: cat /usr/local/etc/pgpool.pwd
Run the following command to generate a list of all configured nodes with their corresponding Postgres status (replace 'password' with the pgpool password obtained above): su root -c "echo -e 'password'|/opt/vmware/vpostgres/current/bin/psql -h localhost -p 9999 -U pgpool postgres -c \"show pool_nodes\""
Command parameters:
-h : The host against which the command is run, here 'localhost'
-p : The port on which Pgpool accepts connections, here 9999
-U : The Pgpool user, which is pgpool
-c : The command to run, which is 'show pool_nodes'
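For illustration only (placeholder hostnames; the exact column set varies by Pgpool version), healthy output resembles the following, with every node reporting a status of 'up' and exactly one node in the 'primary' role:
node_id | hostname                | port | status | role
--------+-------------------------+------+--------+---------
0       | vidm-node1.example.com  | 5432 | up     | primary
1       | vidm-node2.example.com  | 5432 | up     | standby
2       | vidm-node3.example.com  | 5432 | up     | standby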
Run the script 'updateIP.sh' on Master Node. The script is available in the attachments section of the KB article.
Check the vIDM diagnostics status in the console (start the ElasticSearch service if the vIDM version is 3.3.6 or older, or the OpenSearch service if vIDM is 3.3.7, if not already started on all the nodes).
Post failover of vASL, run a vIDM Inventory Sync in vASL to update the new IP address in the vASL inventory.
If a failure occurs at the snapshot update task, skip the task in the request and proceed.
Trigger the Inventory Sync again to make sure there are no failures.
Re-trust the LB certificate on vIDM using vASL actions (required if the LB certificate was updated).
Failback of vIDM Clustered Deployment:
Re-protect (SRM option) or reverse the direction of the failover, then run the disaster recovery of the vIDM cluster nodes using SRM.
Gracefully shut down only the vIDM services without powering off the nodes (KB78815). Follow ONLY Step 7 of the "Clustered VMware Identity Manager" section of that KB.
Update the DNS mappings in the DNS servers so the existing hostnames point to the original IPs.
Remove the new IP-to-hostname mappings.
Update the Load Balancer with the original IPs.
Update the Virtual Server and Server Pool IPs to the original IPs.
Change the hard-coded cluster node IPs back to the original IPs in the /usr/local/etc/pgpool.conf file on all the nodes.
Update the same properties listed in the failover section (heartbeat_destination0/1 and other_pgpool_hostname0/1) with the original IPs.
Update the gateway and netmask with vami_config_net using the command: /opt/vmware/share/vami/vami_config_net
Validate the network configurations: /etc/systemd/network/10-eth0.network
Power off the VMs in vCenter.
Power ON the VMs in vCenter.
Check that the updated network values persist after the reboot.
Assign the Delegate IP on the Master node.
Use the command: ifconfig eth0:0 inet <delegateIP> netmask <netmask>
Restart horizon-workspace service on all nodes: service horizon-workspace restart
Identify the pgpool password by running: cat /usr/local/etc/pgpool.pwd
Run the following command to generate a list of all configured nodes with their corresponding Postgres status (replace 'password' with the pgpool password obtained above): su root -c "echo -e 'password'|/opt/vmware/vpostgres/current/bin/psql -h localhost -p 9999 -U pgpool postgres -c \"show pool_nodes\""
Command parameters:
-h : The host against which the command is run, here 'localhost'
-p : The port on which Pgpool accepts connections, here 9999
-U : The Pgpool user, which is pgpool
-c : The command to run, which is 'show pool_nodes'
Run the script 'updateIP.sh' on Master Node. The script is available in the attachments section of the KB article.
Check the vIDM diagnostics status in the console (start the ElasticSearch service if the vIDM version is 3.3.6 or older, or the OpenSearch service if vIDM is 3.3.7, if not already started on all the nodes).
Post failback of vASL, run a vIDM Inventory Sync in vASL to update the IP address in the vASL inventory.
If a failure occurs at the snapshot update task, skip the task in the request and proceed.
Trigger the Inventory Sync again to make sure there are no failures.
Re-trust the LB certificate on vIDM using vASL actions (required if the LB certificate was updated).
Additional Information
Disaster Recovery and RE-IP of VMware Aria Suite Lifecycle, click here.
Disaster Recovery and RE-IP of VMware Aria Automation, click here.