Symptoms:
- The CLI command "get cluster status" returns all UP, and the UI shows the cluster as stable.
- On the Location Manager tab, the newly added standby Global Manager (GM) nodes show the status "None", whereas the expected status is "Standby".
- A GET request to https://<Standby_GM_IP/FQDN>/global-manager/api/v1/global-infra/global-managers/<Standby_GM_Display_Name> returns NONE for the mode property (see the curl sketch after this list).
- When querying /api/v1/sites, it is observed that the Local Manager (LM) configuration is still pointing to the old standby GM nodes that were replaced (also shown in the sketch below).
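The two API checks above can be reproduced from a shell. This is a minimal sketch, assuming admin credentials with basic authentication; the jq filters and the exact response shapes are assumptions, not confirmed by this article:

# Check the standby GM's reported mode (expected STANDBY; the symptom is NONE).
curl -k -u 'admin:<password>' \
  "https://<Standby_GM_IP_or_FQDN>/global-manager/api/v1/global-infra/global-managers/<Standby_GM_Display_Name>" \
  | jq '.mode'

# List the sites known to the Active GM and look for stale standby GM entries.
curl -k -u 'admin:<password>' \
  "https://<Active_GM_IP_or_FQDN>/api/v1/sites"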
Products:
VMware NSX
Cause:
The issue is caused by the standby Global Manager becoming disconnected from the Site Manager. This prevents proper synchronization between the Global Managers and leaves the off-boarding process incomplete.
Resolution:
Execute the off-boarding API for the standby Global Manager node by running the following command from the Active Global Manager CLI as root:

curl -X POST -ik "http://localhost:7441/api/v1/sites?action=offboard_remote" \
  -H "Content-Type: application/json" \
  -d '{"credential": {"ip": "", "port": 443, "username": "", "password": "", "thumbprint": ""}, "site_id": "#####################################"}'
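Note that this command targets the Global Manager's internal REST endpoint on localhost:7441, which appears to be why the procedure is run locally on the Active Global Manager as root rather than against the public API on port 443.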
The site_id can be retrieved by running the following GET request and selecting the site that needs to be off-boarded:

GET https://<Active NSX Global Manager FQDN or IP>/api/v1/sites?version=latest
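The same lookup can be scripted. A minimal sketch, assuming admin credentials and that the response follows the usual NSX list envelope with a results array of objects carrying id and display_name (assumptions, not confirmed by this article):

# Print the id and display_name of each site registered on the Active GM.
curl -k -u 'admin:<password>' \
  "https://<Active_GM_FQDN_or_IP>/api/v1/sites?version=latest" \
  | jq '.results[] | {id, display_name}'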
Once the off-boarding API has been executed, the stale standby Global Manager entry is automatically removed from the UI. Afterward, delete the GM that is in "NONE" mode via the UI and re-add it; it will then register correctly with the "Standby" status.
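To confirm the fix, the mode check from the symptoms section can be re-run against the re-added standby GM. A minimal sketch, assuming admin credentials; STANDBY as the healthy value follows from the expected "Standby" status described above:

# After re-adding the standby GM, the mode property should no longer be NONE.
curl -k -u 'admin:<password>' \
  "https://<Standby_GM_IP_or_FQDN>/global-manager/api/v1/global-infra/global-managers/<Standby_GM_Display_Name>" \
  | jq '.mode'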