Unable to remove obsolete vCenter (Compute Manager) in NSX due to stale AVI appliance

Article ID: 428002

Updated On:

Products

VMware NSX

Issue/Introduction

  • The vCenter is no longer in use and has since been removed from the environment.
  • In NSX, under System > Fabric > Compute Managers, the Compute Manager status shows as Down and the last update time corresponds to when the Compute Manager was decommissioned.
  • When trying to delete the Compute Manager, you get an error in the UI:

Compute Manager delete failed.: Compute manager 9b645a07-####-####-####-############ is in use. Please delete nsx managed VM(s) [<VM-name>] and then retry deleting compute manager.

  • An AVI load balancer was deployed through the NSX interface at some point in the past and has since been removed. The <VM-name> in the error refers to the name of that AVI appliance, which was deployed and is now removed.
  • Running the API call below does not remove the stale node, and the Compute Manager still cannot be deleted (see the curl example after the API call):

POST /api/v1/transport-nodes?action=clean_stale_entries
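
For reference, the sketch below shows one way this call can be issued with curl, run as the root user on one of the NSX managers. The admin user and localhost endpoint mirror the API calls listed later in this article, and the command prompts for the admin password; in this scenario the call completes but the stale AVI entry remains:

# Hypothetical example: attempt to clean stale transport-node entries (prompts for the admin password).
curl -k -u admin -X POST -H "Content-Type: application/json" "https://localhost/api/v1/transport-nodes?action=clean_stale_entries"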

Environment

VMware NSX

Cause

When the AVI load balancer was removed, a stale entry was left behind in the NSX Corfu database. Because the appliance was deployed on this Compute Manager, the stale entry prevents the Compute Manager from being deleted.

Please note: a separate KB article deals with stale Edge nodes. If the stale VM is an Edge node, please review the KB Failed to delete Compute Manager in NSX-T.

Resolution

If you believe you have encountered this issue, open a support case with Broadcom Support and refer to this KB article.

For more information, see Creating and managing Broadcom support cases.

 

When opening the case with Broadcom NSX support, please provide:

  • The NSX Manager logs from all three NSX Manager nodes.
  • The date and time (including timezone) when the delete of the old Compute Manager was attempted.
  • Name and UUID of the old compute manager.
  • Results from the following Corfu database exports, to be run as the root user on one of the NSX managers:
    • /opt/vmware/bin/corfu_tool_runner.py -n nsx -o showTable -t DeploymentUnitInstance > /root/DeploymentUnitInstance.dump
    • /opt/vmware/bin/corfu_tool_runner.py -o showTable -n nsx -t AlbControllerNodeVmDeploymentRequest > /root/AlbControllerNodeVmDeploymentRequest.dump
    • /opt/vmware/bin/corfu_tool_runner.py -o showTable -n nsx -t DeploymentUnit > /root/DeploymentUnit.dump
    • /opt/vmware/bin/corfu_tool_runner.py -o showTable -n nsx -t GenericPolicyRealizedResource > /root/GenericPolicyRealizedResource.dump
    • /opt/vmware/bin/corfu_tool_runner.py -o showTable -n nsx -t Alarm > /root/Alarm.dump
  • Results from the following API calls, also run as the root user on one of the NSX managers; these commands will prompt for the admin password:
    • curl -k -u admin https://localhost/api/v1/fabric/compute-managers > /root/compute-managers.api
    • curl -k -u admin https://localhost/api/v1/cluster/nodes/deployments > /root/deployments.api
    • curl -k -u admin https://localhost/api/v1/transport-nodes > /root/transport-nodes.api
  • The above Corfu and API commands save their output as .dump and .api files in the /root/ directory of the NSX manager where they were run. These files are not included in the log collection process, so please use WinSCP or a similar tool to copy them off the manager and upload them when opening the case (see the sketch below).
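
As an illustration only, the lines below show one way to run the requested Corfu exports in a loop and then copy the resulting files to a workstation. The table names and output paths are taken from the list above; the scp command is an assumption (root SSH access and the manager FQDN are placeholders), and WinSCP works equally well:

# Hypothetical helper: run as root on one NSX manager to export the requested Corfu tables.
for table in DeploymentUnitInstance AlbControllerNodeVmDeploymentRequest DeploymentUnit GenericPolicyRealizedResource Alarm; do
    /opt/vmware/bin/corfu_tool_runner.py -o showTable -n nsx -t "$table" > "/root/${table}.dump"
done

# From a workstation with SSH access to the manager, copy the exports and API outputs off (placeholder FQDN).
scp "root@<nsx-manager-fqdn>:/root/*.dump" "root@<nsx-manager-fqdn>:/root/*.api" .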