Failed to delete Compute Manager in NSX-T

Article ID: 319043

Products

VMware NSX

Issue/Introduction

  • You are attempting to remove a vCenter server (Compute Manager) which is being decommissioned or has already been decommissioned.
  • This old vCenter is linked to NSX as compute manager (CM).
  • You may see messages similar to the following in the NSX UI when attempting to delete this compute manager:
Can not delete in use compute manager <UUID> due to NSX deployed VM(s) [[<NSX component names>]]

Compute Manager delete failed: Compute manager <UUID> is in use. Please delete nsx managed VM(s) [<NSX component names>]
 

Environment

VMware NSX-T 3.x and NSX 4.x

Cause

As per the administration guide, any components (NSX Managers, Edge Nodes, NSX Intelligence, Service Virtual Machines (SVMs)) deployed by a compute manager need to be disassociated from the compute manager before it can be removed from NSX: Add a Compute Manager - Results

The alert is a warning that this compute manager still has components deployed through it and therefore cannot be removed.

Resolution

You need to move the components to a new compute manager; you should then be able to remove the old compute manager.

  • For edge nodes:
    • Deploy a new edge node via the new compute manager. Once configured, swap it into the edge cluster in place of the edge on the old compute manager, and then delete the edge on the old compute manager.
    • Repeat this for any other edge nodes identified.
  • For manager nodes:
    • As the management cluster can function with two managers, delete one of the identified NSX managers and deploy a fresh manager node (configured the same as the old manager) on the new compute manager.
      Note: See Installing NSX Manager Cluster on vSphere for detailed instructions.
    • Repeat for any further managers identified as still on the old compute manager.
  • For NSX Intelligence and SVMs:
    • You will need to remove the deployment and deploy it again on the new compute manager.

You can identify which compute manager was used to deploy NSX manager nodes via the vc_id value returned from the GET https://<nsx-mgr-IP/FQDN>/api/v1/cluster/nodes/deployments API call.

Sample (truncated) output:

            "deployment_config": {
                "vc_id": "2fd34a13-####-####-####-fcc5d41f7db4",
                "compute_id": "<domain-id>",
                "storage_id": "<datastore-id>",
                "host_id": "<host-id>",
                "management_network_id": "<dvportgroup-id>",

In this sample output, the vc_id value is noted as 2fd34a13-####-####-####-fcc5d41f7db4; this identifies the compute manager (vCenter) the node was deployed from and can be checked against the compute manager IDs returned by the GET https://<nsx-mgr-IP/FQDN>/api/v1/fabric/compute-managers API call (see Additional Information below). Each deployment entry's node ID can in turn be matched to the NSX Manager UUID shown on the System > Configuration > Appliances page, under the View Details link for each of the NSX Manager nodes.
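
If you prefer to script this check, the following is a minimal sketch assuming Python 3 with the requests library; the NSX Manager address and credentials are placeholders for your environment:

    # Minimal sketch, assuming Python 3 and the 'requests' library.
    # Lists each NSX Manager node deployment and the compute manager
    # (vc_id) it was deployed from.
    import requests

    NSX_MGR = "nsx-mgr.example.com"  # placeholder NSX Manager FQDN/IP
    AUTH = ("admin", "password")     # placeholder credentials

    resp = requests.get(
        f"https://{NSX_MGR}/api/v1/cluster/nodes/deployments",
        auth=AUTH,
        verify=False,  # only if the NSX certificate is not trusted
    )
    resp.raise_for_status()

    for node in resp.json().get("results", []):
        cfg = node.get("deployment_config", {})
        # vm_id is the deployed manager node VM; vc_id is the compute
        # manager (vCenter) it was deployed from.
        print(node.get("vm_id"), cfg.get("vc_id"))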

You can identify which compute manager was used to deploy NSX edge nodes via the vc_id value returned from the GET https://<nsx-mgr-IP/FQDN>/api/v1/transport-nodes/ API call.
This API call returns all of the transport nodes present; only Edge node VMs deployed via a vCenter will contain a vc_id and can be checked. Alternatively, each individual edge node can be checked via its specific Edge ID: https://<nsx-mgr-IP/FQDN>/api/v1/transport-nodes/<Edge UUID>

Sample (truncated) output:

    "node_deployment_info": {
        "deployment_type": "VIRTUAL_MACHINE",
        "deployment_config": {
            "vm_deployment_config": {
                "vc_id": "2fd34a13-####-####-####-fcc5d41f7db4",
                "compute_id": "<domain-id>",
                "storage_id": "<datastore-id>",
                "management_network_id": "<dvportgroup-id>",

In this sample output, the vc_id value is again noted as 2fd34a13-####-####-####-fcc5d41f7db4, identifying the compute manager (vCenter) the edge node was deployed from; as above, it can be checked against the compute manager IDs returned by the GET https://<nsx-mgr-IP/FQDN>/api/v1/fabric/compute-managers API call.
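
The same check can be scripted for edge nodes. A minimal sketch under the same assumptions as above; transport nodes without a vc_id (for example host transport nodes, or edges deployed via OVF) are skipped:

    # Minimal sketch, same assumptions as above. Lists each VC-deployed
    # Edge transport node and the compute manager (vc_id) it came from.
    import requests

    NSX_MGR = "nsx-mgr.example.com"  # placeholder NSX Manager FQDN/IP
    AUTH = ("admin", "password")     # placeholder credentials

    resp = requests.get(
        f"https://{NSX_MGR}/api/v1/transport-nodes",
        auth=AUTH,
        verify=False,  # only if the NSX certificate is not trusted
    )
    resp.raise_for_status()

    for tn in resp.json().get("results", []):
        info = tn.get("node_deployment_info") or {}
        vm_cfg = (info.get("deployment_config") or {}).get("vm_deployment_config") or {}
        if vm_cfg.get("vc_id"):  # only VC-deployed Edge VMs carry a vc_id
            print(tn.get("display_name"), vm_cfg.get("vc_id"))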


Note: Before performing the above steps, ensure you have up-to-date backups, and verify the cluster is in a healthy state before each component is removed.
To check the NSX Manager cluster status, run the following as admin:

get cluster status

Ensure all services are up and the cluster is formed.

Additional Information

In certain circumstances, these components may have already been moved, but you still receive the alerts.
This has been seen to happen when stale components exist in the NSX-T database (Corfu).
These stale entries may exist due to an incorrect removal/replacement of the components in the past.
The following API calls can be used to find the compute manager used for each component (a short cross-referencing sketch follows the list below):

  • List all CMs and their IDs:
    • /nsxapi/api/v1/fabric/compute-managers
  • List all managers and the CM used for each:
    • /nsxapi/api/v1/cluster/nodes/deployments
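
To cross-reference the two lists, the following minimal sketch (same Python 3/requests assumptions as above) maps each registered compute manager ID to its display name and then prints the compute manager recorded for each manager node deployment. It uses the public /api/v1/ paths, which return the same data as the internal /nsxapi/ paths listed above:

    # Minimal sketch, same assumptions as above. Flags any manager node
    # deployment whose vc_id no longer matches a registered compute manager.
    import requests

    NSX_MGR = "nsx-mgr.example.com"  # placeholder NSX Manager FQDN/IP
    AUTH = ("admin", "password")     # placeholder credentials

    def get_results(path):
        r = requests.get(f"https://{NSX_MGR}{path}", auth=AUTH, verify=False)
        r.raise_for_status()
        return r.json().get("results", [])

    cms = {cm["id"]: cm.get("display_name")
           for cm in get_results("/api/v1/fabric/compute-managers")}

    for node in get_results("/api/v1/cluster/nodes/deployments"):
        vc_id = node.get("deployment_config", {}).get("vc_id")
        # Note: the first manager in the cluster was deployed by OVF and
        # will not appear in this list.
        print(vc_id, "->", cms.get(vc_id, "<no matching compute manager>"))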


Note: As the first manager in an NSX cluster is deployed by OVF, it will not show in the list.

Note: You can deploy all NSX Manager nodes and edge appliances via OVF and then join them to the rest of the NSX installation. This results in none of these components being associated with a compute manager, avoiding this behavior in the future. See Form an NSX Manager Cluster Using the CLI and Install NSX Edge on ESXi Using the Command-Line OVF Tool for detailed instructions.

If the above API results show components on the new compute manager and none on the old compute manager, and you still receive the alerts, please open a support request and refer to this KB article.