Can not delete in use compute manager <UUID> due to NSX deployed VM(s) [[<NSX component names>]]
Compute Manager delete failed: Compute manager <UUID> is in use. Please delete nsx managed VM(s) [<NSX component names>]
VMware NSX-T 3.x and NSX 4.x
As per the administration guide, any components (NSX Managers, Edge Nodes, NSX Intelligence, Service Virtual Machines (SVMs)) deployed by a compute manager need to be disassociated from the compute manager before the compute manager can be removed from NSX: Add a Compute Manager - Results
The alert is a warning that this compute manager still has some components deployed via it and therefore cannot be removed.
You need to move the components to a new compute manager; you should then be able to remove the compute manager.
You can identify which compute manager was used to deploy NSX Manager nodes via the vc_id value returned from the GET https://<nsx-mgr-IP/FQDN>/api/v1/cluster/nodes/deployments API call.
Sample (truncated) output:
"deployment_config": {
"vc_id": "2fd34a13-####-####-####-fcc5d41f7db4",
"compute_id": "<domain-id>",
"storage_id": "<datastore-id>",
"host_id": "<host-id>",
"management_network_id": "<dvportgroup-id>",
In this sample output, the vc_id value is noted as 2fd34a13-####-####-####-fcc5d41f7db4. This is the UUID of the compute manager (vCenter) the node was deployed through, and can be matched against the compute manager UUID reported in the alert, or against the compute managers listed on the System > Fabric > Compute Managers page.
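If you need to check this across several nodes, the call can be scripted. The following is a minimal sketch, assuming Python with the requests module and basic authentication; the host name and credentials are placeholders, and the hostname field used as a label is an assumption beyond the sample above.

# Minimal sketch: print the vc_id recorded for each NSX Manager node deployment.
# NSX_HOST, USER and PASS are placeholders for your environment.
import requests

NSX_HOST = "nsx-mgr.example.com"    # hypothetical FQDN
USER, PASS = ("admin", "password")  # placeholder credentials

resp = requests.get(
    f"https://{NSX_HOST}/api/v1/cluster/nodes/deployments",
    auth=(USER, PASS),
    verify=False,  # only for labs with self-signed certificates
)
resp.raise_for_status()

for node in resp.json().get("results", []):
    cfg = node.get("deployment_config", {})
    # "hostname" is assumed to be present in the deployment config; adjust if needed.
    print(cfg.get("hostname", "<unknown>"), "->", cfg.get("vc_id", "<no vc_id>"))

Each printed vc_id can then be compared with the compute manager UUID from the alert.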
You can identify which compute manager was used to deploy NSX Edge nodes via the vc_id value returned from the GET https://<nsx-mgr-IP/FQDN>/api/v1/transport-nodes/ API call.
This API call returns all of the transport nodes present; only Edge node VMs deployed via a vCenter will contain a vc_id and can be checked. Alternatively, each individual Edge node can be checked via its specific Edge ID at https://<nsx-mgr-IP/FQDN>/api/v1/transport-nodes/<Edge UUID>.
Sample (truncated) output:
"node_deployment_info": {
"deployment_type": "VIRTUAL_MACHINE",
"deployment_config": {
"vm_deployment_config": {
"vc_id": "2fd34a13-####-####-####-fcc5d41f7db4",
"compute_id": "<domain-id>",
"storage_id": "<datastore-id>",
"management_network_id": "<dvportgroup-id>",
In this sample output, the vc_id value is noted as 2fd34a13-####-####-####-fcc5d41f7db4. As with the NSX Manager nodes above, this can be matched against the compute manager UUID reported in the alert, or against the compute managers listed on the System > Fabric > Compute Managers page.
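The same check can be scripted for Edge transport nodes. Below is a minimal sketch under the same placeholder assumptions as above; it only prints entries that carry a vc_id, i.e. Edge VMs deployed via a vCenter.

# Minimal sketch: print the vc_id recorded for each VC-deployed Edge VM.
import requests

NSX_HOST = "nsx-mgr.example.com"    # hypothetical FQDN
USER, PASS = ("admin", "password")  # placeholder credentials

resp = requests.get(
    f"https://{NSX_HOST}/api/v1/transport-nodes",
    auth=(USER, PASS),
    verify=False,  # only for labs with self-signed certificates
)
resp.raise_for_status()

for tn in resp.json().get("results", []):
    vm_cfg = (tn.get("node_deployment_info", {})
                .get("deployment_config", {})
                .get("vm_deployment_config", {}))
    vc_id = vm_cfg.get("vc_id")
    if vc_id:  # only Edge VMs deployed via a vCenter carry a vc_id
        print(tn.get("display_name", tn.get("id")), "->", vc_id)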
Note: Before performing the above steps, ensure you have up-to-date backups, and confirm the cluster is in a healthy state before each component is removed.
To check the NSX Manager cluster status, run the following as the admin user:
get cluster status
Ensure all services are up and the cluster is formed.
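If you prefer to script this verification, cluster status is also exposed over the REST API (GET https://<nsx-mgr-IP/FQDN>/api/v1/cluster/status). The following is a minimal sketch under the same placeholder assumptions as above; the response field names are taken from the ClusterStatus schema and may vary between releases.

# Minimal sketch: confirm overall cluster health via the REST API.
import requests

NSX_HOST = "nsx-mgr.example.com"    # hypothetical FQDN
USER, PASS = ("admin", "password")  # placeholder credentials

resp = requests.get(
    f"https://{NSX_HOST}/api/v1/cluster/status",
    auth=(USER, PASS),
    verify=False,  # only for labs with self-signed certificates
)
resp.raise_for_status()
status = resp.json()
# Field names assumed from the ClusterStatus schema; verify against your release.
print("management cluster:", status.get("mgmt_cluster_status", {}).get("status"))
print("control cluster:", status.get("control_cluster_status", {}).get("status"))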
In certain circumstances, these components may already have been moved, but the alerts persist.
This has been seen to happen when stale entries for the components exist in the NSX-T database (Corfu).
These stale entries may exist due to an incorrect removal or replacement of the components in the past.
The APIs described above can be used to confirm the compute manager currently associated with each component.
Note: As the first manager in an NSX cluster is deployed via OVF, it will not show in these API results.
Note: You can deploy all NSX Manager nodes and Edge appliances via OVF and then join them to the rest of the NSX installation. This results in none of these components being associated with a compute manager, avoiding this behavior in the future. See Form an NSX Manager Cluster Using the CLI and Install NSX Edge on ESXi Using the Command-Line OVF Tool for detailed instructions.
If the above API results show components on the new compute manager and none on the old compute manager, and you still receive the alerts, please open a support request and refer to this KB article.
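As a final confirmation before opening a support request, the two API calls above can be combined into one check. The following is a minimal sketch under the same placeholder assumptions; OLD_CM_UUID stands in for the compute manager UUID named in the alert, and vm_id is assumed from the deployment request schema.

# Minimal sketch: flag any NSX Manager or Edge deployment still referencing
# the compute manager named in the alert. All values below are placeholders.
import requests

NSX_HOST = "nsx-mgr.example.com"
USER, PASS = ("admin", "password")
OLD_CM_UUID = "2fd34a13-xxxx-xxxx-xxxx-fcc5d41f7db4"  # UUID from the alert

def get_results(path):
    r = requests.get(f"https://{NSX_HOST}{path}", auth=(USER, PASS), verify=False)
    r.raise_for_status()
    return r.json().get("results", [])

stale = []
for node in get_results("/api/v1/cluster/nodes/deployments"):
    if node.get("deployment_config", {}).get("vc_id") == OLD_CM_UUID:
        # "vm_id" is an assumed field name; adjust for your release.
        stale.append(("manager", node.get("vm_id", "<unknown>")))

for tn in get_results("/api/v1/transport-nodes"):
    vm_cfg = (tn.get("node_deployment_info", {})
                .get("deployment_config", {})
                .get("vm_deployment_config", {}))
    if vm_cfg.get("vc_id") == OLD_CM_UUID:
        stale.append(("edge", tn.get("display_name", tn.get("id"))))

if stale:
    print("Components still referencing the old compute manager:", stale)
else:
    print("No components reference the old compute manager.")

If the script reports no components on the old compute manager and the alerts persist, that matches the stale-entry scenario described above.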