
Best Practice to delete additional compute manager from NSX-T


Article ID: 317515


Updated On:

Products

VMware NSX

Issue/Introduction

  • NSX-T has two compute managers registered, for example compute manager A and compute manager B.
  • The use case is to delete/remove one of the compute managers, in this example compute manager B.
  • Attempting to delete compute manager B throws the below error:

Compute manager <compute-manager id> is in use. Please delete nsx managed VM(s) [mgrnode2,mgrnode3] and then retry deleting compute manager.

  • This is expected behavior when manager nodes 2 and 3 were auto-deployed from the NSX-T UI (see the API sketch below).
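
For context, the same deletion attempt can be reproduced through the NSX-T Manager REST API. The Python sketch below is a minimal, unofficial example: the manager address, credentials and the display name "compute-manager-B" are placeholders, and the /fabric/compute-managers endpoint paths should be verified against your NSX-T version.

# Minimal sketch (not an official VMware script) of where the error surfaces when
# driving the same operation through the NSX-T Manager REST API.
import requests
from requests.auth import HTTPBasicAuth

NSX_MANAGER = "nsx-mgr.example.com"                # placeholder NSX Manager FQDN/IP
BASE = f"https://{NSX_MANAGER}/api/v1"

session = requests.Session()
session.auth = HTTPBasicAuth("admin", "password")  # placeholder credentials
session.verify = False                             # lab only; validate certificates in production

# List the registered compute managers and locate compute manager B by display name.
compute_managers = session.get(f"{BASE}/fabric/compute-managers").json()["results"]
cm_b = next((cm for cm in compute_managers if cm["display_name"] == "compute-manager-B"), None)

if cm_b is not None:
    # The DELETE call fails with the error quoted above while manager nodes 2 and 3
    # that were auto-deployed through this compute manager still exist.
    response = session.delete(f"{BASE}/fabric/compute-managers/{cm_b['id']}")
    print(response.status_code, response.text)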

NSX-T Manager deployment types

  • Manual deployment - deploying the manager node VMs from the vSphere UI as an OVA.
  • Auto deployment - deploying the manager nodes from the NSX-T UI.
  • Auto deployment is only possible for the 2nd and 3rd manager nodes; the first manager node is always deployed manually.
  • Once you access the first manager node's UI, you get the option to auto-deploy the 2nd and 3rd nodes: System --> Appliances --> Add NSX Appliance.

  • You can also verify the deployment type by checking System --> Appliances --> View Details (an API-level check is sketched below).
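
As an unofficial alternative to the UI check, the Python sketch below queries the cluster node VM deployments API; auto-deployed manager nodes appear in that list, while manually deployed (OVA) nodes do not. The manager address and credentials are placeholders, and the endpoint and field names (vm_id, deployment_config) are assumptions to confirm against your NSX-T version.

# Minimal sketch (unofficial): list auto-deployed manager nodes via the cluster
# node VM deployments API.
import requests
from requests.auth import HTTPBasicAuth

BASE = "https://nsx-mgr.example.com/api/v1"        # placeholder NSX Manager
session = requests.Session()
session.auth = HTTPBasicAuth("admin", "password")  # placeholder credentials
session.verify = False                             # lab only

deployments = session.get(f"{BASE}/cluster/nodes/deployments").json().get("results", [])

# Manually deployed (OVA) manager nodes do not appear in this list.
if not deployments:
    print("No auto-deployed manager nodes - all nodes appear to be manual (OVA) deployments.")

for d in deployments:
    print("Auto-deployed manager node:", d.get("vm_id"),
          "| deployed via compute manager:", d.get("deployment_config", {}).get("vc_id"))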

 

Environment

VMware NSX-T Data Center

Resolution

  • If all three manager nodes were manually deployed, you can directly delete the additional compute manager from the NSX-T UI (it should be in Disconnected status).
  • Only if the second and third manager nodes were auto-deployed do you need to follow the procedure below (an API-level sketch follows the steps):
  1. (If applicable) Uninstall NSX from the ESXi hosts managed by compute manager B.
  2. From the NSX Manager UI, delete the NSX Manager appliances (likely appliances 2 and 3) that were auto-deployed via NSX Manager. If a manager node was manually deployed, its delete button will be greyed out.
  3. Manually deploy (via OVA file) new NSX Manager virtual appliances and rejoin them to the NSX Manager cluster from the CLI.
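
For reference, the Python sketch below shows an API-side equivalent of step 2 followed by the compute manager deletion. The manager address, credentials, the compute manager display name and the ?action=delete call on the cluster node deployments API are placeholders/assumptions to verify against your NSX-T version; the supported path is the UI procedure described above.

# Minimal sketch (unofficial) of step 2 via the API, followed by deleting the
# compute manager once the auto-deployed manager nodes are gone.
import time
import requests
from requests.auth import HTTPBasicAuth

BASE = "https://nsx-mgr.example.com/api/v1"        # placeholder NSX Manager
session = requests.Session()
session.auth = HTTPBasicAuth("admin", "password")  # placeholder credentials
session.verify = False                             # lab only

# Step 2: request deletion of each auto-deployed manager node (appliances 2 and 3).
# The '?action=delete' call is an assumption to verify against your NSX-T version.
for d in session.get(f"{BASE}/cluster/nodes/deployments").json().get("results", []):
    node_id = d.get("vm_id")
    session.post(f"{BASE}/cluster/nodes/deployments/{node_id}?action=delete")
    print("Requested deletion of auto-deployed manager node", node_id)

# Wait until no auto-deployed manager nodes remain.
while session.get(f"{BASE}/cluster/nodes/deployments").json().get("results"):
    time.sleep(30)

# With the auto-deployed nodes removed, compute manager B can be deleted.
compute_managers = session.get(f"{BASE}/fabric/compute-managers").json()["results"]
cm_b = next((cm for cm in compute_managers if cm["display_name"] == "compute-manager-B"), None)
if cm_b is not None:
    response = session.delete(f"{BASE}/fabric/compute-managers/{cm_b['id']}")
    print("Compute manager deletion:", response.status_code)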

Additional Information

  • The CLI command "detach node <nodeId>" leaves stale entries in releases 3.2.0.1, 3.2.1, 3.2.2 and 4.0.0.1.
  • This is fixed in 3.2.2.1 and 4.0.1.