Host loses management network connection after moving to datacenter level

Article ID: 406831


Products

VMware vSphere ESXi

Issue/Introduction

  • The host is configured with NSX and uses an N-VDS for networking.
  • The management VMkernel interface (vmk0) is hosted on the N-VDS.
  • As part of NSX reprepare troubleshooting, the host was moved from the cluster to the datacenter level.
  • After being moved out of the cluster, vmk0 lost network connectivity.

Environment

VMware NSX-T Data Center

VMware vSphere ESXi

Cause

The host lost management network connectivity because the management VMkernel interface (vmk0) was configured on an N-VDS and the uninstall network mapping was either not defined or pointed to a port group with incorrect configuration.

When a host is moved out of an NSX-managed cluster or NSX is uninstalled/re-prepared, the uninstall network mapping determines where VMkernel interfaces are reassigned. If this mapping is missing or incorrect, the VMkernel interface is not migrated to a valid standard or non-NSX port group, leaving the host without a functional management network.
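
To confirm that vmk0 is attached to the N-VDS rather than a standard port group, list the VMkernel NICs from the ESXi Shell. On an NSX-prepared host, the port group column for vmk0 typically shows an opaque network (logical switch) identifier instead of a standard port group name:
esxcfg-vmknic -l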

Resolution

  1. Verify the uninstall network mappings before moving the host out of the cluster.
  2. Confirm that the assigned port group has the correct network configuration.

If the issue has already occurred and the host has lost management network connectivity, recreate the management VMkernel interface (vmk0) on a standard vSwitch connected to a valid physical uplink and VLAN. This restores management connectivity.

Prerequisites:

  • Access the ESXi DCUI console physically or via remote KVM/iLO/DRAC.

  • Enable ESXi Shell or SSH from DCUI before proceeding.

  • Ensure you have console access throughout the procedure to prevent loss of access.

1. Verify and Remove NSX Components
esxcli software vib list | grep -i nsx

If NSX VIBs still exist, remove them manually:
nsxcli -c del nsx

Confirm VIBs are successfully removed:
esxcli software vib list | grep -i nsx

2. Check Current Network Configuration
List VMkernel interfaces:
esxcli network ip interface list
OR
esxcfg-vmknic -l

View vSwitch configuration:
esxcfg-vswitch -l

3. Remove vmk0
esxcli network ip interface remove --interface-name=vmk0

4. Free Physical Uplink(s) if Needed
If the desired vmnic is still connected to a Distributed vSwitch, remove it first:
esxcfg-vswitch -Q <vmnic#> -V <dvPort_ID_of_vmnic> <dvSwitch>
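
The dvPort ID used by the uplink is shown in the esxcfg-vswitch -l output. For example, assuming the uplink is vmnic0 on dvPort 16 of a distributed switch named Dswitch-NSX (hypothetical values, substitute your own):
esxcfg-vswitch -Q vmnic0 -V 16 Dswitch-NSX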

5. Create/Identify a Standard Switch
If an existing standard switch is not available, create one:
esxcli network vswitch standard add --vswitch-name=<vSwitchname>
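
For example, to create a standard switch named vSwitch0 (an example name, adjust as needed):
esxcli network vswitch standard add --vswitch-name=vSwitch0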

6. Create a Portgroup for Management
esxcli network vswitch standard portgroup add --portgroup-name=<portgroupname> --vswitch-name=<vSwitchname>

If VLAN tagging is required, set the VLAN ID on the port group:
esxcli network vswitch standard portgroup set --portgroup-name=<portgroupname> --vlan-id=<id>
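
For example, assuming the vSwitch0 switch from the previous step, a port group named MgmtPG, and management VLAN 10 (all hypothetical values):
esxcli network vswitch standard portgroup add --portgroup-name=MgmtPG --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=MgmtPG --vlan-id=10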

7. Recreate the Management VMkernel Adapter
esxcli network ip interface add --interface-name=vmk0 --portgroup-name=<portgroupname>

Assign the management IP:
esxcli network ip interface ipv4 set --interface-name=vmk0 --ipv4=<ipaddress> --netmask=<netmask> --gateway=<gatewayip> --type=static
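
For example, assuming the hypothetical MgmtPG port group created above and an example management address of 192.168.10.5/255.255.255.0 with gateway 192.168.10.1 (substitute the host's real values):
esxcli network ip interface add --interface-name=vmk0 --portgroup-name=MgmtPG
esxcli network ip interface ipv4 set --interface-name=vmk0 --ipv4=192.168.10.5 --netmask=255.255.255.0 --gateway=192.168.10.1 --type=static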

8. Configure Routing
View routes:
esxcfg-route -l

Add default route if missing:
esxcfg-route -a default <default-gateway-ip>
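
For example, using the same hypothetical gateway address from step 7:
esxcfg-route -a default 192.168.10.1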

9. Attach Physical Uplink
esxcli network vswitch standard uplink add --uplink-name=<vmnic#> --vswitch-name=<vSwitchname>

Set uplink as active:
esxcli network vswitch standard policy failover set -a <vmnic#> -v <vSwitchname>
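
For example, assuming uplink vmnic0 and the vSwitch0 switch created earlier (hypothetical values):
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch0
esxcli network vswitch standard policy failover set -a vmnic0 -v vSwitch0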

10. Validate Connectivity
Check routing and connectivity:
vmkping -I vmk0 <gateway-ip>
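
For example, using the hypothetical gateway address from step 7:
vmkping -I vmk0 192.168.10.1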

To monitor network traffic:
esxtop
(Press 'n' for network view)

11. Reconfigure NSX on the host
Move the ESXi host back to the cluster to reconfigure with NSX.

Note: If the N-VDS to C-VDS migration has already been completed, move vmk0 back to the VDS and reattach the uplink to the VDS before preparing the host for NSX.

 

Additional Information