This is a known issue that requires the workaround described in this article.
Workaround:
You can either reinstall the ESXi host from scratch, or create a new vSwitch and a new management VMkernel interface.
In some cases, creating a new vSwitch will not work; if so, reinstalling the host is the remaining option.
See the KB article Configuring vSwitch or vNetwork Distributed Switch from the command line in ESXi/ESX for steps on creating the vSwitch via the CLI.
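As a rough sketch only (vSwitch1, vmnic1, Management2, vmk1, and the IP values below are placeholders, not values from this article), the new vSwitch and management VMkernel interface could be created from the ESXi Shell along these lines:

# Create a new standard vSwitch and attach a free physical uplink
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1

# Create a port group for management traffic
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=Management2

# Create a VMkernel interface on the port group and give it a static address
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Management2
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.1.10 --netmask=255.255.255.0 --type=static

# Tag the new interface for management traffic
esxcli network ip interface tag add -i vmk1 -t Management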
Process to remove the stale N-VDS before upgrading
The below process has been confirmed in a VMware lab environment to be non-disruptive.
However, VMware always recommends that customers make changes of this nature during a maintenance window.
Additionally, VMware recommends that customers take full backups prior to any maintenance window.
List the DVS configuration on the ESXi host:
net-dvs -l | less
Note: You may have to page down several times to reach the NSX DVS switch details.
Look for output similar to the following. Note: This example shows the NSX DVS configuration as seen in support bundles:
-----SNIP-----
switch cb cf 7b 58 37 50 43 b2-8b f1 95 b3 88 0f 0b a9 (vswitch)
        max ports: 10752
        global properties:
                com.vmware.common.opaqueDvs = true, propType = CONFIG
                com.vmware.common.alias = nvds-overlay-acdc, propType = CONFIG
                com.vmware.common.uplinkPorts:
                        uplink-1, uplink-2
                        propType = CONFIG
                com.vmware.common.portset.mtu = 9000 , propType = CONFIG
                com.vmware.etherswitch.cdp = LLDP, listen
                        propType = CONFIG
                com.vmware.common.respools.version = version3 , propType = CONFIG
-----END SNIP-----
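Rather than paging through the full listing, you can filter for the opaque-switch properties directly; this is a convenience suggestion rather than part of the documented procedure, and the pattern may need adjusting to your output:

net-dvs -l | grep -E "opaqueDvs|alias"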
Delete the stale N-VDS by its alias (nvds-overlay-acdc in this example):
net-dvs -d nvds-overlay-acdc
Persist the removal so that it survives a reboot:
net-dvs --persist
Note: If you forget this step, the stale N-VDS will reappear after a reboot.
Verify that the stale N-VDS no longer appears:
net-dvs -l | grep -i nsx
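Purely as an illustration of the sequence above (the alias is the example value from this article and must be confirmed against your own net-dvs -l output first), the removal and verification could be wrapped in a small shell sequence on the host:

# Alias of the stale N-VDS; confirm with "net-dvs -l" before running
NVDS_ALIAS="nvds-overlay-acdc"

if net-dvs -l | grep -q "alias = ${NVDS_ALIAS}"; then
    net-dvs -d "${NVDS_ALIAS}"      # delete the stale N-VDS
    net-dvs --persist               # persist, or it returns after reboot
    net-dvs -l | grep -i nsx        # should return nothing once removed
else
    echo "Switch ${NVDS_ALIAS} not found; nothing to remove."
fi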
Note: These steps should not be executed without VMware Support actively engaged. As noted above, the process is non-disruptive in lab environments, but VMware always recommends making changes of this nature during a maintenance window and under any business change controls already in effect, with full backups taken prior to the maintenance activity, to ensure that unexpected events do not have an unexpected impact on production.