After performing a V2T migration, vDS portgroups exist named nsx.LogicalSwitch:UUID


Article ID: 318292

Updated On:

Products

VMware NSX Networking

Issue/Introduction

Symptoms:
  • vSphere 7.0
  • An NSX Data Center for vSphere environment has been migrated to NSX-T Data Center using the Migration Coordinator
  • As part of the migration a vDS portgroup was migrated to a segment
  • Post migration there is also a vDS portgroup called "nsx.LogicalSwitch:<UUID>"
  • Searching for the UUID in the NSX-T UI shows that it maps to the segment
  • As a result, there are three portgroups:
    • the original vDS portgroup that existed before migration
    • the new NSX segment portgroup
    • the portgroup "nsx.LogicalSwitch:<UUID>"
  • Some VMs on the segment connect to the new segment portgroup as expected, but others connect to the "nsx.LogicalSwitch:<UUID>" portgroup
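To check a suspect portgroup against the NSX-T UI, the UUID can be pulled out of the portgroup name and pasted into the NSX-T search. A minimal illustrative helper (the function name is ours, not part of any VMware tooling; it only assumes the nsx.LogicalSwitch:<UUID> name format described above):

```python
from typing import Optional

def segment_uuid(portgroup_name: str) -> Optional[str]:
    """Return the segment UUID from a leftover portgroup name, or None.

    Leftover portgroups are named "nsx.LogicalSwitch:<UUID>", so everything
    after the prefix is the UUID to search for in the NSX-T UI.
    """
    prefix = "nsx.LogicalSwitch:"
    if portgroup_name.startswith(prefix):
        return portgroup_name[len(prefix):]
    return None
```

For example, `segment_uuid("nsx.LogicalSwitch:6f2e9a10-1234-4abc-9def-0123456789ab")` returns the UUID, while an ordinary portgroup name returns None.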


Environment

VMware NSX-T Data Center
VMware NSX-T Data Center 3.x

Cause

Due to a software issue on the ESXi host side, after an NSX-V to NSX-T migration there may be a problem with the vDS portgroup backend mappings. This results in portgroups named nsx.LogicalSwitch:<UUID>.

Resolution

This issue is resolved in VMware ESXi 7.0 Update 2.

Workaround:
To work around the issue, complete the following steps for each problematic portgroup:
  • Identify a problem portgroup to be rectified, named nsx.LogicalSwitch:<UUID>
  • In the NSX-T UI, search for the UUID and confirm the name of the segment it corresponds to
  • From the vSphere Client, delete the original, now unused vDS portgroup that the VMs on this segment were migrated from
  • In the vSphere Client networking view, select the "nsx.LogicalSwitch:<UUID>" portgroup and select the VMs tab
  • For each VM, right-click, select Edit Settings, and click OK (even though no change is made, this triggers a refresh)
  • Once all VMs have been refreshed, the "nsx.LogicalSwitch:<UUID>" portgroup should be removed automatically

Note: Snapshots on affected VMs will interfere with this workaround. Any snapshots will need to be consolidated for affected VMs.
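Where many VMs are affected, the per-VM Edit Settings refresh can be scripted: issuing a reconfigure task with an empty ConfigSpec is the API equivalent of clicking Edit Settings and OK with no changes. The following is a sketch only, assuming pyVmomi is installed and using placeholder vCenter credentials (none of these details come from this article):

```python
import re

# Leftover portgroups are named "nsx.LogicalSwitch:<UUID>".
LEFTOVER_PG = re.compile(r"^nsx\.LogicalSwitch:[0-9a-f-]{36}$")

def refresh_vms_on_leftover_portgroups(host: str, user: str, password: str) -> None:
    """No-op reconfigure of every VM on each leftover portgroup (sketch)."""
    # pyVmomi imports are local so the regex above works without the library.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use; verify certs in production
    si = SmartConnect(host=host, user=user, pwd=password, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
        for pg in view.view:
            if not LEFTOVER_PG.match(pg.name):
                continue
            for vm in pg.vm:
                if vm.snapshot is not None:
                    # Per the note above, snapshots block the workaround.
                    print(f"Skipping {vm.name}: consolidate snapshots first")
                    continue
                # Empty ConfigSpec = the no-op Edit Settings / OK refresh.
                WaitForTask(vm.ReconfigVM_Task(vim.vm.ConfigSpec()))
        view.DestroyView()
    finally:
        Disconnect(si)
```

Usage would be `refresh_vms_on_leftover_portgroups("vcenter.example.com", "administrator@vsphere.local", "***")`; afterwards, confirm in the vSphere Client that the nsx.LogicalSwitch:<UUID> portgroups have been removed.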