Organizations often need to migrate networking between vSphere Standard Switches (vSS) and vSphere Distributed Switches (vDS) to centralize management or simplify host configurations. Without a validated procedure, migrating management interfaces (vmk0) and virtual machines in a production environment risks:
Temporary loss of host management.
Network isolation for production virtual machines.
vCenter disconnection.
The primary challenge during migration is maintaining a live network path while moving physical uplinks (vmnics) between different switch architectures. In a standard setup with two physical NICs (vmnic0 and vmnic1), moving both simultaneously will cause an immediate outage.
Successful migration requires a staggered approach: keep one NIC on the source switch and move the other to the destination switch during the transition.
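Before starting, it helps to confirm which physical NIC you can afford to move first. The minimal pyVmomi sketch below (the vCenter address, credentials, and host name are placeholders) lists each vmnic on a host and the standard or distributed switch that currently claims it:

```python
# Minimal pyVmomi sketch: list each physical NIC and the switch it currently backs.
# Hostnames and credentials are placeholders; adjust for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; use valid certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

def get_obj(content, vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, vimtype, True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

host = get_obj(content, [vim.HostSystem], "esxi01.example.com")
net = host.config.network

# Map each pnic key (e.g. key-vim.host.PhysicalNic-vmnic0) to the switch that claims it
claimed = {}
for vss in net.vswitch:                          # standard switches
    for key in vss.pnic:
        claimed[key] = f"vSS {vss.name}"
for proxy in net.proxySwitch:                    # distributed switches (host proxy switches)
    for key in proxy.pnic:
        claimed[key] = f"vDS {proxy.dvsName}"

for pnic in net.pnic:
    speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else "down"
    print(pnic.device, speed, claimed.get(pnic.key, "unassigned"))

Disconnect(si)
```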
Note: Mixed-speed teaming issues: combining 1GbE and 10GbE NICs in the same Active/Active team can lead to unpredictable load balancing, and the 1GbE link becomes a bottleneck if the 10GbE link fails.
Migrating from a vSphere Standard Switch (vSS) to a vSphere Distributed Switch (vDS) is a standard procedure for centralizing network management. Moving the vMotion service to its own TCP/IP stack at the same time is a best practice for performance and routing flexibility.
Note: Put the host in Maintenance Mode.
Phase 1: Create and Prepare the Distributed Switch
You must first create the logical structure of the vDS before bringing the hosts into it.
Create the vDS: Navigate to Networking > Right-click Datacenter > Distributed Switch > New Distributed Switch.
Choose the version that matches your ESXi hosts (e.g., 8.0).
Configure the number of uplinks (match the physical NICs on your hosts).
Create Distributed Port Groups: Create port groups on the vDS that mirror your vSS (e.g., Management-vDS, vMotion-vDS, VM-Network-vDS).
Crucial: Ensure the VLAN IDs match your current vSS port group settings exactly.
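If you prefer to script this phase, the pyVmomi sketch below creates a vDS with two uplinks and one mirrored port group. The switch name, port group name, VLAN ID, version string, and datacenter lookup are placeholder assumptions; adjust them to your environment.

```python
# pyVmomi sketch of Phase 1: create a vDS and a mirrored port group.
# Names, VLAN IDs, and the version string are placeholders for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
from pyVim.task import WaitForTask

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]      # assumes the first datacenter

# 1. Create the distributed switch with two uplinks
dvs_cfg = vim.DistributedVirtualSwitch.ConfigSpec()
dvs_cfg.name = "vDS-Prod"
dvs_cfg.uplinkPortPolicy = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
    uplinkPortName=["Uplink1", "Uplink2"])          # match the number of physical NICs

create_spec = vim.DistributedVirtualSwitch.CreateSpec(configSpec=dvs_cfg)
create_spec.productInfo = vim.dvs.ProductSpec(version="8.0.0")  # match your ESXi hosts

task = datacenter.networkFolder.CreateDVS_Task(create_spec)
WaitForTask(task)
dvs = task.info.result

# 2. Create a distributed port group that mirrors the vSS VLAN settings
pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
pg_spec.name = "Management-vDS"
pg_spec.type = vim.dvs.DistributedVirtualPortgroup.PortgroupType.earlyBinding
pg_spec.numPorts = 32
pg_spec.defaultPortConfig = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
pg_spec.defaultPortConfig.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
    vlanId=10, inherited=False)                     # must match the existing vSS VLAN
WaitForTask(dvs.AddDVPortgroup_Task([pg_spec]))

Disconnect(si)
```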
Phase 2: Add Host and Migrate Networking (Zero Downtime)
To avoid downtime, you will migrate one physical uplink at a time.
Launch the Wizard: Right-click your new vDS > Add and Manage Hosts > Add hosts.
Select Hosts: Choose the ESXi host(s) you want to migrate.
Manage Physical Adapters: Select one "redundant" physical NIC (e.g., vmnic1) that is currently on the vSS.
Assign Uplink: Assign it to an Uplink on the vDS.
Result: The host now has one path on the vSS (vmnic0) and one on the vDS (vmnic1).
Manage VMkernel Adapters: Select vmk0 (Management). Click Assign port group and select the new Management-vDS port group.
Skip vMotion for now (we will handle the TCP/IP stack in the next phase).
Migrate VM Networking: Select the VMs and assign them to the new vDS port groups (a scripted alternative for this step is sketched after this phase).
Finish: Complete the wizard. vCenter will move the Management traffic and VMs to the vDS through vmnic1.
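The wizard performs the uplink and VMkernel moves atomically (and benefits from network rollback protection), so it is the safest path for those steps. For the VM networking portion, a pyVmomi sketch along these lines reassigns a VM's network adapters to a distributed port group; the VM name, port group name, and vCenter details are placeholders:

```python
# pyVmomi sketch: reassign a VM's network adapters to a distributed port group.
# VM name, port group name, and vCenter details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
from pyVim.task import WaitForTask

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def get_obj(content, vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, vimtype, True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

vm = get_obj(content, [vim.VirtualMachine], "app-vm-01")
pg = get_obj(content, [vim.dvs.DistributedVirtualPortgroup], "VM-Network-vDS")

device_changes = []
for device in vm.config.hardware.device:
    if isinstance(device, vim.vm.device.VirtualEthernetCard):
        edit = vim.vm.device.VirtualDeviceSpec()
        edit.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
        edit.device = device
        # Point the vNIC at the dvPortgroup via a DVS port connection
        backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo()
        backing.port = vim.dvs.PortConnection(
            portgroupKey=pg.key,
            switchUuid=pg.config.distributedVirtualSwitch.uuid)
        edit.device.backing = backing
        device_changes.append(edit)

WaitForTask(vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=device_changes)))
Disconnect(si)
```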
Phase 3: Configure vMotion with Dedicated TCP/IP Stack
By default, vMotion uses the "Default" TCP/IP stack. To move it to the dedicated vMotion Stack (which provides a separate routing table and heap), follow these steps:
[!NOTE] You cannot "edit" the TCP/IP stack of an existing VMkernel adapter in the UI. You must delete the old one and create a new one.
Remove the old vMotion VMkernel: Go to Host > Configure > Networking > VMkernel adapters.
Note down the IP and Subnet of the old vMotion adapter (usually vmk1), then Delete it.
Create the New vMotion Adapter:
Click Add Networking > VMkernel Network Adapter.
Select the Distributed Port Group you created for vMotion.
The Key Step: In "Port properties," find the TCP/IP stack dropdown and select vMotion stack.
Set the IP and subnet mask. You can now also set a dedicated gateway specifically for vMotion traffic.
Finish: The new vmk is now active on the isolated vMotion stack.
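For reference, these steps map onto the vSphere API roughly as follows. The pyVmomi sketch below removes the old adapter and recreates it on the vmotion netstack, including the optional dedicated gateway; the host, adapter, and port group names and the addresses are placeholders, and it assumes the vMotion distributed port group already has a live uplink.

```python
# pyVmomi sketch of Phase 3: recreate the vMotion VMkernel adapter on the
# dedicated "vmotion" TCP/IP stack. Host, vmk, and port group names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def get_obj(content, vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, vimtype, True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

host = get_obj(content, [vim.HostSystem], "esxi01.example.com")
pg = get_obj(content, [vim.dvs.DistributedVirtualPortgroup], "vMotion-vDS")
net_sys = host.configManager.networkSystem

# 1. Record the old adapter's IP settings, then delete it
old = next(v for v in host.config.network.vnic if v.device == "vmk1")
ip, mask = old.spec.ip.ipAddress, old.spec.ip.subnetMask
net_sys.RemoveVirtualNic("vmk1")

# 2. Recreate it on the vMotion netstack, bound to the vMotion dvPortgroup
spec = vim.host.VirtualNic.Specification()
spec.netStackInstanceKey = "vmotion"                      # the dedicated vMotion stack
spec.ip = vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask=mask)
spec.distributedVirtualPort = vim.dvs.PortConnection(
    switchUuid=pg.config.distributedVirtualSwitch.uuid,
    portgroupKey=pg.key)
# Optional: dedicated default gateway for the vMotion stack (placeholder address)
spec.ipRouteSpec = vim.host.VirtualNic.IpRouteSpec(
    ipRouteConfig=vim.host.IpRouteConfig(defaultGateway="10.10.20.1"))

new_vmk = net_sys.AddVirtualNic("", spec)   # empty portgroup name when binding to a dvPort
print("Created", new_vmk, "on the vMotion stack")
Disconnect(si)
```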
Phase 4: Cleanup and Redundancy
Migrate the Last Uplink: Now that the host is communicating via vDS, go to the vDS > Add and Manage Hosts > Manage host networking.
Assign the remaining physical NIC (e.g., vmnic0) to the vDS.
Verify: Perform a test vMotion.
Verify management connectivity.
Check that the old Standard Switch (vSwitch0) is now empty and can be deleted.
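If you want to script that last check, a hedged pyVmomi sketch such as the one below confirms that vSwitch0 no longer owns uplinks or VMkernel adapters before removing it (host and vCenter names are placeholders):

```python
# pyVmomi sketch of the Phase 4 cleanup: remove vSwitch0 only when it is empty.
# Host name and vCenter details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")
view.DestroyView()

net = host.config.network
vss = next((s for s in net.vswitch if s.name == "vSwitch0"), None)

if vss is None:
    print("vSwitch0 already removed")
else:
    # Port groups still defined on vSwitch0
    leftover_pgs = [pg.spec.name for pg in net.portgroup if pg.spec.vswitchName == "vSwitch0"]
    # VMkernel adapters still bound to one of those port groups
    leftover_vmks = [v.device for v in net.vnic if v.portgroup in leftover_pgs]

    if vss.pnic or leftover_vmks:
        print("vSwitch0 still in use:", vss.pnic, leftover_vmks)
    else:
        # vCenter rejects the removal with a ResourceInUse fault if anything still depends on it
        host.configManager.networkSystem.RemoveVirtualSwitch("vSwitch0")
        print("vSwitch0 removed")

Disconnect(si)
```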
vMotion Stack Rule: Once a host has a VMkernel on the vMotion stack, vCenter will only use that stack for vMotion. Any adapters left on the "Default" stack with the vMotion service enabled will be ignored.
Rollback: If you lose connectivity during Phase 2, ESXi has a "Network Rollback" feature that should automatically revert the changes if it cannot reach vCenter for 30–60 seconds.
Steps to use the vMotion Stack with vDS
Navigate to the Host in the vSphere Client.
Go to Configure > Networking > VMkernel adapters.
Click Add Networking.
Select VMkernel Network Adapter and click Next.
On the Select target device page, choose Select an existing network.
Click Browse and select the Distributed Port Group you created on your vDS.
Now, on the Port properties page, the TCP/IP stack dropdown will be visible. Select vMotion stack.
[!NOTE]
Once you select the vMotion stack, the service checkboxes (Management, vMotion, etc.) are greyed out because this stack is hard-coded for vMotion traffic only.
Complete the IP configuration.
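To confirm the result from the API, a short pyVmomi sketch (placeholder host and vCenter names) can list each VMkernel adapter with its TCP/IP stack and show each stack's default gateway:

```python
# pyVmomi sketch: verify which TCP/IP stack each VMkernel adapter uses,
# plus each stack's default gateway. Host and vCenter names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")
view.DestroyView()

print("VMkernel adapters:")
for vnic in host.config.network.vnic:
    stack = vnic.spec.netStackInstanceKey or "defaultTcpipStack"
    print(" ", vnic.device, vnic.spec.ip.ipAddress, "->", stack)

print("TCP/IP stacks:")
for stack in host.config.network.netStackInstance:
    gw = stack.ipRouteConfig.defaultGateway if stack.ipRouteConfig else None
    print(" ", stack.key, "gateway:", gw)

Disconnect(si)
```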
Why use the dedicated vMotion Stack?
Whether you end up on a Standard Switch (vSS) or a Distributed Switch (vDS), using the dedicated stack provides two major advantages:
Dedicated Default Gateway: You can route vMotion traffic across different subnets without needing to add static routes to the host's management routing table.
Traffic Isolation: It uses a separate set of buffers and sockets, preventing high-bandwidth vMotion tasks from impacting management or storage traffic stability.
Important: One Stack Per Host
If you already have a VMkernel adapter on the Default stack with the "vMotion" service checked, ESXi will prioritize that until you create an adapter on the vMotion stack. Once the vMotion stack adapter is active, the vMotion service on the default stack is automatically deactivated for future migrations.
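If you want to verify which adapter the host will actually use, you can query the host's virtual NIC manager. The pyVmomi sketch below is an assumption-laden example (placeholder host and vCenter names); selectedVnic lists the adapter keys currently chosen for the vmotion NIC type:

```python
# pyVmomi sketch: show which VMkernel adapter(s) the host has selected for vMotion.
# Host and vCenter names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")
view.DestroyView()

# QueryNetConfig returns the candidate and selected VMkernel adapters for a NIC type
cfg = host.configManager.virtualNicManager.QueryNetConfig("vmotion")
print("Candidates:", [c.device for c in cfg.candidateVnic])
print("Selected:  ", cfg.selectedVnic)    # keys of the adapters actually used for vMotion

Disconnect(si)
```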