Only one uplink is being utilized for vSAN traffic while using Route Based on Physical NIC Load with an Active/Active vmnic Configuration.



Article ID: 394368


Updated On:

Products

VMware vSAN

Issue/Introduction

When using dedicated vmnics for vSAN traffic in an Active/Active configuration with "Route Based on Physical NIC Load", only one vmnic is active at any one time for vSAN traffic.

Environment

VMware vSAN (all versions)

Cause

Per the following document, "Route Based on Physical NIC Load" moves a VMkernel port to a different vmnic only when its current vmnic is under contention from multiple services. Because these vmnics are dedicated to vSAN, there is no competing traffic, so the load is never moved. This policy does not load balance vSAN traffic across multiple vmnics at the same time.

Route Based on Physical NIC Load 

The documentation below describes how Route Based on Physical NIC Load works:

 

Route Based on Physical NIC Load is based on Route Based on Originating Virtual Port, where the virtual switch monitors the actual load of the uplinks and takes steps to reduce load on overloaded uplinks. This load-balancing method is available only with a vSphere Distributed Switch, not on vSphere Standard Switches.

The distributed switch calculates uplinks for each VMkernel port by using the port ID and the number of uplinks in the NIC team. The distributed switch checks the uplinks every 30 seconds, and if the load exceeds 75 percent, the port ID of the VMkernel port with the highest I/O is moved to a different uplink.
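The selection and rebalancing logic described above can be sketched as follows. This is an illustrative simulation only, not ESXi code; the function names and data structures are hypothetical, but the rules (initial uplink derived from the port ID and team size, a check every 30 seconds, a move when an uplink exceeds 75 percent load) follow the description above.

```python
# Illustrative simulation of Route Based on Physical NIC Load.
# Hypothetical sketch, not ESXi source: initial placement comes from the
# port ID and the number of uplinks in the team; each 30-second check moves
# the highest-I/O port off any uplink loaded above 75%.

CHECK_INTERVAL_S = 30
LOAD_THRESHOLD = 0.75

def initial_uplink(port_id: int, num_uplinks: int) -> int:
    """Deterministic initial placement from the port ID and team size."""
    return port_id % num_uplinks

def rebalance(placements: dict, port_io: dict,
              uplink_capacity: float, num_uplinks: int) -> dict:
    """One 30-second check: move the busiest port off any overloaded uplink."""
    loads = [0.0] * num_uplinks
    for port, uplink in placements.items():
        loads[uplink] += port_io[port]
    new = dict(placements)
    for uplink, load in enumerate(loads):
        if load / uplink_capacity > LOAD_THRESHOLD:
            # Pick the highest-I/O port on the overloaded uplink ...
            busiest = max((p for p, u in new.items() if u == uplink),
                          key=lambda p: port_io[p])
            # ... and move it to the least-loaded uplink.
            target = min(range(num_uplinks), key=lambda u: loads[u])
            new[busiest] = target
    return new

# With a single dedicated vSAN VMkernel port there is no other port to
# contend with, and even if the port were moved it would still pin all
# vSAN traffic to exactly one uplink at a time.
placements = {5: initial_uplink(5, 2)}   # one vSAN vmk port, 2 uplinks
io = {5: 6.0}                            # 6 Gbit/s of vSAN traffic
placements = rebalance(placements, io, uplink_capacity=10.0, num_uplinks=2)
```

The key takeaway matches the Cause section: the policy only relocates whole VMkernel ports, so one vSAN VMkernel port can never be spread across two uplinks simultaneously.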

Pros

No physical switch configuration is required.
Although vSAN has one VMkernel port, the same uplinks can be shared by other VMkernel ports or network services. vSAN can benefit by using different uplinks from other contending services, such as vMotion or management.

Cons
As vSAN typically only has one VMkernel port configured, its effectiveness is limited.
The ESXi VMkernel reevaluates the traffic load after each time interval, which can result in processing overhead.

Resolution

To utilize bandwidth across multiple vmnics, a more advanced NIC teaming configuration must be deployed.

Please review the following documents on advanced NIC teaming and select an option that works for your environment.

Advanced NIC Teaming

Use link aggregation, such as LACP, to gain additional bandwidth across multiple uplinks.

Static and Dynamic Link Aggregation
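Unlike load-based teaming, a LACP link aggregation group hashes each flow onto a member uplink, so multiple concurrent vSAN connections can use different uplinks at the same time. The following is a minimal illustrative sketch of that idea, assuming a hash policy based on source/destination IP and port; the hash function itself is hypothetical, not the actual algorithm used by any particular switch.

```python
# Illustrative sketch of LACP-style flow hashing (hypothetical hash, not a
# real switch implementation). Each flow's addressing fields select a LAG
# member uplink, so different flows can land on different uplinks at once.
import hashlib

def lag_uplink(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
               num_uplinks: int) -> int:
    """Pick a LAG member uplink from the flow's addressing fields."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_uplinks

# Two concurrent vSAN flows to different hosts can hash to different
# uplinks, whereas load-based teaming keeps one VMkernel port's traffic
# on a single uplink at a time.
flows = [("10.0.0.1", "10.0.0.2", 54321, 2233),
         ("10.0.0.1", "10.0.0.3", 54322, 2233)]
uplinks = [lag_uplink(*f, num_uplinks=2) for f in flows]
```

Note that hashing is per flow, so a single TCP connection still uses one uplink; the aggregate benefit comes from vSAN's many concurrent connections being spread across the LAG members.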