Host requirements for link aggregation (EtherChannel, port channel, or LACP) in ESXi
Article ID: 324555
Products
VMware vCenter Server, VMware vSphere ESXi
Issue/Introduction
Link aggregation concepts are used in ESXi network environments. To achieve network redundancy, load balancing, and failover, you must:
Enable link aggregation on the physical switch. Note: Link aggregation is also known as EtherChannel, Ethernet trunk, port channel, LACP, vPC, and Multi-Link Trunking.
Set up the ESXi virtual switch configuration to be compatible with the aggregation method in use (a configuration sketch follows below).
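For reference, one way to script the virtual switch side is with pyVmomi, the open-source vSphere SDK for Python. The following is a minimal sketch, not an official procedure; the host address, credentials, and the vSwitch name vSwitch0 are assumptions. The same change can be made in the vSphere Client or with esxcli.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab use only; validate certificates in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret", sslContext=ctx)
try:
    # Assumes a direct connection to a single ESXi host.
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    net_sys = host.configManager.networkSystem
    for vsw in net_sys.networkInfo.vswitch:
        if vsw.name != "vSwitch0":
            continue
        teaming = vsw.spec.policy.nicTeaming
        teaming.policy = "loadbalance_ip"            # Route based on IP hash
        teaming.failureCriteria.checkBeacon = False  # no beacon probing with IP hash
        teaming.nicOrder.standbyNic = []             # no standby uplinks with IP hash
        net_sys.UpdateVirtualSwitch(vswitchName=vsw.name, spec=vsw.spec)
finally:
    Disconnect(si)
```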
Environment
VMware vSphere ESXi 7.x, VMware vSphere ESXi 8.x
Resolution
ESXi requirements and limitations for link aggregation:
An ESXi host supports NIC teaming only on a single physical switch or on stacked switches.
Link aggregation is never supported on disparate trunked switches.
The physical switch must be set to perform 802.3ad link aggregation in static mode ON, and the virtual switch must have its load balancing method set to Route based on IP hash (see the compliance-audit sketch after this list).
Enabling Route based on IP hash without 802.3ad aggregation on the switch, or vice versa, disrupts networking, so make the change on the virtual switch first. While the two sides are mismatched, the ESXi management network is unreachable, but the physical switch's own management interface is unaffected, so you can then enable aggregation on the switch ports involved to restore connectivity.
Note: Due to network disruption, changes to link aggregation should be done during a maintenance window.
Do not use link aggregation for iSCSI software multipathing. iSCSI software multipathing requires exactly one uplink per VMkernel port, and link aggregation presents more than one.
Do not use beacon probing with IP HASH load balancing.
Do not configure standby or unused uplinks with IP HASH load balancing.
VMware supports only one EtherChannel bond per vSphere Standard Switch (vSS).
In vSphere Distributed Switch 5.5 and later, all LACP load balancing algorithms are supported.
The ESXi load balancing algorithm should match the one configured on the physical switch. For questions about which algorithm the physical switch uses, refer to the switch vendor's documentation.
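The uplink rules above (IP hash with no beacon probing and no standby or unused uplinks) can be audited read-only. Here is a sketch under the same pyVmomi assumptions as the earlier example; it expects a vim.HostSystem object obtained as shown there.

```python
# Flag host settings that violate the IP-hash rules in the list above.
def audit_ip_hash(host):
    net_sys = host.configManager.networkSystem
    problems = []
    for vsw in net_sys.networkInfo.vswitch:
        teaming = vsw.spec.policy.nicTeaming
        if not teaming or teaming.policy != "loadbalance_ip":
            continue
        if teaming.failureCriteria and teaming.failureCriteria.checkBeacon:
            problems.append(f"{vsw.name}: beacon probing is enabled with IP hash")
        if teaming.nicOrder and teaming.nicOrder.standbyNic:
            problems.append(f"{vsw.name}: standby uplinks are configured with IP hash")
    # Port groups can override the vSwitch teaming policy, so check those too.
    for pg in net_sys.networkInfo.portgroup:
        teaming = pg.spec.policy.nicTeaming
        if (teaming and teaming.policy == "loadbalance_ip"
                and teaming.nicOrder and teaming.nicOrder.standbyNic):
            problems.append(f"port group {pg.spec.name}: standby uplinks with IP hash")
    return problems
```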
Limitations of LACP in vSphere:
LACP is only supported on vSphere Distributed Switches.
LACP is not supported for software iSCSI multipathing.
LACP configuration settings are not present in Host Profiles.
Running LACP inside any guest OS (including nested ESXi hosts) is not supported.
LACP cannot be used in conjunction with the ESXi Dump Collector.
For the Dump Collector to work, the VMkernel port used for management must be on a vSphere Standard Switch.
Port Mirroring cannot be used in conjunction with LACP to mirror LACPDU packets used for negotiation and control.
The teaming health check does not work for LAG ports because the LACP protocol itself ensures the health of the individual LAG ports. However, the VLAN and MTU health checks can still check LAG ports.
You can create up to 64 LAGs on a distributed switch. A host can support up to 64 LAGs.
Note: The number of LAGs that you can actually use depends on the capabilities of the underlying physical environment and the topology of the virtual network. For example, if the physical switch supports up to four ports in an LACP port channel, you can connect up to four physical NICs per host to a LAG.
LACP is currently unsupported with SR-IOV.
Basic LACP (LACPv1) is only supported on vSphere 6.5 and earlier. Upgrading ESXi to 7.0 may result in the physical switch disabling the LAG ports on hosts still using Basic LACP.
Upgrading LACP from v1 to v2 is an automated two-step process that can cause a transient connectivity issue. Migrate the management VMkernel interface (for example, vmk0) to another vDS or to a vSS before the upgrade (see the version-check sketch after this list).
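Before an upgrade to 7.0, you can check which LACP mode each distributed switch is running. A sketch, assuming a pyVmomi connection si to vCenter as in the earlier examples; the lacpApiVersion property reports "singleLag" for Basic LACP (v1) and "multipleLag" for Enhanced LACP (v2).

```python
from pyVmomi import vim

# List every distributed switch in the inventory and its LACP API version.
def report_lacp_versions(si):
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    for dvs in view.view:
        # "singleLag" = Basic LACP (v1); "multipleLag" = Enhanced LACP (v2)
        print(f"{dvs.name}: {dvs.config.lacpApiVersion}")
```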
LACP Compatibility with vDS
vCenter Server version | vDS version     | Host version | Compatibility
vCenter Server 8.0     | vDS 8.0/7.0/6.6 | ESXi 8.0     | Only supports LACP v2
vCenter Server 7.0     | vDS 7.0/6.6/6.5 | ESXi 7.0     | Only supports LACP v2
Note: As with any networking change, there is a chance of network disruption, so a maintenance window is recommended. This is especially true on a vSphere Distributed Switch (vDS): the Distributed Switch is owned by vCenter, and hosts alone cannot change the vDS if the connection to vCenter is lost. Enabling LACP can therefore complicate vCenter or host-management recovery in production-down scenarios, because the LACP connection may need to be broken to move back to a Standard Switch (LACP is not supported on a Standard Switch). A sketch of that fallback step follows.
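When recovering management connectivity, the usual first step is moving the management VMkernel interface back to a standard switch. The following is a hypothetical sketch under the same pyVmomi assumptions as above; the device name vmk0 and a pre-existing "Management Network" port group on a vSS are assumptions, and the physical switch ports must also be reconfigured out of the port channel.

```python
from pyVmomi import vim

# Move a VMkernel interface to an existing standard-switch port group.
# Setting only `portgroup` in the spec leaves the IP configuration
# unchanged and drops the distributed-port binding.
def move_vmk_to_vss(host, device="vmk0", portgroup="Management Network"):
    net_sys = host.configManager.networkSystem
    nic_spec = vim.host.VirtualNic.Specification()
    nic_spec.portgroup = portgroup
    net_sys.UpdateVirtualNic(device=device, nic=nic_spec)
```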