Host requirements for link aggregation (etherchannel, port channel, or LACP) in ESXi


Article ID: 324555


Products

VMware vCenter Server, VMware vSphere ESXi

Issue/Introduction

This article explains the host and physical switch requirements for using link aggregation in an ESXi network environment.

Network redundancy, load balancing, and fail-over can be accomplished a number of different ways in ESXi.  

Link aggregation (etherchannel, port channel, or LACP) is NOT the default configuration for ESXi when it is freshly installed.

If you wish to use link aggregation (etherchannel, port channel, or LACP), there are two aspects to be addressed.

PHYSICAL SWITCH(ES) TO WHICH THE ESXI HOST'S UPLINKS (vmnics) ARE CONNECTED

  • Enable link aggregation on the physical switch.
    Note: Link aggregation is also known as Ether-Channel, Ethernet trunk, port channel, LACP, vPC, and Multi-Link Trunking.
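The exact switch-side syntax varies by vendor. As an illustrative sketch only (the interface names and channel-group numbers below are examples, not a recommendation for any particular environment), on a Cisco IOS switch the difference between a static EtherChannel and an LACP port channel is the channel-group mode:

```
! Static EtherChannel (no LACP) - pairs with "Route based on IP hash" on ESXi
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode on
!
! LACP port channel - pairs with a LAG on a vSphere Distributed Switch
interface range GigabitEthernet1/0/3 - 4
 channel-group 2 mode active
!
! Keep the switch hash consistent with the ESXi load balancing policy
port-channel load-balance src-dst-ip
```

Consult the switch vendor's documentation for the exact commands and supported hash algorithms.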

 

ESXi HOST Settings:

1) Scenario 1:  Etherchannel (also called portchannel)

  • Etherchannel (also called portchannel) can be used WITHOUT LACP, on Standard switches.
  • LACP stands for Link Aggregation Control Protocol. Its purpose is to regularly verify every uplink in a LAG (Link Aggregation Group), so that link states are understood consistently by both the ESXi host (the vmnics) and the physical switch(es) and switchport(s) to which the vmnics are connected.
  • An ESXi virtual switch can be configured to handle link aggregation WITHOUT LACP.
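For Scenario 1 (static etherchannel, no LACP), the matching host-side setting can be applied from the ESXi host shell. A minimal sketch, assuming a Standard Switch named vSwitch0 (the name is an example):

```shell
# Set the Standard Switch load balancing policy to IP hash,
# as required for a static etherchannel / port channel (no LACP)
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --load-balancing=iphash
```

Port groups that override the vSwitch teaming policy must be updated separately (esxcli network vswitch standard portgroup policy failover set), or they will keep their own load balancing method.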

2) Scenario 2:  LACP with Link Aggregation

  • This is only supported with vSphere Distributed Switches (vDS).

NOTES: 

  • There are significant differences between each of these scenarios. 
  • The terms etherchannel, portchannel, LACP, and LAG are often assumed to be synonyms for each other, which is not the case.

Environment

VMware vSphere ESXi 7.x
VMware vSphere ESXi 8.x

Resolution

ESXi requirements and limitations for link aggregation (all Scenarios):

  • An ESXi host supports NIC teaming only on a single physical switch or on stacked switches.
    • Link aggregation is never supported across disparate trunked switches.

 

  • The switch must be set to perform 802.3ad link aggregation in static mode ON, and the virtual switch must have its load balancing method set to "Route based on IP hash".
    • Note: When etherchannel or port channel is configured (WITHOUT LACP), "Route based on IP hash" is the required load balancing algorithm. 
    • Note: Because changes to link aggregation disrupt the network, make them during a maintenance window.
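To confirm which load balancing method a Standard Switch is currently using, before and after the change (vSwitch0 is an example name):

```shell
# Display the current failover/teaming policy, including Load Balancing
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
```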

 

  • Do not use link aggregation for iSCSI software multipathing. iSCSI software multipathing requires exactly one uplink per vmkernel port, and link aggregation presents more than one.

 

  • Do not use beacon probing with "Route based on IP hash" load balancing.

 

  • Do not configure standby or unused uplinks with "Route based on IP hash" load balancing.

 

  • ESXi supports only one Etherchannel / portchannel bond per Virtual Standard Switch (vSS). 

 

 

IMPORTANT NOTE:

  • The ESXi load balancing policy should match the physical switch load balancing algorithm.
  • For questions on which load balancing algorithm the physical switch uses, please refer to the physical switch vendor.

Limitations of LACP in vSphere:

  • LACP is only supported on vSphere Distributed Switches.

 

  • LACP configuration settings are not present in Host Profiles. 

 

  • Running LACP inside any guest OS (including nested ESXi hosts) is not supported.

 

  • LACP cannot be used in conjunction with the ESXi Dump Collector.
    • For this feature to work, the vmkernel port used for management purposes must be on a vSphere Standard Switch.

 

  • Port Mirroring cannot be used in conjunction with LACP to mirror LACPDU packets used for negotiation and control.

 

  • The teaming health check does not work for LAG ports, because the LACP protocol itself verifies the health of the individual LAG ports. However, the VLAN and MTU health checks can still check LAG ports.
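Although the teaming health check does not cover LAG ports, the host can report LACP negotiation state directly. A sketch, run from the ESXi host shell on a host attached to a vDS with a LAG configured:

```shell
# Show LAG configuration and per-uplink LACP negotiation status
esxcli network vswitch dvs vmware lacp config get
esxcli network vswitch dvs vmware lacp status get
```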

 

 

  • You can create up to 64 LAGs on a distributed switch. A host can support up to 64 LAGs.
    • Note: the number of LAGs that you can actually use depends on the capabilities of the underlying physical environment and the topology of the virtual network. For example, if the physical switch supports up to four ports in an LACP port channel, you can connect up to four physical NICs per host to a LAG.

 

  • LACP is currently unsupported with SR-IOV.

 

Notes:

  • As with any networking change, there is a chance for network disruption so a maintenance period is recommended for changes.
  • This is especially true on a vSphere Distributed Switch (vDS) because the Distributed Switch is owned by vCenter and the hosts alone cannot make changes to the vDS if connection to vCenter is lost.
  • Enabling LACP can complicate vCenter or host management recovery in production-down scenarios, because the LACP bond may need to be broken to move the management network back to a Standard Switch (LACP is not supported on a Standard Switch).

Additional Information


For translated versions of this article, see: