Configuring LACP on a vSphere Distributed Switch Port Group

Article ID: 312554


Products

VMware vCenter Server

Issue/Introduction

This article provides information on configuring Link Aggregation Control Protocol (LACP) support on vSphere Distributed Switches, as well as what to do if LACP cannot be enabled on host uplinks on a vSphere Distributed Switch.

Resolution

 

Configuring LACP on an Uplink Port Group

Note: All port groups that use an LACP-enabled LAG from the Uplink Port Group must have the load balancing policy set to Route Based on Originating Virtual Port, the network failure detection policy set to Link Status Only, and all uplinks set to Active.
 
General Procedure:
  1. Create and configure a Link Aggregation Group (LAG).
  2. Connect the physical adapters to the newly created LAG.
  3. Configure teaming and failover for LACP.
  4. Activate the LAG and migrate it to the desired host(s).

Detailed procedure:

  1. Open the vSphere Web Client.
  2. Click the Networking view.
  3. Expand the Datacenter and select the Distributed Switch.
  4. In the LACP section, click the +NEW symbol to create a new LAG.
  5. Give the new LAG a name.
  6. Select the desired number of ports per host. Later, you will connect a physical network adapter from each host to each port.
  7. From the dropdown, choose the LACP mode: Active or Passive. In both scenarios, the physical switch should also be set to Active, since at least one side must actively negotiate.
  8. Select a load balancing algorithm. The chosen algorithm should match the one configured on the physical switch; check with the physical switch vendor for the best match.
  9. Click OK.
  10. Reassign network adapters from the available uplink adapters to the new LAG:
    1. Set the new link aggregation group to Standby in the teaming and failover order of the distributed port groups where you want to use LACP.
    2. Use the Add and Manage Hosts wizard in template mode to migrate physical uplinks to the LAG on multiple hosts simultaneously: reassign the uplinks on the template host, then apply the configuration to the desired hosts.
    3. Set the LAG to Active for the desired port groups by moving it to the Active list, then move the individual standalone uplinks to the Unused list (leaving the Standby list empty).
Note: If the network adapters are currently used as uplinks for vmkernel port groups such as the management vmkernel, the network adapter may need to be "walked" over one adapter at a time to avoid losing connection to the management vmkernel. See the Workaround section below for more information.
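
Once the LAG is active, you can confirm from the host that the LAG configuration was pushed down from vCenter. A minimal check from the ESXi Shell (assuming ESXi 5.5 or later with Enhanced LACP support):

  # Show the LAG configuration the host received from vCenter
  esxcli network vswitch dvs vmware lacp config get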




Workaround: When LACP cannot be configured on host uplinks on the vSphere Distributed Switch

If the network adapters are currently used as uplinks for vmkernel port groups such as the management vmkernel, the network adapter may need to be "walked" over one adapter at a time to avoid losing connection to the management vmkernel.

Note: If other vmnics are available for use in the LAG, using them is the easiest and fastest option. If the current adapters must be moved to the LAG, be aware that this process takes time and requires simultaneous configuration of the corresponding physical switch ports. VMware also highly recommends placing the host in Maintenance Mode during this process.

Here are the basic steps for walking the currently used adapters over to the LAG adapter group, followed by a switch-side sketch of one iteration.

Note: This is a sample; the actual adapters and vmnic numbers may differ in each environment:

  1. Identify the adapters (vmnics) that need to be moved to the LAG adapter group. For this example, we will use vmnic0 and vmnic1.
  2. On the physical switch, shut down the port connected to vmnic0.
  3. On the physical switch, place the port connected to vmnic0 in the LACP/EtherChannel configuration.
  4. In the Add and Manage Hosts wizard, select the host and assign vmnic0 to one of the LAG uplinks by selecting vmnic0 and clicking "Assign Adapter". Finish the wizard.
  5. On the physical switch, bring the port connected to vmnic0 back up.
  6. In the port groups on the Distributed Switch, move the LAG uplink group to Active and move the regular uplinks to Unused. Ensure the host stays connected during these changes.
  7. On the physical switch, shut down the port connected to vmnic1.
  8. Add vmnic1 to the LAG uplink group on the vDS through the Add and Manage Hosts wizard. Also add the switch port connected to vmnic1 to the LACP/EtherChannel configuration on the physical switch.
  9. On the physical switch, bring the port connected to vmnic1 back up.
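
On a Cisco IOS-style switch, for example, the switch-side half of one iteration (steps 2, 3, and 5 above) might look like the following. This is a hedged sketch only; the interface name and port-channel number are assumptions, and the exact syntax varies by vendor and platform:

  interface GigabitEthernet1/0/1
  ! step 2: take the vmnic0 port down
   shutdown
  ! step 3: join LACP port channel 1
   channel-group 1 mode active
  ! step 5 (after reassigning vmnic0 in the wizard): bring the port back up
   no shutdown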

Introduction to LACP in VMware



VMware supports Link Aggregation Control Protocol (LACP) on the vSphere Distributed Switch (vDS) only.

Note: Link Aggregation Control Protocol (LACP) can only be configured through the vSphere Web Client.

  • LACP is a standards-based method to control the bundling of several physical network links together to form a logical channel for increased bandwidth and redundancy purposes. LACP enables a network device to negotiate an automatic bundling of links by sending LACP packets to the peer.
  • LACP works by sending frames (LACPDUs) down all links that have the protocol enabled. If it finds a device on the other end of the link that also has LACP enabled, that device independently sends frames along the same links, enabling the two units to detect multiple links between themselves and then combine them into a single logical link.
  • Enhanced LACP support allows you to connect ESXi hosts to physical switches that use dynamic link aggregation.
    • Multiple Link Aggregation Groups (LAGs) can now be created on a single Distributed Switch to aggregate the available bandwidth of physical NICs connecting to LACP port channels.
    • When you create a LAG on a vSphere Distributed Switch (vDS), up to 24 LAG ports can be associated with it.
    • These LAG ports are similar in concept to dvUplink ports, which are essentially slots that are associated with actual physical uplinks on each of the ESXi hosts.
      • For example, in a two port LAG, the LAG ports have names similar to Lag1-0 and Lag1-1. Each ESXi host will have vmnics configured for each of these LAG ports.
    • Each LAG can be configured for either Active or Passive LACP modes.
      • In an Active LACP LAG, all ports are in an active negotiating state. The LAG ports initiate negotiations with the LACP port channel on the physical switch by sending LACP packets.
      • In Passive mode, the ports respond to LACPDUs they receive but do not initiate LACP negotiation.
      • If the physical switch is configured for active negotiating mode, the LAG can be left in Passive mode.
      • If LACP must be enabled and negotiation fails even in Active mode, it is very likely that the physical switch is not configured for LACP.
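
To see which mode a LAG is operating in and whether the peer is responding, the negotiation state can be inspected from the ESXi Shell (assuming ESXi 5.5 or later):

  # Per-uplink LACP state flags, including Active/Passive activity and partner details
  esxcli network vswitch dvs vmware lacp status get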

vSphere LACP supports these load balancing types:

  • Destination IP address
  • Destination IP address and TCP/UDP port
  • Destination IP address and VLAN
  • Destination IP address, TCP/UDP port and VLAN
  • Destination MAC address
  • Destination TCP/UDP port
  • Source IP address
  • Source IP address and TCP/UDP port
  • Source IP address and VLAN
  • Source IP address, TCP/UDP port and VLAN
  • Source MAC address
  • Source TCP/UDP port
  • Source and destination IP address
  • Source and destination IP address and TCP/UDP port
  • Source and destination IP address and VLAN
  • Source and destination IP address, TCP/UDP port and VLAN
  • Source and destination MAC address
  • Source and destination TCP/UDP port
  • Source port ID
  • VLAN

Note: These policies are configured on the LAG. The LAG load balancing policy always overrides the policy of any individual distributed port group that uses the LAG.
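
The hash should generally be consistent on both ends of the channel. On a Cisco IOS-style switch, for example, the corresponding global setting might look like the following (a sketch; the exact keywords vary by platform):

  ! match a vSphere "Source and destination IP address" LAG hash
  port-channel load-balance src-dst-ip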

LACP limitations on a vSphere Distributed Switch

  • LACP does not support port mirroring or iSCSI software multipathing.
  • LACP settings do not exist in host profiles.
  • LACP between two nested ESXi hosts is not possible.
  • You can create up to 64 LAGs on a distributed switch. A host can support up to 64 LAGs.
    • Note: the number of LAGs that you can actually use depends on the capabilities of the underlying physical environment and the topology of the virtual network. For example, if the physical switch supports up to four ports in an LACP port channel, you can connect up to four physical NICs per host to a LAG.
  • LACP is currently unsupported with SR-IOV.

LACP Compatibility with vDS

Basic LACP (LACPv1) is only supported on vSphere 6.5 and earlier. Upgrading ESXi to 7.0 may result in the physical switch disabling the LAG ports of ESXi hosts that use Basic LACP.

For more information on Enhanced LACP in vSphere, see Converting to Enhanced LACP Support on a vSphere Distributed Switch - "Source vCenter Server has instance(s) of Distributed Virtual Switch at unsupported lacpApiVersion".
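
To check which LACP API version a distributed switch is currently using, one option is PowerCLI (a sketch; the switch name "DSwitch01" is hypothetical):

  # Returns "singleLag" for Basic LACP (LACPv1) or "multipleLag" for Enhanced LACP (LACPv2)
  (Get-VDSwitch -Name "DSwitch01").ExtensionData.Config.LacpApiVersion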

Additional Information

Impact/Risks:


There is minimal chance of network interruption, provided that the vCenter Server has a connection to the hosts that is independent of the vDS. This independence matters especially when enabling LACP on a vSphere Distributed Switch (vDS), because the Distributed Switch is owned by the vCenter Server and the hosts alone cannot make changes to the vDS if the connection to vCenter is lost.

VMware recommends performing these actions during a maintenance window.

Enabling LACP can greatly complicate vCenter or host management recovery in production-down scenarios, because the LACP connection may need to be broken to move back to a Standard Switch if necessary (LACP is not supported on a Standard Switch).

Also, in support cases where LACP is configured, network troubleshooting typically requires not only support from VMware by Broadcom but also collaboration on the same call with the customer personnel who manage and can modify the physical switch configuration. Because both of these groups are typically in high demand, this tends to lengthen support calls considerably.

Because LACP requires a distributed switch, ephemeral port groups are generally used for recovery purposes when there is a need to provision ports directly on a host, bypassing vCenter Server. Ephemeral port groups are recommended for the vCenter Server VM and the host management vmkernels. For more information, see Static (non-ephemeral) or ephemeral port binding on a vSphere Distributed Switch.
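
As an illustration, such a recovery port group with ephemeral binding could be created ahead of time with PowerCLI (a sketch; the server, switch, and port group names are hypothetical):

  # Connect to vCenter (prompts for credentials)
  Connect-VIServer -Server vcenter.example.com

  # Create an ephemeral-binding port group on the distributed switch for recovery use
  Get-VDSwitch -Name "DSwitch01" | New-VDPortgroup -Name "Recovery-Ephemeral" -PortBinding Ephemeral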

 

Note: For more information, see the vSphere Networking documentation, About vSphere Networking. That guide contains definitive information; if there is a discrepancy between the guide and this article, assume the guide is correct.

For more information on LACP support on a vSphere Distributed Switch, see: