Configuring LACP on an Uplink Port Group
Note: All distributed port groups that use the LACP-enabled LAG as their uplink must have the load balancing policy set to Route Based on Originating Virtual Port, the network failure detection policy set to Link status only, and all uplinks set to Active.
General Procedure:
- Create and configure a Link Aggregation Group (LAG).
- Connect physical network adapters to the newly created LAG.
- Configure teaming and failover specifically for LACP.
- Activate the LAG and migrate it to the desired host(s).
Detailed procedure:
- Open the vSphere Web Client.
- Click the Networking view.
- Expand the Datacenter and select the Distributed Switch.
- In the LACP section, click +New to create a new LAG. (A scripted equivalent of the LAG creation step is sketched after this procedure.)
- Give the new LAG a name.
- Select the desired number of ports per host. Later, you will connect a physical network adapter from each host to each LAG port.
- Choose the LACP mode from the dropdown: Active or Passive. In either case, the physical switch side should be configured as Active, because at least one end of the link must initiate negotiation.
- Select a load balancing algorithm. The chosen algorithm should match the hashing configured on the physical switch; check with the physical switch vendor for the best match.
- Click OK.
- Reassign network adapters from the available uplink adapters to the new LAG.
- Move the new link aggregation group to the Standby state for the distributed port groups where you want to use LACP.
- Use the Add and Manage Hosts wizard in template mode to migrate physical uplinks to the LAG on multiple hosts simultaneously: reassign the uplinks on the template host, then apply the configuration to the desired hosts.
- Set the LAG to the Active state for the desired port groups by moving it to the Active uplinks list, then moving the individual unassociated uplinks to the Unused list (leaving the Standby list empty).
Note: If the network adapters are currently used as uplinks for VMkernel port groups, such as the management VMkernel interface, the adapters may need to be "walked" over one at a time to avoid losing connection to the management VMkernel. See the Workaround section below for more information.
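For environments where the LAG creation step needs to be scripted rather than done in the Web Client, the same operation can be driven through the vSphere API. The following is a minimal pyVmomi sketch, not a definitive implementation: the vCenter hostname, credentials, and vDS name are hypothetical placeholders, and the API names used (LacpGroupConfig, LacpGroupSpec, UpdateDVSLacpGroupConfig_Task) should be verified against the vSphere API reference for your version.

```python
# Minimal pyVmomi sketch: create a 2-port Active-mode LAG on an existing vDS.
# Assumptions: pyVmomi is installed, and the API names below
# (LacpGroupConfig / LacpGroupSpec / UpdateDVSLacpGroupConfig_Task) match your
# vSphere version -- verify against the vSphere API reference.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_dvs(content, name):
    """Walk the inventory and return the distributed switch with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    try:
        return next(d for d in view.view if d.name == name)
    finally:
        view.Destroy()

def create_lag(dvs, lag_name="lag1", ports=2, mode="active",
               algorithm="srcDestIpTcpUdpPortVlan"):
    """Add a LAG to the vDS, mirroring the +New dialog in the Web Client."""
    config = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupConfig(
        name=lag_name, uplinkNum=ports, mode=mode,
        loadbalanceAlgorithm=algorithm)
    spec = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupSpec(
        lacpGroupConfig=config, operation="add")
    WaitForTask(dvs.UpdateDVSLacpGroupConfig_Task(lacpGroupSpec=[spec]))

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()          # lab use only
    si = SmartConnect(host="vcenter.example.com",   # hypothetical vCenter
                      user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    try:
        dvs = find_dvs(si.RetrieveContent(), "Prod-vDS")  # hypothetical vDS name
        create_lag(dvs)
    finally:
        Disconnect(si)
```

After the LAG is created this way, the teaming and failover changes (Standby, then Active, with standalone uplinks moved to Unused) still need to be applied per distributed port group as described in the steps above.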
Workaround: When LACP cannot be configured on host uplinks on the vSphere Distributed Switch
If the network adapters are currently used as uplinks for VMkernel port groups, such as the management VMkernel interface, the adapters may need to be "walked" over one at a time to avoid losing connection to the management VMkernel.
Note: If there are other vmnics that can be used in the LAG, using them is the easiest and fastest option. If the current adapters must be moved to the LAG, be aware that the process takes time and requires matching configuration on the physical switch ports at the same time. VMware highly recommends placing the host in Maintenance Mode during this process.
Here are the basic steps for walking currently used adapters over to the LAG uplink group; a read-only verification sketch follows these steps.
Note: This is a sample; the actual adapters and vmnic numbers may differ in each environment:
- Identify the adapters (vmnics) that need to be moved to the LAG adapter group. For this example, we will use vmnic0 and vmnic1.
- Ensure that all the adapters (vmnics) are tagged with the correct VLAN, if used.
- On the physical switch, shut down the port connected to vmnic0.
- On the physical switch, add the port connected to vmnic0 to the LACP/EtherChannel configuration.
- In the Add and Manage Hosts wizard, select the host and add vmnic0 in one of the LAG uplinks by selecting vmnic0 and clicking "Assign Adapter". Finish the wizard.
- On the physical switch, bring the port connected to vmnic0 back up.
- In the port groups on the Distributed Switch, move the LAG uplink group to Active and move the standalone uplinks to Unused. Verify that the host stays connected during these changes.
- On the physical switch, shut down the port connected to vmnic1.
- Add vmnic1 to the LAG uplink group on the vDS through the Add and Manage Hosts wizard. Also add the switch port connected to vmnic1 to the LACP/EtherChannel configuration on the physical switch.
- On the physical switch, bring the port connected to vmnic1 back up.
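Because the walk above depends on the management network staying reachable while each vmnic moves, it can help to confirm which vmnics are bound to which vDS uplinks after each step. The sketch below is a read-only pyVmomi snippet under these assumptions: a ServiceInstance (si) is already connected, and the property names used (config.host, backing.pnicSpec, pnicDevice, uplinkPortKey) match your vSphere version; verify them against the API reference.

```python
# Minimal read-only pyVmomi sketch: print each host's vmnic-to-uplink mapping on a vDS,
# useful for verifying the state after each "walk" step above.
# Assumption: 'dvs' is a vim.DistributedVirtualSwitch object already looked up
# from a connected ServiceInstance.
from pyVmomi import vim

def print_uplink_membership(dvs):
    """List the physical NICs each host has assigned to this distributed switch."""
    for member in dvs.config.host:            # DistributedVirtualSwitchHostMember
        host = member.config.host             # the ESXi HostSystem
        backing = member.config.backing       # physical NIC backing for this host
        print(host.name)
        for pnic in getattr(backing, "pnicSpec", None) or []:
            print("  %s -> uplink port key %s" % (pnic.pnicDevice, pnic.uplinkPortKey))
```

Running this before and after each move makes it easy to confirm that a vmnic has actually been reassigned to one of the LAG uplink ports before the next physical switch port is taken down.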
Introduction to LACP in VMware
VMware supports Link Aggregation Control Protocol (LACP) on the vSphere Distributed Switch (vDS) only.
Note: Link Aggregation Control Protocol (LACP) can only be configured through the vSphere Web Client.
- LACP is a standards-based method to control the bundling of several physical network links together to form a logical channel for increased bandwidth and redundancy purposes. LACP enables a network device to negotiate an automatic bundling of links by sending LACP packets to the peer.
- LACP works by sending frames (LACPDUs) down all links that have the protocol enabled. If the device on the other end of the link also has LACP enabled, it independently sends frames along the same links, enabling the two devices to detect the multiple links between them and combine them into a single logical link.
- Enhanced LACP support allows you to connect ESXi hosts to physical switches that use dynamic link aggregation.
- Multiple Link Aggregation Groups (LAGs) can now be created on a single Distributed Switch to aggregate the available bandwidth of physical NICs connecting to LACP port channels.
- When you create a LAG on a vSphere Distributed Switch (vDS), up to 24 LAG ports can be associated with it.
- These LAG ports are similar in concept to dvUplink ports, which are essentially slots that are associated with actual physical uplinks on each of the ESXi hosts.
- For example, in a two-port LAG, the LAG ports have names similar to Lag1-0 and Lag1-1. Each ESXi host will have a vmnic configured for each of these LAG ports.
- Each LAG can be configured for either Active or Passive LACP modes.
- In an Active LACP LAG, all ports are in an active negotiating state. The LAG ports initiate negotiations with the LACP port channel on the physical switch by sending LACP packets.
- In Passive mode, the ports respond to LACPDUs they receive but do not initiate the LACP negotiation.
- If the physical switch is configured for Active negotiating mode, the LAG can be left in Passive mode.
- If LACP needs to be enabled and Active mode is unavailable, it is likely that the physical switch is not configured for LACP. (A short illustration of the Active/Passive rule follows this list.)
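The Active/Passive behavior described above comes down to one rule: a LAG only negotiates when at least one end actively initiates. The snippet below is a plain-Python illustration of that rule only; it is not a VMware API call.

```python
# Plain-Python illustration of the LACP mode rule described above:
# a link aggregation negotiates only if at least one end is in Active mode.
def lag_forms(esxi_mode: str, switch_mode: str) -> bool:
    """Return True if a LAG would negotiate given the two ends' LACP modes."""
    modes = {esxi_mode.lower(), switch_mode.lower()}
    if not modes <= {"active", "passive"}:
        raise ValueError("modes must be 'active' or 'passive'")
    return "active" in modes

# active/active and active/passive negotiate; passive/passive never does.
assert lag_forms("active", "passive") is True
assert lag_forms("passive", "passive") is False
```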
vSphere LACP supports these load balancing types:
- Destination IP address
- Destination IP address and TCP/UDP port
- Destination IP address and VLAN
- Destination IP address, TCP/UDP port and VLAN
- Destination MAC address
- Destination TCP/UDP port
- Source IP address
- Source IP address and TCP/UDP port
- Source IP address and VLAN
- Source IP address, TCP/UDP port and VLAN
- Source MAC address
- Source TCP/UDP port
- Source and destination IP address
- Source and destination IP address and TCP/UDP port
- Source and destination IP address and VLAN
- Source and destination IP address, TCP/UDP port and VLAN
- Source and destination MAC address
- Source and destination TCP/UDP port
- Source port ID
- VLAN
Note: These policies are configured on the LAG. The LAG load balancing policy always overrides the load balancing policy of any individual distributed port group that uses the LAG. (A conceptual sketch of hash-based port selection follows this list.)
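To illustrate how one of these hashing policies picks a LAG port, the sketch below shows a generic hash over source/destination IP and TCP/UDP port. It is a conceptual illustration only; the actual hash functions used by ESXi and by physical switches are implementation specific, and the hash shown here is not the real one.

```python
# Conceptual illustration of hash-based LAG load balancing:
# each flow is hashed over the chosen fields and pinned to one LAG port.
# This is NOT the actual ESXi hash; real implementations are vendor specific.
import hashlib

def select_lag_port(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                    num_lag_ports: int) -> int:
    """Pick a LAG port index for a flow, in the style of the
    'source and destination IP address and TCP/UDP port' policy."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_lag_ports

# A given flow always lands on the same port; different flows spread across ports.
print(select_lag_port("10.0.0.5", "10.0.1.9", 49152, 443, 2))
print(select_lag_port("10.0.0.6", "10.0.1.9", 49153, 443, 2))
```

This also shows why a single flow never exceeds the bandwidth of one physical link: the hash pins each flow to exactly one LAG port.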
LACP limitations on a vSphere Distributed Switch
- LACP does not support port mirroring or software iSCSI multipathing.
- LACP settings do not exist in host profiles.
- LACP between two nested ESXi hosts is not possible.
- You can create up to 64 LAGs on a distributed switch. A host can support up to 64 LAGs.
- Note: the number of LAGs that you can actually use depends on the capabilities of the underlying physical environment and the topology of the virtual network. For example, if the physical switch supports up to four ports in an LACP port channel, you can connect up to four physical NICs per host to a LAG.
- LACP is currently unsupported with SR-IOV.
LACP Compatibility with vDS
Basic LACP (LACPv1) is only supported on vSphere 6.5 and earlier. Upgrading ESXi to 7.0 may result in the physical switch disabling the LAG ports on ESXi hosts that still use Basic LACP, so check each distributed switch's LACP API version before upgrading (see the sketch below).
For more information on Enhanced LACP in vSphere, see Converting to Enhanced LACP Support on a vSphere Distributed Switch and "Source vCenter Server has instance(s) of Distributed Virtual Switch at unsupported lacpApiVersion".
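The following is a minimal pyVmomi sketch for reporting whether each distributed switch is still on Basic (single-LAG) or Enhanced (multiple-LAG) LACP support. It assumes a connected ServiceInstance (si) and that VMware distributed switches expose config.lacpApiVersion with the values singleLag and multipleLag; confirm the property name and values against the vSphere API reference for your version.

```python
# Minimal pyVmomi sketch: report the LACP API version of each distributed switch.
# Assumptions: 'si' is a connected ServiceInstance and VMware distributed switches
# expose config.lacpApiVersion ("singleLag" = Basic LACP, "multipleLag" = Enhanced LACP).
from pyVmomi import vim

def report_lacp_versions(si):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    try:
        for dvs in view.view:
            version = getattr(dvs.config, "lacpApiVersion", None)
            print("%s: %s" % (dvs.name, version or "n/a"))
    finally:
        view.Destroy()
```

Switches that report singleLag should be converted to Enhanced LACP support (or have LACP reconfigured) before the hosts are upgraded to a release that no longer supports Basic LACP.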