HCX Network Extension performance can be impacted when handling high volumes of traffic, multiple extended networks, or operating in high-density VLAN environments. Common symptoms include degraded throughput and packet loss on extended networks.
As of HCX 4.10, several configuration options are available to optimize Network Extension performance and address these issues.
Network Extension performance issues typically stem from factors such as resource contention on heavily loaded appliances, inconsistent MTU settings across the underlay, and a limited number of CPU threads available for packet processing. The following configuration options can optimize HCX Network Extension performance; they can be implemented individually or in combination, depending on the specific environment requirements.
If a specific network segment is very large or generates a disproportionately high volume of traffic, consider deploying a dedicated Network Extension appliance to extend only that VLAN.
Moving problematic or large VLANs to their own appliances prevents resource contention with other extended networks. This isolation strategy is a good starting point for optimization and should be combined with the other performance tuning options (GRO, APR, TCP MSS Clamping, and Scale Out) for the best results.
Inconsistent Maximum Transmission Unit (MTU) settings across the network path can lead to fragmentation and severe performance degradation. Ensure that the underlay network MTU is consistent end to end and accounts for HCX tunnel overhead (IPsec encapsulation and encryption).
For detailed configuration and calculation guidelines, refer to Configuring MTU for VMware HCX Components and Infrastructure.
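As a rough illustration of the calculation, the following sketch subtracts an assumed per-packet tunnel overhead from the underlay MTU to estimate the largest packet a workload can send without fragmentation inside the tunnel. The overhead value used here is a placeholder; take the actual figures for your deployment from the article referenced above.

```python
# Minimal sketch: estimating the usable workload MTU behind an HCX tunnel.
# ASSUMPTION: the overhead value below is a placeholder, not an official
# figure; use the values from the HCX MTU configuration article instead.

UNDERLAY_MTU = 1500            # MTU configured on the underlay network path
ASSUMED_TUNNEL_OVERHEAD = 150  # placeholder for encapsulation + IPsec overhead

def effective_workload_mtu(underlay_mtu: int, tunnel_overhead: int) -> int:
    """Largest packet a workload can send without fragmenting inside the tunnel."""
    return underlay_mtu - tunnel_overhead

if __name__ == "__main__":
    # 1500 - 150 = 1350 with these placeholder values
    print(effective_workload_mtu(UNDERLAY_MTU, ASSUMED_TUNNEL_OVERHEAD))
```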
To verify that you have a consistent MTU across the path, use the pmtu command on the HCX appliance:
1. SSH into the HCX Manager.
2. Enter the Central CLI: ccli
3. List the appliances: list
4. Connect to the specific appliance: go <id>
5. Run the PMTU discovery command: pmtu
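As a supplementary check, the kernel's current path MTU estimate can also be read from a Linux workload on the extended network using standard socket options. The sketch below is a minimal example of that approach; it is not a replacement for the appliance's pmtu output, and the peer address and port are placeholders.

```python
# Minimal sketch (Linux only): read the kernel's current path-MTU estimate
# toward a peer on the far side of the extended network. This complements,
# but does not replace, the HCX appliance "pmtu" command.
import socket

# Linux socket option values from <linux/in.h>; some Python builds do not
# export these names, so fall back to the numeric constants.
IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)
IP_MTU = getattr(socket, "IP_MTU", 14)

PEER = "192.0.2.10"  # placeholder: a host reachable across the extended path

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Set the DF bit and let the kernel perform path MTU discovery.
s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
s.connect((PEER, 9))   # UDP "connect" only fixes the destination address
s.send(b"x" * 1200)    # probe; a send larger than the path MTU raises EMSGSIZE
# IP_MTU reports the kernel's current estimate for this destination
# (initially the route/interface MTU, refined as ICMP feedback arrives).
print(s.getsockopt(socket.IPPROTO_IP, IP_MTU))
s.close()
```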
Generic Receive Offload (GRO) improves inbound traffic performance by reassembling incoming packets into larger ones before delivery to workload applications.
To enable GRO, enable the corresponding option for the Network Extension appliances when creating or editing the Service Mesh.
This feature is particularly beneficial for applications with high inbound traffic patterns.
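For context, the sketch below is a purely conceptual illustration of what receive-side coalescing does; it is not HCX code. Consecutive in-order segments of the same flow are merged into a single larger buffer before delivery, which reduces per-packet processing overhead.

```python
# Conceptual illustration of receive-side coalescing (GRO-style), not HCX code:
# merge consecutive, in-order segments of the same flow into larger buffers.

def coalesce(segments):
    """segments: list of (flow_id, seq, payload) tuples in arrival order."""
    merged = []
    for flow_id, seq, payload in segments:
        if merged:
            last_flow, last_seq, last_payload = merged[-1]
            # Contiguous segment of the same flow: append to the previous buffer.
            if flow_id == last_flow and seq == last_seq + len(last_payload):
                merged[-1] = (last_flow, last_seq, last_payload + payload)
                continue
        merged.append((flow_id, seq, payload))
    return merged

# Three 3-byte segments of flow "a" are delivered as one 9-byte buffer.
print(coalesce([("a", 0, b"abc"), ("a", 3, b"def"), ("a", 6, b"ghi"), ("b", 0, b"xyz")]))
```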
Application Path Resiliency creates multiple transport tunnels (up to eight) between each Interconnect and Network Extension appliance, improving reliability across WAN connections.
To enable APR, enable the Application Path Resiliency option when creating or editing the Service Mesh.
Note: When enabling APR, ensure that firewall settings on both sides allow connectivity from UDP source ports in the 4500-4628 range to destination UDP port 4500.
TCP MSS Clamping dynamically manages the TCP segment size to optimize transport performance for Network Extension service traffic.
To enable TCP MSS Clamping, enable the corresponding option when creating or editing the Service Mesh.
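For context, the sketch below shows the standard IPv4 arithmetic behind MSS clamping: the advertised MSS is reduced so that the segment plus TCP and IP headers fits within the usable path MTU. HCX manages this automatically when the feature is enabled; the numbers here are examples only.

```python
# Illustration of the arithmetic behind TCP MSS clamping (IPv4, no TCP options).
# HCX applies this dynamically when the feature is enabled; these numbers are
# examples only.

IP_HEADER = 20   # bytes
TCP_HEADER = 20  # bytes

def clamp_mss(advertised_mss: int, path_mtu: int) -> int:
    """Reduce the advertised MSS so MSS + headers fits within the path MTU."""
    max_mss = path_mtu - IP_HEADER - TCP_HEADER
    return min(advertised_mss, max_mss)

print(clamp_mss(1460, 1350))  # -> 1310 for a 1350-byte tunnel path MTU
```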
The default network adapter context setting (ctxPerDev=1) on Network Extension appliances limits the number of CPU threads that can simultaneously process network traffic. In high-density environments, increasing this value improves performance.
For detailed instructions on modifying the ctxPerDev setting, refer to How to Troubleshoot and Fix Packet Loss Related to High CPU on HCX Network Extensions.
For high-traffic environments, deploying multiple Network Extension appliances per switch or Transport Zone distributes the load and improves performance.
To configure Scale Out, increase the number of Network Extension appliances deployed for the relevant switch or Transport Zone when creating or editing the Service Mesh.
For more information on creating and configuring a Service Mesh with these options, refer to Create a Service Mesh for vSphere-based Site Pairs.
If performance issues persist after implementing these configuration changes, contact Broadcom Support for further assistance.
When opening a support request with Broadcom for Network Extension performance issues, reference this article and provide the following information:
Results of perftest all, collected by following Steps to Run Perftest in HCX.