HCX Network Extension Performance Tuning

Article ID: 389281

Products

VMware HCX

Issue/Introduction

HCX Network Extension performance can be impacted when handling high volumes of traffic, multiple extended networks, or operating in high-density VLAN environments. Common symptoms include:

  • Packet loss on extended networks
  • Latency and occasional timeouts in network responses
  • Application connectivity issues between on-premises and cloud environments
  • Performance degradation during peak usage periods

As of HCX 4.10, several configuration options are available to optimize Network Extension performance and address these issues.

Environment

  • VMware HCX 4.10 or later
  • Network Extension appliances
  • High-density VLAN environments

Cause

Network Extension performance issues typically stem from:

  1. Limited processing resources for network traffic
  2. Suboptimal network configurations
  3. High volumes of cross-site traffic
  4. Inefficient packet handling across extended networks

Resolution

The following configuration options can optimize HCX Network Extension performance. These options can be implemented individually or in combination depending on the specific environment requirements.

1. Enable Generic Receive Offload (GRO)

Generic Receive Offload (GRO) improves inbound traffic performance by reassembling incoming packets into larger ones before delivery to workload applications.

To enable GRO:

  1. Navigate to Interconnect > Service Mesh
  2. Edit an existing Service Mesh or create a new one
  3. In the Traffic Engineering section, check Generic Receive Offload

This feature is particularly beneficial for applications with high inbound traffic patterns.
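As a conceptual illustration only (this is not HCX code), the Python sketch below shows the idea behind receive offload: consecutive packets belonging to the same flow are merged into a single larger buffer before delivery, so the receiving application is handed fewer, larger chunks of data.

  # Illustrative only: merge consecutive payloads of the same flow into
  # larger buffers, which is conceptually what GRO does before delivering
  # inbound traffic to the workload.
  def coalesce(packets, max_size=65535):
      """packets: list of (flow_id, payload) tuples in arrival order."""
      merged = []
      current_flow, buffer = None, b""
      for flow_id, payload in packets:
          if flow_id == current_flow and len(buffer) + len(payload) <= max_size:
              buffer += payload          # same flow: append to the aggregate
          else:
              if buffer:
                  merged.append((current_flow, buffer))
              current_flow, buffer = flow_id, payload
      if buffer:
          merged.append((current_flow, buffer))
      return merged

  # Ten 1,460-byte segments of one flow become a single ~14,600-byte delivery.
  pkts = [("10.0.0.5:443->10.0.1.9:52044", b"x" * 1460) for _ in range(10)]
  print(len(coalesce(pkts)))  # 1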

2. Enable Application Path Resiliency (APR)

Application Path Resiliency creates multiple transport tunnels (up to eight) between each Interconnect and Network Extension appliance, improving reliability across WAN connections.

To enable APR:

  1. Navigate to Interconnect > Service Mesh
  2. Edit an existing Service Mesh or create a new one
  3. In the Traffic Engineering section, check Application Path Resiliency

Note: When enabling APR, ensure firewall settings on both sides allow for connectivity using UDP source ports in the 4500-4628 range and target UDP port 4500.
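Before enabling APR, the firewall requirement above can be sanity-checked with a simple probe. The Python sketch below is only an example under stated assumptions: the IP addresses are placeholders, and because UDP is connectionless a successful send only confirms the probe left the local side; receipt must be confirmed on the remote firewall or a listener at the target.

  import socket

  # Placeholder addresses; substitute the local uplink IP and the remote
  # appliance/listener IP for your environment.
  LOCAL_IP = "192.0.2.10"
  REMOTE_IP = "198.51.100.20"
  TARGET_PORT = 4500                 # APR target UDP port
  SOURCE_PORTS = range(4500, 4629)   # APR UDP source port range 4500-4628

  failed = []
  for sport in SOURCE_PORTS:
      s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      try:
          s.bind((LOCAL_IP, sport))
          # A successful send only proves the probe left this host;
          # verify arrival on the far side (firewall logs or a listener).
          s.sendto(b"apr-port-probe", (REMOTE_IP, TARGET_PORT))
      except OSError as exc:
          failed.append((sport, str(exc)))
      finally:
          s.close()

  print(f"probes attempted: {len(SOURCE_PORTS)}, local failures: {len(failed)}")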

3. Enable TCP MSS Clamping

TCP MSS Clamping dynamically manages the TCP segment size to optimize transport performance for Network Extension service traffic.

To enable TCP MSS Clamping:

  1. Navigate to Interconnect > Service Mesh
  2. Edit an existing Service Mesh or create a new one
  3. In the Traffic Engineering section, check TCP MSS Clamping
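As a worked illustration of why clamping helps, the arithmetic below subtracts assumed tunnel, IP, and TCP header overheads from an assumed underlay MTU; the actual overhead in a given environment depends on the transport and encryption in use.

  # Illustrative arithmetic only; the overhead values below are assumptions,
  # not HCX-published figures.
  UNDERLAY_MTU = 1500        # bytes available on the uplink path (assumed)
  TUNNEL_OVERHEAD = 100      # assumed encapsulation + encryption overhead
  IP_HEADER = 20             # IPv4 header, no options
  TCP_HEADER = 20            # TCP header, no options

  clamped_mss = UNDERLAY_MTU - TUNNEL_OVERHEAD - IP_HEADER - TCP_HEADER
  print(f"Effective MSS across the extension: {clamped_mss} bytes")  # 1360

  # A guest advertising the default 1460-byte MSS would otherwise send
  # segments that no longer fit once tunnel overhead is added, forcing
  # fragmentation or drops; clamping advertises the smaller value instead.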

4. Optimize CPU Thread Allocation

The default network adapter context setting (ctxPerDev=1) on Network Extension appliances limits the number of CPU threads that can simultaneously process network traffic. In high-density environments, increasing this value improves performance.

For detailed instructions on modifying the ctxPerDev setting, refer to How to Troubleshoot and Fix Packet Loss Related to High CPU on HCX Network Extensions.
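As a read-only sketch (not the supported change procedure, which is covered in the referenced article), the pyVmomi example below lists any ctxPerDev entries present in a Network Extension appliance's advanced configuration. The connection details and appliance name pattern are placeholders, and the assumption that the setting appears in the VM's extraConfig is made for illustration only.

  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  # Placeholder vCenter connection details; adjust for your environment.
  ctx = ssl._create_unverified_context()
  si = SmartConnect(host="vcenter.example.com",
                    user="administrator@vsphere.local",
                    pwd="password", sslContext=ctx)
  content = si.RetrieveContent()

  view = content.viewManager.CreateContainerView(
      content.rootFolder, [vim.VirtualMachine], True)
  for vm in view.view:
      if "NE-I1" in vm.name:  # hypothetical Network Extension appliance name
          for opt in vm.config.extraConfig:
              # Matching on "ctxPerDev" in the key is an assumption; see the
              # referenced article for the exact setting and supported values.
              if "ctxPerDev" in opt.key:
                  print(vm.name, opt.key, opt.value)
  Disconnect(si)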

5. Configure Network Extension Appliance Scale Out

For high-traffic environments, deploying multiple Network Extension appliances per switch or Transport Zone distributes the load and improves performance.

To configure Scale Out:

  1. Navigate to Interconnect > Service Mesh
  2. Edit an existing Service Mesh or create a new one
  3. In the Advanced Configuration - Network Extension Appliance Scale Out section, increase the number of Network Extension appliances
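As a rough illustration of the effect (not HCX's actual placement logic), the Python sketch below distributes a fixed set of extended networks round-robin across a growing number of appliances, showing how the per-appliance share of networks shrinks as the appliance count increases.

  # Illustrative only: round-robin distribution of extended networks across
  # Network Extension appliances; HCX's real placement may differ.
  def distribute(vlans, appliance_count):
      groups = {f"NE-{i + 1}": [] for i in range(appliance_count)}
      for idx, vlan in enumerate(vlans):
          groups[f"NE-{idx % appliance_count + 1}"].append(vlan)
      return groups

  vlans = [f"VLAN-{v}" for v in range(100, 116)]   # 16 extended networks
  for count in (1, 2, 4):
      per_appliance = {k: len(v) for k, v in distribute(vlans, count).items()}
      print(count, "appliance(s):", per_appliance)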

For more information on creating and configuring a Service Mesh with these options, refer to Create a Service Mesh for vSphere-based Site Pairs.

If performance issues persist after implementing these configuration changes, contact Broadcom Support for further assistance.

When opening a support request with Broadcom for Network Extension performance issues, please reference this article and provide the following information:

  • Network Extension details (extended networks, traffic patterns)
  • Current configuration (Service Mesh settings, scale-out settings)
  • Source and Target HCX log bundles with HCX database dumps and NE appliance logs selected
  • Source and Target ESXi log bundles from the hosts running each of the active Network Extension appliances
  • esxtop screenshots from the ESXi host showing the active NE appliance
  • perftest all results, gathered using the article Steps to Run Perftest in HCX

Additional Information

Create Service Mesh Configuration Example