Configuring Edge TEP Groups for Improved Load-Sharing of Traffic Across Multiple Edge TEPs.

Article ID: 394101


Products

VMware NSX

Issue/Introduction

  • With NSX 4.2.1, TEP High Availability (HA) for Edge Transport Nodes was introduced. In addition to the HA feature, the load-sharing behavior was also modified.

  • Before NSX 4.2.1, each segment was bound to a single TEP interface. This limitation meant that North/South traffic could only utilize the maximum throughput of one physical adapter (on the ESXi host where the Edge VM is realized). With TEP HA and the introduction of TEP Groups, this behavior has changed significantly.

Environment

VMware NSX

Cause

Correctly configuring Edge TEP Groups improves both load-sharing and overall throughput.

Resolution

Follow the steps below to configure Edge TEP HA and Edge TEP Groups:

  1. Enable Multi-TEP High Availability as detailed in the document - Multi-TEP High Availability.
  2. Enable Equal-Cost Multi-Path as detailed in the document - Equal-Cost Multi-Path in NSX.
  3. Set the ecmp_mode on the required Transport Node Profile (TNP) with a PUT API call to /policy/api/v1/infra/host-transport-node-profiles/<tnp-id>. The value depends on the host switch type: if the host switch uses Enhanced Datapath (ENS), ecmp_mode can be set to L4; if it uses Standard mode, it can be set to L3. Below is a sample payload:
    "host_switches": [
         {
           "host_switch_name": "vDS01",
           "host_switch_id": "50 ## ## ## ## ## ## ##-## ## ## ## ## ## ## e6",
           "host_switch_type": "VDS",
           "host_switch_mode": "STANDARD",
           "ecmp_mode": "L3",

Additional Information

The ecmp_mode value L4 provides finer-grained flow distribution at the cost of additional flow entries in the Flow Cache. It should therefore be used selectively, on transport nodes where load balancing needs to be based on Layer 4 flows.