Configuring an External Load Balancer for the NSX Management Cluster

Article ID: 438038

Updated On:

Products

VMware NSX

Issue/Introduction

  • Clients interacting with the NSX API (automation tools, third-party monitoring) experience 429 Too Many Requests errors, high latency, or connectivity drops because all API processing is directed at a single node.
  • Official guidance is that the rate limit should not be changed; instead, a load balancer should be provisioned to spread requests across multiple nodes, because the limit is applied per node, not per cluster.

Environment

VMware NSX

Cause

When accessing the NSX Manager directly or via an NSX Cluster HA VIP, all traffic is routed to a single "active" node. High-volume API calls can quickly overload this single manager and trigger rate limits, even if other nodes in the cluster are idle.

Resolution

API traffic can be load balanced by configuring a load balancer in front of the Management Cluster. This can be achieved using the NSX native load balancer, VMware Avi Load Balancer, or a third-party solution. The following is an example configuration; the exact settings depend on your use case, load balancer deployment, and environment.

  1. Frontend (Virtual Server)
    • Virtual IP (VIP): Client-routable IP address.
    • Port / Protocol: 443 / Layer 4 (TCP). Recommended for simple SSL pass-through.
    • Idle Timeout: 300 seconds (Prevents dropping long-running tasks)
  2. Backend (Server Pool)
    • Members: Individual IPs of all NSX Manager nodes (Target Port 443).
    • Algorithm: Round Robin or Least Connections.
    • Session Persistence: None. REST APIs are stateless. Disabling persistence prevents high-volume automation from pinning all its requests to a single node, ensuring proper load distribution.
  3. Health Monitor (Active)
    Configure an active HTTPS health check. A simple TCP ping is insufficient as the proxy may be up while API services are down.
    • Port: HTTPS / 443
    • Interval / Timeout: 5 / 3 seconds
    • Rise / Fall Count: 3 / 3
    • HTTP Method: GET
    • Request URL: /api/v1/reverse-proxy/node/health
    • Expected Response: 200 OK
    • Required HTTP Headers:
      • Content-Type: application/json
      • Accept: application/json
      • Authorization: Basic <base64-encoded-credentials>
      • (Note: You must encode the string username:password. For example, admin:VMware1! encodes to YWRtaW46Vk13YXJlMSE= )
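The credential encoding and the health probe from step 3 can be exercised by hand before wiring them into the load balancer. The sketch below builds the Basic Authorization header value and shows the equivalent manual check against a single manager node; the node IP placeholder is illustrative.

```shell
# Encode username:password for the Basic Authorization header.
# printf '%s' avoids a trailing newline, which would corrupt the encoding.
CREDS=$(printf '%s' 'admin:VMware1!' | base64)
echo "Authorization: Basic ${CREDS}"

# Manual probe of the health endpoint on one manager node (replace the
# placeholder IP); expect HTTP 200 when the API services are healthy:
# curl -sk -H "Authorization: Basic ${CREDS}" \
#      -H "Accept: application/json" \
#      "https://<nsx-manager-node-ip>/api/v1/reverse-proxy/node/health"
```

The `echo` line prints the exact header value the load balancer's health monitor must send.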

Verification

Test API connectivity through the new VIP using a standard cluster status API call, such as the one below:

curl -k -u admin:<password> -X GET "https://<YOUR_NEW_LB_VIP>/api/v1/cluster/status"

Upon successful response, update your external automation tools and API clients to point to the new Load Balancer VIP.
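To confirm that requests are actually being spread across the managers rather than pinned to one node, a small helper like the sketch below can repeatedly query the VIP and report which node answered each request. The function name `nsx_lb_check` is illustrative, the sketch assumes `jq` is installed, and it relies on `GET /api/v1/node` returning the responding node's hostname; adjust for your environment.

```shell
# Sketch: hit the VIP repeatedly and tally which manager node answered.
# Assumes jq is available and GET /api/v1/node returns the responding
# node's hostname; function name and arguments are illustrative.
nsx_lb_check() {
  local vip="$1" user="$2" pass="$3" count="${4:-10}"
  for i in $(seq 1 "$count"); do
    curl -sk -u "${user}:${pass}" "https://${vip}/api/v1/node" | jq -r '.hostname'
  done | sort | uniq -c   # one line per node, prefixed with its hit count
}
```

With session persistence disabled and Round Robin in effect, the counts should be roughly even across all manager nodes, e.g. `nsx_lb_check <YOUR_NEW_LB_VIP> admin '<password>'`.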

Additional Information

An example configuration is documented for the Tanzu use case.