Steps to resolve
For NSX 3.1.2 and later
Recommended Action:
For a centralized load balancer:
- Check the load balancer status on the standby Edge node; a degraded status means the load balancer on the standby Edge node is not ready. On the standby Edge node, invoke the NSX CLI command `get load-balancer <lb-uuid> status`.
- If the LB-State of the load balancer service is `not_ready`, or the command returns no output, put the Edge node into maintenance mode and then exit maintenance mode.
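The maintenance-mode cycle above can also be driven through the NSX REST API rather than the UI. The sketch below only builds the request URLs for the transport-node maintenance-mode actions; the manager hostname and Edge node UUID are placeholders, and authentication is left to your environment, so treat this as a minimal illustration rather than a complete script.

```python
# Sketch only: builds the NSX API URLs used to cycle an Edge transport node
# through maintenance mode. "nsx-mgr.example.com" and "edge-node-uuid" are
# placeholders; substitute your NSX Manager address and the Edge node's UUID.

def maintenance_mode_url(manager: str, node_id: str, action: str) -> str:
    """Return the POST URL for an enter/exit maintenance mode action."""
    assert action in ("enter_maintenance_mode", "exit_maintenance_mode")
    return f"https://{manager}/api/v1/transport-nodes/{node_id}?action={action}"

# Example: cycle the standby Edge node (placeholder UUID).
enter = maintenance_mode_url("nsx-mgr.example.com", "edge-node-uuid",
                             "enter_maintenance_mode")
leave = maintenance_mode_url("nsx-mgr.example.com", "edge-node-uuid",
                             "exit_maintenance_mode")
# POST each URL with your usual NSX credentials, e.g.:
#   curl -k -u admin -X POST "<url>"
```

After exiting maintenance mode, re-run `get load-balancer <lb-uuid> status` on the standby Edge node to confirm the LB-State has recovered.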
For a distributed load balancer:
- Prerequisite: DFW must be enabled for DLB workloads. Disabling DFW, either globally or through the DFW Exclusion List, causes an outage on DLB workloads. Reference documentation: Distributed Load Balancer.
- Get the detailed status by invoking the NSX API:
  `GET /policy/api/v1/infra/lb-services/<LBService>/detailed-status?source=realtime`
- From the API output, find any ESXi host reporting a non-zero instance_number with status NOT_READY or CONFLICT, or any transport node reporting zero for all three statuses.
- On the ESXi host, invoke the NSX CLI command `get load-balancer <lb-uuid> status`.
  - If 'Conflict LSP' is reported, check whether that LSP is attached to another load balancer service and whether the conflict is acceptable.
  - If 'Not Ready LSP' is reported, check the status of that LSP by invoking the NSX CLI command `get logical-switch-port status`.
- If the API output shows a host with zero values for the NOT_READY, CONFLICT, and READY states, that transport node may be faulty and should be reviewed.
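The triage of the detailed-status output described above can be scripted once the API response has been fetched and parsed. A minimal sketch follows; the per-host field names (`not_ready`, `conflict`, `ready`) and the flat list shape are simplifying assumptions modeled on the checks in this article, not a guaranteed NSX response schema, so map them to the actual response in your version.

```python
# Illustrative triage of a (simplified) detailed-status response.
# Hosts with non-zero NOT_READY or CONFLICT counts need the per-host
# CLI checks above; hosts reporting zero for all three states may be
# faulty transport nodes.

def triage_hosts(hosts):
    """Split hosts into (needs_cli_check, maybe_faulty) lists."""
    needs_cli_check, maybe_faulty = [], []
    for h in hosts:
        if h["not_ready"] or h["conflict"]:
            needs_cli_check.append(h["host"])
        elif (h["not_ready"], h["conflict"], h["ready"]) == (0, 0, 0):
            maybe_faulty.append(h["host"])
    return needs_cli_check, maybe_faulty

# Hypothetical sample data for illustration only.
sample = [
    {"host": "esx-01", "not_ready": 2, "conflict": 0, "ready": 5},
    {"host": "esx-02", "not_ready": 0, "conflict": 0, "ready": 7},
    {"host": "esx-03", "not_ready": 0, "conflict": 0, "ready": 0},
]
print(triage_hosts(sample))  # (['esx-01'], ['esx-03'])
```

Hosts in the first list warrant `get load-balancer <lb-uuid> status` on the node; hosts in the second list should be reviewed as potentially faulty transport nodes.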
NOTE: If the alarm resolves automatically within 5 minutes, it can be ignored, because the degraded status may be transient.
Maintenance window required for remediation? Yes, for a centralized load balancer.