Cannot attach Load Balancer service to a Logical Router and the LB object is in a failed state in Policy view. The associated Edge nodes are reported as having no available capacity.

Article ID: 384387

Updated On:

Products

VMware NSX

Issue/Introduction

  • Load Balancer is seen in a failed state in the policy view in the NSX Manager UI.
  • There is an attempt to increase the size of a Load Balancer.
  • The following error is reported on the Load Balancer being resized:
Can not attach LARGE load balancer service to logical router <Edge UUID> path=[/infra/tier-1s/<SR UUID>] for the associated edge nodes have no available capacity.
  • The size of the Load Balancer is reported as its current size in the Manager view of the NSX Manager UI, and as the increased size in the Policy view. For example, if the LB size is being increased from medium to large, it appears as a medium LB in the Manager view and as a large LB in the Policy view.
  • In the NSX Manager /var/log/syslog a message like the following is logged:
2024-12-05T10:15:02.044Z <NSX Manager> NSX 70688 POLICY [nsx@6876 comp="nsx-manager" errorCode="PM0" level="ERROR" subcomp="manager"] Created alarm Alarm [policyPath=/infra/realized-state/enforcement-points/default/lb-services/<LB UUID>/alarms/<ALARM UUID>, message=[error_code=23735, module_name=LOAD-BALANCER, error_message='Can not attach LARGE load balancer service to logical router <Logical Router UUID> path=[/infra/tier-1s/<UUID>] for the associated edge nodes have no available capacity.'],errorId=PROVIDER_INVOCATION_FAILURE, path=null, apiError=error_code=23500, module_name=LOAD-BALANCER, error_message='Found errors in the request. Please refer to the related errors for details.'#012  related_errors=[#012  error_code=23735, module_name=LOAD-BALANCER, error_message='Can not attach LARGE load balancer service to logical router <Logical Router UUID> for the associated edge nodes have no available capacity.'#012  ], sourceSiteId=null].
  • It may be possible that the output from the following Edge capacity APIs indicates that there should be enough available capacity/credits to allow the deployment of additional Load Balancers or an increase in size:
/api/v1/loadbalancer/usage-per-node/{edge-node-id}

/policy/api/v1/infra/lb-node-usage?node_path=/infra/sites/<site-id>/enforcement-points/<enforcement-point-id>/edge-clusters/<edge-cluster-id>/edge-nodes/0

/policy/api/v1/infra/lb-node-usage-summary?include_usages=true
  • It may be possible that the configuration maximums guide indicates that there should be enough available capacity/credits to allow the deployment of additional Load Balancers.
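As a sketch, the three capacity APIs above can be queried with curl. The Manager FQDN, credentials, and the UUIDs below are placeholders for your environment; only the API paths themselves come from this article.

```shell
# Placeholders: substitute your NSX Manager FQDN and the real UUIDs.
NSX_MGR="nsx-mgr.example.com"
EDGE_NODE_ID="<edge-node-id>"
NODE_PATH="/infra/sites/<site-id>/enforcement-points/<enforcement-point-id>/edge-clusters/<edge-cluster-id>/edge-nodes/0"

# LB usage for a single Edge node (Manager API)
URL1="https://${NSX_MGR}/api/v1/loadbalancer/usage-per-node/${EDGE_NODE_ID}"

# LB usage for one node, addressed by its policy path (Policy API)
URL2="https://${NSX_MGR}/policy/api/v1/infra/lb-node-usage?node_path=${NODE_PATH}"

# Aggregated LB usage summary across all Edge nodes (Policy API)
URL3="https://${NSX_MGR}/policy/api/v1/infra/lb-node-usage-summary?include_usages=true"

# Example invocation (curl prompts for the admin password):
# curl -k -u admin "$URL3"
```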

Environment

VMware NSX-T Data Center

Cause

  • The Policy intent of increasing the size of the Load Balancer Service is failing to be realized due to insufficient available capacity on the Edge nodes.
  • Capacity on the Edge may be unavailable because it is reserved for Tier-1 Gateways. A reservation may be set on a Tier-1 Gateway by changing its 'Edges Pool Allocation Size' setting from the default value of 'Routing' to 'LB Small', 'LB Medium', or 'LB Large'. Refer to the NSX technical documentation for information on this setting.
  • In the NSX Manager /var/log/syslog, a message like the following is logged, indicating the failure of the policy realization workflow for the LB object:
2024-12-05T00:33:50.248Z ERROR providerTaskExecutor-19 LbsLrAllocationUtils 70688 LOAD-BALANCER [nsx@6876 comp="nsx-manager" errorCode="MP23735" level="ERROR" subcomp="manager"] Can not attach LARGE load balancer service to logical router <Logical Router UUID> for the associated edge nodes have no available capacity. 

2024-12-05T00:33:50.248Z ERROR providerTaskExecutor-19 BaseLbNsxTProxyHandler 70688 POLICY [nsx@6876 comp="nsx-manager" errorCode="PM500016" level="ERROR" subcomp="manager"] intentRealizationWorkflow failed with throwable null
com.vmware.nsx.management.service.loadbalancer.common.exceptions.LoadBalancerException: null
        at com.vmware.nsx.management.service.loadbalancer.lbs.common.LbsLrAllocationUtils.throwAllocationException(LbsLrAllocationUtils.java:73) ~[?:?]
        at com.vmware.nsx.management.service.loadbalancer.lbs.common.LbsLrAllocationUtils.internalBindLbsToLr(LbsLrAllocationUtils.java:94) ~[?:?]
        at com.vmware.nsx.management.service.loadbalancer.lbs.common.LbsLrAllocationUtils.bindLbsToLr(LbsLrAllocationUtils.java:84) ~[?:?]
        at com.vmware.nsx.management.service.loadbalancer.lbs.common.LbsLrAllocationUtils.updateLbAllocation(LbsLrAllocationUtils.java:46) ~[?:?]
        at com.vmware.nsx.management.service.loadbalancer.lbs.service.LbsServiceImpl.updateWithCacheWithDao(LbsServiceImpl.java:257) ~[?:?]
        at com.vmware.nsx.management.service.loadbalancer.lbs.service.LbsServiceImpl.updateWithCache(LbsServiceImpl.java:236) ~[?:?]
        at com.vmware.nsx.management.service.loadbalancer.lbs.service.LbsServiceImpl.update(LbsServiceImpl.java:222) ~[?:?]
        at com.vmware.nsx.management.policy.providers.loadbalancer.nsxt.handler.LbServiceNsxTHandler.executeUpdate(LbServiceNsxTHandler.java:95) ~[?:?]
        at com.vmware.nsx.management.policy.providers.loadbalancer.nsxt.handler.LbServiceNsxTHandler.executeUpdate(LbServiceNsxTHandler.java:1) ~[?:?]
        at com.vmware.nsx.management.policy.providers.loadbalancer.nsxt.handler.BaseLbNsxTProxyHandler.handleCreateOrUpdate(BaseLbNsxTProxyHandler.java:132) ~[?:?]
        at com.vmware.nsx.management.policy.providers.loadbalancer.nsxt.handler.LbServiceNsxTHandler.handleCreateOrUpdate_aroundBody0(LbServiceNsxTHandler
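To check whether a Tier-1 Gateway is reserving Edge capacity, the gateway object can be read via the Policy API. As an assumption for this sketch, the UI setting 'Edges Pool Allocation Size' surfaces in the Tier-1 object as the pool_allocation field (values such as ROUTING, LB_SMALL, LB_MEDIUM, LB_LARGE); the Manager FQDN and Tier-1 ID are placeholders.

```shell
# Inspect the pool allocation setting on a Tier-1 Gateway (Policy API).
# NSX_MGR and TIER1_ID are placeholders for your environment.
NSX_MGR="nsx-mgr.example.com"
TIER1_ID="<tier-1-id>"

# Any value other than ROUTING reserves LB capacity on the Edge nodes:
# curl -k -u admin "https://${NSX_MGR}/policy/api/v1/infra/tier-1s/${TIER1_ID}"

# Offline illustration against a sample response body:
SAMPLE='{"id":"t1-gw","pool_allocation":"LB_SMALL"}'
echo "$SAMPLE" | grep -o '"pool_allocation":"[A-Z_]*"'
# prints: "pool_allocation":"LB_SMALL"
```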

 

Resolution

  • Consult the NSX configuration maximums and the GET capacity usage per node API, as per KB322486, before increasing the size of a Load Balancer. This API provides the capacity values from the perspective of the Load Balancer.
  • In addition, consult the following Logical Router API to determine whether capacity is being reserved by a Tier-1 Gateway. Note that there may be a difference between the remaining credit values reported by the API below and those from the Load Balancer API recommended in KB322486. Also note that the Load Balancer API reports credit numbers in tens, while the Logical Router API reports them in hundreds.
GET https://<NSX-Manager-IP-Address>/api/v1/edge-clusters/<EDGE-CLUSTER-UUID>/allocation-status

{
  "id": "<EDGE-CLUSTER-UUID>",
  "display_name": "<DISPLAY-NAME>",
  "member_count": 2,
  "members": [
    {
      "member_index": 0,
      "node_id": "<EDGE-NODE-ID>",
      "node_display_name": "<EDGE-DISPLAY-NAME>",
      "allocation_pools": [
        {
          "active_service_count": 22,
          "standby_service_count": 0,
          "sub_pools": [
            {
              "sub_pool_type": "LoadBalancerAllocationPool",
              "usage_percentage": 87.5,
              "remaining_credit_number": 100
            }
          ]
        }
      ],
      "allocated_services": [
        {
          "service_reference": {
            "target_id": "<TIER1-GW-UUID>",
            "target_display_name": "<TIER1-GW-DISPLAY-NAME>",
            "target_type": "LogicalRouter"
          },
          "high_availability_status": "ACTIVE",
          "allocation_details": [
            {
              "key": "sub_pool_type",
              "value": "LoadBalancerAllocationPool"
            },
            {
              "key": "sub_pool_size",
              "value": "SMALL"
            }
          ]
        },
  • In the example output above from the Logical Router API, the remaining credits on the Edge cluster are 100, which equates to a value of 10 in the Load Balancer API. It also shows that a reservation/allocation pool of size ("key": "sub_pool_size") "SMALL" is set on the Tier-1 Gateway.
  • Each Load Balancer size or pool allocation size consumes the credit values listed below, so with 100 remaining credits either 10 small LBs or 1 medium LB could be deployed on the Edge cluster:
Sizes: SMALL = 10 credits, MEDIUM = 100 credits, LARGE = 400 credits, XLARGE = 800 credits
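The credit math above can be checked with simple shell arithmetic; the 100-credit figure comes from the example allocation-status output earlier in this article.

```shell
# Credit cost per LB size (Logical Router API scale).
SMALL=10; MEDIUM=100; LARGE=400; XLARGE=800

REMAINING=100   # remaining_credit_number from the example output

echo "Small LBs that fit:  $(( REMAINING / SMALL ))"    # 10
echo "Medium LBs that fit: $(( REMAINING / MEDIUM ))"   # 1
echo "Large LBs that fit:  $(( REMAINING / LARGE ))"    # 0 - a LARGE LB cannot be attached
```

This also matches the error in this article: attaching a LARGE service needs 400 credits, but only 100 remain.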
  • If there are reservations set on Tier-1 Gateways, it is recommended to use the Logical Router API to discover the true remaining capacity, since it accounts for those reservations.
  • To resolve the issue, edit the configuration of the Load Balancer in the Policy view of the NSX Manager UI back to its original size, so that it is in sync with the Manager view.
  • If the requirement is for the resize of the LB to complete successfully, the following actions can be considered to resolve the issue:
    1. If Edge reservations/pool allocations are configured on Tier-1 Gateways but not required, set them back to the default value of 'Routing' to free up credits.
    2. Remove existing LBs that are no longer needed to free up capacity.
    3. Move the LB being resized to a new Edge cluster.
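For option 1, a sketch of reverting a Tier-1 Gateway's reservation via the Policy API is shown below. It assumes the reservation is exposed as the Tier-1 pool_allocation field and that a partial PATCH of that field is acceptable; the Manager FQDN and Tier-1 ID are placeholders, and the current object should be inspected before changing it.

```shell
# Release reserved LB credits by reverting a Tier-1 Gateway's pool
# allocation to the default ('Routing'). Placeholders throughout.
NSX_MGR="nsx-mgr.example.com"
TIER1_ID="<tier-1-id>"

# PATCH sends only the changed field; other Tier-1 settings are untouched.
BODY='{"pool_allocation":"ROUTING"}'
# curl -k -u admin -X PATCH "https://${NSX_MGR}/policy/api/v1/infra/tier-1s/${TIER1_ID}" \
#      -H 'Content-Type: application/json' -d "$BODY"
```

Alternatively, make the same change in the UI by setting the Tier-1 Gateway's 'Edges Pool Allocation Size' back to 'Routing'.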