The Virtual Service could not be placed because of an error indicating insufficient available vNIC slots: PLACEMENT_INELIGIBLE_INSUFFICIENT_VNIC_SLOTS

Article ID: 394100


Products

VMware Avi Load Balancer

Issue/Introduction

  • The placement of the Virtual Service fails with an error indicating that the Service Engine does not have access to the VIP network.

 

  •  In the /var/lib/avi/log/resmgr.INFO log, we can see that after the Virtual Services are removed from the Service Engines, the vNIC cleanup process is triggered for those SEs, as shown in the excerpt below.
I20250302 23:26:07.413411 11707 se_resource.cc:1571] T[53f35f40] F[FindVnicsReadyForDelete] [overlay_cloud_******-3-se-****:se-****] found unused vnic mac[**:**:**:**:**:**] ready for deletion

I20250302 23:26:07.413419 11707 se_resource.cc:1571] T[53f35f40] F[FindVnicsReadyForDelete] [overlay_cloud_***********-se-*****-****] found unused vnic mac[**:**:**:**:**:**] ready for deletion

I20250302 23:26:07.419900 11707 event_api.cc:284] F[GenerateEvent] L[284] Dumping the Log Message...... eventlogs {

  report_timestamp: 7477357531976065289                                         

  obj_type: SERVICEENGINE                                                       

  event_id: DEL_NW_SE                                                           

  module: RESMGR  
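
As a quick way to enumerate these events, the following is a minimal, hypothetical Python sketch (not part of the product) that scans a copy of resmgr.INFO for the "ready for deletion" messages shown above and prints the affected SE and vNIC MAC. The log path and message format are assumptions based on the excerpts in this article.

import re

# Hypothetical helper, not an Avi tool: path and message format are assumptions
# taken from the log excerpts in this article.
LOG_FILE = "/var/lib/avi/log/resmgr.INFO"

PATTERN = re.compile(
    r"F\[FindVnicsReadyForDelete\]\s+\[(?P<se>[^\]]+)\].*?"
    r"vnic mac\[(?P<mac>[0-9A-Fa-f:*]+)\] ready for deletion"
)

def vnics_marked_for_deletion(path=LOG_FILE):
    """Yield (service_engine, mac) for every vNIC the Resource Manager marked for deletion."""
    with open(path, errors="replace") as log:
        for line in log:
            match = PATTERN.search(line)
            if match:
                yield match.group("se"), match.group("mac")

if __name__ == "__main__":
    for se, mac in vnics_marked_for_deletion():
        print(f"{se}: vNIC {mac} marked for deletion")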

 

  •  However, those vNICs are not removed from the SeResource object even after a success notification is received from the Cloud Connector. As seen in the resmgr.INFO log excerpt below, vNICs marked for deletion remain on the SE. Over time, these stale vNICs accumulate, eventually leaving only a single active vNIC available on the SE.
    I20250303 06:30:40.695624 11707 rm_svc_impl.cc:575] T[] F[UpdSeVniReach] [overlay_cloud_*******-se-*****:se-****] [rpc in] [0050569cc5c4-res_mgr:0<--0050569cc5c4-se_mgr:13]

    I20250303 06:30:40.695749 11707 rm.cc:6226] T[] F[UpdSeVniReach] [overlay_cloud_T_3_a_s-1_arm_SE_GRP_T-3-se-****:se-****]

      MAC[**:**:**:**:**:**] NW[seg93:network-*******] Connected[0] Enabled[1] Internal[1] VRF[global:vrfcontext-afb2de] DP Del[1] Name[eth4] Del[1]

      MAC[**:**:**:**:**:**] NW[seg93:network-******] Connected[0] Enabled[1] Internal[1] VRF[global:vrfcontext-afb2de] DP Del[1] Name[eth3] Del[1]
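
To gauge how many stale entries have built up, the hypothetical sketch below (same assumptions about the log path and line format as the previous sketch) counts, per Service Engine, the vNIC entries in the most recent UpdSeVniReach dump that are still flagged Del[1].

import re
from collections import defaultdict

# Hypothetical helper, not an Avi tool: log path and line format are assumptions
# based on the UpdSeVniReach excerpts in this article.
LOG_FILE = "/var/lib/avi/log/resmgr.INFO"

SE_PATTERN = re.compile(r"rm\.cc.*F\[UpdSeVniReach\]\s+\[(?P<se>[^\]]+)\]")
VNIC_PATTERN = re.compile(r"MAC\[(?P<mac>[0-9A-Fa-f:*]+)\].*Name\[(?P<name>\w+)\]\s+Del\[1\]")

def stale_vnics_per_se(path=LOG_FILE):
    """Return {service_engine: {(vnic_name, mac), ...}} for vNICs still flagged Del[1]."""
    stale = defaultdict(set)
    current_se = None
    with open(path, errors="replace") as log:
        for line in log:
            se_match = SE_PATTERN.search(line)
            if se_match:
                current_se = se_match.group("se")
                stale[current_se] = set()   # keep only the latest dump per SE
                continue
            vnic_match = VNIC_PATTERN.search(line)
            if vnic_match and current_se:
                stale[current_se].add((vnic_match.group("name"), vnic_match.group("mac")))
    return stale

if __name__ == "__main__":
    for se, vnics in stale_vnics_per_se().items():
        print(f"{se}: {len(vnics)} stale vNIC(s): {sorted(vnics)}")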
    
      

     

Environment

NSX-T Cloud

Cause

This issue is due to a bug introduced with the nsxt_no_hotplug feature. During a reboot or warm restart of the Controller, the no_hotplug flag is not properly propagated to the Resource Manager. As a result, the Resource Manager incorrectly assumes that no_hotplug is false, while the rest of the system continues to operate with the correct value of true. Because of this mismatch, the Resource Manager does not remove the deleted vNICs from the SeResource object, and the stale entries eventually exhaust the vNIC slots available for new placements.
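
The following is a generic Python illustration (invented names, not Avi source code) of the kind of mismatch described above: one side keeps the persisted flag, while a component that misses the configuration push after a warm restart falls back to a stale default.

# Generic illustration only -- class and field names are invented for this example.

PERSISTED_CONFIG = {"nsxt_no_hotplug": True}   # value the rest of the system operates with

class ResourceManager:
    def __init__(self):
        # The flag is expected to be pushed to this component after start-up.
        # If that push is missed during a Controller reboot/warm restart,
        # this stale default is what gets used.
        self.no_hotplug = False

    def on_config_push(self, config):
        self.no_hotplug = config["nsxt_no_hotplug"]

# Warm restart where the push never arrives:
rm = ResourceManager()
assert rm.no_hotplug != PERSISTED_CONFIG["nsxt_no_hotplug"]
# From this point on, the Resource Manager and the rest of the system make
# contradictory assumptions about vNIC hot-unplug, which is what leaves the
# stale vNICs behind in the SeResource object.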

Resolution

Temporary Workaround:

Create a new Service Engine and place the Virtual Service on it.
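
One way this re-placement is commonly achieved in an NSX-T write-access cloud is to move the Virtual Service to a freshly created Service Engine Group, so that the Controller spins up a new SE for it. The sketch below uses the avisdk Python package for illustration; the controller address, credentials, object names, and overall approach are assumptions rather than an official procedure, so adapt it to your environment or perform the equivalent steps in the UI.

from avi.sdk.avi_api import ApiSession

# Illustrative only: controller address, credentials, and object names are placeholders.
api = ApiSession.get_session("controller.example.com", "admin", "password", tenant="admin")

vs = api.get_object_by_name("virtualservice", "my-vs")              # affected Virtual Service
old_group = api.get_object_by_name("serviceenginegroup", "Default-Group")

# Create a new SE Group in the same cloud; the Controller will spin up a new SE
# in this group when the Virtual Service is placed on it.
new_group = {"name": "seg-vnic-slot-workaround", "cloud_ref": old_group["cloud_ref"]}
resp = api.post("serviceenginegroup", data=new_group)
resp.raise_for_status()

# Point the Virtual Service at the new SE Group and save it.
vs["se_group_ref"] = resp.json()["url"]
api.put("virtualservice/%s" % vs["uuid"], data=vs).raise_for_status()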

 

Permanent Fix:

Upgrade to one of the following versions, where the fix has been applied:

30.2.3-2p1

30.2.2-2p5

31.2.1

31.1.2

31.1.1-2p2

30.2.4