An attempt to migrate (vMotion) a VM from one ESXi host to another fails with the following NSXA down error message:
"Currently connected network interface uses network 'DVSwitch[…] NSX port group(nsxa down)', which is not accessible"
Additionally, the ESXi hosts with NSXA down may appear with "Install Failed" status in the NSX UI under System -> Fabric -> Hosts.
VMware NSX
This issue is the result of a broken connection between the Appliance Proxy Hub (APH) and the proton service on one or more NSX Manager appliances. The following log lines in /var/log/vmware/appl-proxy-rpc.log indicate the presence of this issue:
2025-07-02T16:22:36.803Z nsxmgr## NSX 82410 - [nsx@#### comp="nsx-manager" subcomp="appl-proxy" s2comp="nsx-net" tid="####" level="WARNING"] StreamConnection[106###5135 Connecting to unix:///var/run/vmware/appl-proxy/aph.sock(pid:###### uid:113 gid:117) sid:##########] Couldn't connect to 'unix:///var/run/vmware/appl-proxy/aph.sock(pid:##### uid:113 gid:117)' (error: 2-No such file or directory)
2025-07-02T16:22:36.803Z nsxmgr## NSX 82410 - [nsx@#### comp="nsx-manager" subcomp="appl-proxy" s2comp="nsx-net" tid="#####" level="WARNING"] StreamConnection[106###5135 Error to unix:///var/run/vmware/appl-proxy/aph.sock(pid:386085 uid:113 gid:117) sid:-1] Error 2-No such file or directory
2025-07-02T16:22:36.803Z nsxmgr01 NSX 82410 - [nsx@6876 comp="nsx-manager" subcomp="appl-proxy" s2comp="nsx-rpc" tid="82448" level="WARNING"] RpcConnection[106###5135 Connecting to unix:///var/run/vmware/appl-proxy/aph.sock(pid:##### uid:113 gid:117) 0] Couldn't connect to unix:///var/run/vmware/appl-proxy/aph.sock(pid:###### uid:113 gid:117) (error: 2-No such file or directory)
2025-07-02T16:22:36.803Z nsxmgr## NSX 82410 - [nsx@#### comp="nsx-manager" subcomp="appl-proxy" s2comp="nsx-rpc" tid="#####" level="WARNING"] RpcTransport[1] Unable to connect to unix:///var/run/vmware/appl-proxy/aph.sock(pid:##### uid:113 gid:117): 2-No such file or directory
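To check quickly for these warnings, the log can be searched from the root shell of each NSX Manager appliance. The command below is only an illustrative sketch and assumes root shell access and the log path shown above:
grep "No such file or directory" /var/log/vmware/appl-proxy-rpc.log | tail -20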
The following lines in the output of the net-dvs -l command on the affected ESXi host further confirm the issue:
com.vmware.common.opaqueDvs.status.component_list = nsxa,vswitch,lcp.ccpSession,lcp.liveness,lcp.kcpSyncStatus , propType = CONFIG
com.vmware.common.opaqueDvs.status.component.nsxa = down , propType = CONFIG
com.vmware.common.opaqueDvs.status.component_list = nsxa,vdl2,vswitch,lcp.ccpSession,lcp.liveness,lcp.vdl2SyncStatus,lcp.kcpSyncStatus , propType = CONFIG
com.vmware.common.opaqueDvs.status.component.nsxa = down , propType = CONFIG
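As an example, the relevant status lines can be filtered out of the full net-dvs output on the ESXi host with a command similar to the following (the grep pattern is only an illustration):
net-dvs -l | grep -E "component_list|component.nsxa"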
Method 1: Using the Direct Console User Interface (DCUI)
This method is the most reliable and requires direct access to the host's console or a remote management tool like iDRAC, iLO, or KVM.
Connect to the physical console of your ESXi host, press F2, and log in with root credentials. Navigate to Troubleshooting Options -> Restart Management Agents, press Enter, and confirm with F11 to restart the agents.
Method 2: Using the ESXi Shell (SSH)
Enable SSH on the host first (from the DCUI or the VMware Host Client), then use an SSH client such as PuTTY to log in to the ESXi host with root credentials.
Run the following commands individually to restart the two main management agents:
/etc/init.d/vpxa restart
/etc/init.d/hostd restart
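Optionally, you can confirm that each agent is running before and after the restart; as a sketch, the same init scripts typically also accept a status argument:
/etc/init.d/hostd status
/etc/init.d/vpxa status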
If you still experience issues, you can restart all management agents by running:
services.sh restart
Caution: Do not use services.sh restart if you have VMware NSX, vSAN, or LACP configured on the host, as it can cause network disruptions.
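Before using services.sh restart, one way to check whether NSX components are present on the host is to list the installed VIBs; the grep filter below is only an example:
esxcli software vib list | grep -i nsx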
Method 3: Using the VMware Host Client (Web UI)
Log in to the VMware Host Client directly on the host (https://<host-ip>/ui) with root credentials, go to Manage -> Services, select the agent to restart (for example vpxa), and click Restart.
Method 4: Reboot Host
If restarting the management agents does not resolve the issue, migrate or power off the virtual machines, place the host in maintenance mode, and reboot it.
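As a sketch, the maintenance mode and reboot steps can also be performed from the ESXi Shell once the VMs have been evacuated; the reason text below is only an example:
esxcli system maintenanceMode set --enable true
esxcli system shutdown reboot --reason "Reboot to recover NSXA"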