During an NSX installation or upgrade using vSphere Lifecycle Manager (vLCM), one or more hosts experience a Purple Screen of Death (PSOD). After remediating the hosts by rebooting them, as described in KB "PSOD on the ESXi host during host preparation for NSX-T with 'tcp_syncache.hashbase == NULL'", the following errors are displayed in the NSX Manager UI under System → Fabric → Hosts:
Install Failed example:
Failed to install software on host. Solution Apply failed on one or more hosts in cluster <cluster-name> unresponsive after initiating operation. Failing after "#" attempts to retrieve task result. Solution apply failed on host "host-name".
Install Skipped example:
Failed to install software on host. Solution Apply failed on one or more hosts in cluster <cluster-name> was not processed, the reason: 'Solution Apply failed on one or more hosts in cluster <cluster-name>'.
You confirm the NSX VIBs are present on the hosts by running esxcli software vib list | grep nsx:
nsx-adf 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-cfgagent 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-context-mux 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-cpp-libs 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-esx-datapath 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-exporter 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-host 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-ids 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-monitoring 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-mpa 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-nestdb 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-netopa 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-opsagent 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-platform-client 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-proto2-libs 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-proxy 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-python-logging 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-python-protobuf 2.6.1-19195979 VMware VMwareCertified 2025-09-27 host
nsx-python-utils 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-sfhc 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-shared-libs 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-snproxy 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsx-vdpi 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
nsxcli 4.2.1.2.0-8.0.24476730 VMware VMwareCertified 2025-09-27 host
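If you need more detail on an individual VIB after the PSOD reboot (for example, to confirm its version, acceptance level, and installation date), each VIB can be inspected individually; nsx-opsagent below is simply one name taken from the list above:
esxcli software vib get -n nsx-opsagent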
Despite these error messages, all hosts may appear healthy and successfully remediated from the vCenter side.
The Resolve option in the NSX UI does not clear the hosts' Install Failed or Install Skipped status.
VMware NSX 4.2.1.2
These errors occur when a PSOD or host crash interrupts the NSX installation process initiated through NSX Manager while vLCM is managing the cluster image.
After the PSOD and host recovery, vCenter completes the host image remediation successfully, but NSX Manager may retain a stale or incomplete task state, resulting in “Install Failed” or “Install Skipped” statuses for the affected hosts.
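To see what NSX Manager currently believes the host's state to be before applying the workaround, the transport node objects can be queried directly. The following is a minimal sketch, assuming the NSX Manager (MP) API transport-node endpoints and placeholder values for the manager address, admin password, and node ID:
# List transport nodes and find the affected host's node ID
curl -k -u 'admin:<password>' https://<nsx-manager>/api/v1/transport-nodes
# Check the realized state of that transport node
curl -k -u 'admin:<password>' https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>/state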
If the remediation has already completed successfully on the vCenter side, and selecting Resolve on the cluster in the NSX UI does not clear the hosts' Install Failed or Install Skipped status, perform the workaround below.
Workaround:
The following procedure prompts NSX Manager to query vCenter for the current host compliance state and synchronize its view of the affected hosts accordingly.
To safely perform this procedure:
1. Place one host into Maintenance Mode.
2. Migrate the host out of the cluster to the Datacenter level in vCenter.
3. In the NSX Manager UI, navigate to System → Fabric → Hosts → Other Nodes.
4. If the host still shows Install Skipped or Install Failed, select the host and choose Remove NSX.
5. If the removal fails or the host shows as Orphaned, reselect the host, enable Force Remove, and retry (see the API sketch after these steps).
6. Wait until the host shows as Not Configured in the NSX UI.
7. Move the host back into the cluster, keeping it in Maintenance Mode.
8. Monitor the NSX Manager UI for installation progress; the status should transition to SUCCESS.
9. After confirming success on one host, repeat this process for the remaining affected hosts in the cluster.
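If the UI-based Remove NSX or Force Remove in the steps above does not clean up the stale entry, the same cleanup is often performed against the NSX Manager API. This is only a sketch, assuming the MP API transport-node endpoints and placeholder values; force=true corresponds to the Force Remove option in the UI, and unprepare_host=false removes only the NSX Manager-side object without attempting a host-side uninstall:
# Identify the transport node ID of the affected host
curl -k -u 'admin:<password>' https://<nsx-manager>/api/v1/transport-nodes
# Force-delete the stale transport node entry
curl -k -u 'admin:<password>' -X DELETE 'https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>?force=true&unprepare_host=false'
# After the host is moved back into the cluster, monitor installation progress (the node ID may change after reinstallation)
curl -k -u 'admin:<password>' https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>/state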