vSAN Cluster Shutdown fails to place hosts into maintenance mode with the following error:
Task failed on 10.##.###.12 for (vim.fault.VsanFault) { msg = 'General vSAN error.', faultMessage = (vmodl.LocalizableMessage) [ (vmodl.LocalizableMessage) { key = '', message = 'hostsInMM' }, (vmodl.LocalizableMessage) { key = '', message = "Failed to run cmd: ['/bin/localcli', '++group=host/vim/tmp', 'system', 'maintenanceMode', 'set', '-e', 'true', '-m', 'noAction'], out: , err: Errors: \nGeneral vSAN error.\n\n" } ] }
VMware vSAN (All Versions)
The vSAN Shutdown Wizard failed to place hosts into maintenance mode because a vSphere Lifecycle Manager (vLCM) task was running on the cluster at the same time and was also placing hosts into maintenance mode.
From the host failing to enter maintenance mode via cluster shutdown:
/var/log/hostd.log
2025-12-11T17:08:52.026Z In(166) Hostd[2101384]: [Originator@6876 sub=Vimsvc.TaskManager opID=vim-cmd-e4-109c sid=52cd174e user=dcui] Task Created : haTask-ha-host-vim.HostSystem.enterMaintenanceMode-15801875
/var/log/vsansystem.log
2025-12-11T17:08:52.041Z In(166) vsansystem[2099653]: [vSAN@6876 sub=VsanSystemProvider opId=vim-cmd-e4-109c-3eff] newState: Decommissioned newMode: No Action
2025-12-11T17:08:52.061Z In(166) vsansystem[2099653]: [vSAN@6876 sub=VsanSystemProvider opId=vim-cmd-e4-109c-3eff] Complete. Status: Busy
From the host being placed into maintenance mode via vLCM:
/var/log/hostd.log
2025-12-11T17:08:41.382Z In(166) Hostd[2101270]: [Originator@6876 sub=Vimsvc.TaskManager opID=aa79b605-3b41-42eb-a394-############-6a-a-615a sid=5278eaa9 user=vpxuser:com.vmware.vcIntegrity] Task Created : haTask-ha-host-vim.HostSystem.enterMaintenanceMode-17025585
/var/log/vsansystem.log
2025-12-11T17:08:41.396Z In(166) vsansystem[2099610]: [vSAN@6876 sub=VsanSystemProvider opId=aa79b605-3b41-42eb-a394-############-6a-a-615a-31d4] Number of nodes in cluster: 3. Decommission mode: Ensure Accessibility
2025-12-11T17:19:01.263Z In(166) Hostd[2101270]: [Originator@6876 sub=Vimsvc.TaskManager opID=aa79b605-3b41-42eb-a394-############-6a-a-615a sid=5278eaa9 user=vpxuser:com.vmware.vcIntegrity] Task Completed : haTask-ha-host-vim.HostSystem.enterMaintenanceMode-17025585 Status success
From the vCenter logs:
/var/log/vmware/envoy/envoy-access-1.log
2025-12-11T17:08:41.380Z In(166) envoy-access[2100506]: POST /hgw/host-1181779/vpxa HTTP/1.1 200 via_upstream - 1075 470 0 0 0 10.101.7.135:32858 TLSv1.2 10.17.201.11:443 127.0.0.1:40867 - 127.0.0.1:8089 "aa79b605-3b41-42eb-a394-############-6a" "EnterMaintenanceModeVpxa_Task"
2025-12-11T05:54:50.456-05:00 info vpxd[2288783] [Originator@6876 sub=vpxLro opID=2c64a74b] [VpxLRO] -- BEGIN lro-1006904248 -- host-465660 -- vim.HostSystem.updateVmEvacuationActionOnHostUpgrade -- 52133e43-9e15-89f5-f21f-############(52793725-fa84-efb7-69c2-############)
2025-12-11T05:54:50.445-05:00 info vmware-vum-server[3255463] [Originator@6876 sub=EHP] Started: health check query for [host-465660], perspective: [BEFORE_ENTER_MAINTENANCE]
2025-12-11T05:54:50.445-05:00 info vmware-vum-server[3255463] [Originator@6876 sub=EHP] Got all services.
2025-12-11T05:54:50.455-05:00 info vmware-vum-server[3255463] [Originator@6876 sub=EHP] [host-465660] Updating the VM evacuation action.
2025-12-11T05:54:50.457-05:00 info vmware-vum-server[3255463] [Originator@6876 sub=EHP] [host-465660] Successfully updated evacuation action.
2025-12-11T05:54:50.468-05:00 info vmware-vum-server[3255463] [Originator@6876 sub=EHP] Disabled checks:
2025-12-11T05:54:50.468-05:00 info vmware-vum-server[3255463] [Originator@6876 sub=EHP] Excluded checks:
2025-12-11T05:54:50.468-05:00 info vmware-vum-server[3255463] [Originator@6876 sub=EHP] Filtering health checks in provider [esx]...
2025-12-11T05:54:50.468-05:00 info vmware-vum-server[3255463] [Originator@6876 sub=EHP] CheckContext: {entityMoId: "host-465660", vapiSession: "c3fe3e0f728a6d65842fa7cc21ba101b1cc8dedc", env: {"Host part of VMC": false, "vLCM-VMC integration, Pod service enabled": false, }}, HostCheckContext: {spec: {{ com.vmware.esx.health.hosts.check_spec : { evacuation_action : Optional< DO_NOT_CHANGE_VMS_POWER_STATE>, exclude_checks : [ ] , hosts : Optional< >, maintenance_mode_type : Optional< >, memory_reservation : Optional< >, perspective : BEFORE_ENTER_MAINTENANCE, target_spec : Optional< {{ com.vmware.esx.health.hosts.target_spec : { state_changes : [ {{ map-entry : { key : VMware-HBR-Agent, value : UPGRADE, } }} , ] , } }} >, upgrade_actions : Optional< >, vsan_streched_cluster : Optional< domain-c465668>, } }} }
Note: The preceding log excerpts are only examples. Date, time, and environmental variables may vary depending on your environment.
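For illustration only (this is not a supported VMware tool), the condition described above can be confirmed by correlating the "Task Created" and "Task Completed" entries for enterMaintenanceMode tasks in hostd.log: the failure occurs when two task windows overlap in time. A minimal Python sketch, with the log line format assumed from the excerpts above:

```python
import re
from datetime import datetime

# Matches hostd.log task lifecycle lines such as:
#   2025-...Z In(166) Hostd[...]: [... opID=... ...] Task Created : haTask-ha-host-vim.HostSystem.enterMaintenanceMode-NNN
TASK_RE = re.compile(
    r"^(?P<ts>\S+)\s.*opID=(?P<opid>\S+)\s.*"
    r"Task (?P<event>Created|Completed) : "
    r"(?P<task>haTask-ha-host-vim\.HostSystem\.enterMaintenanceMode-\d+)"
)

def find_overlapping_mm_tasks(lines):
    """Return pairs of enterMaintenanceMode task IDs whose lifetimes overlap."""
    windows = {}  # task id -> [created timestamp, completed timestamp or None]
    for line in lines:
        m = TASK_RE.match(line)
        if not m:
            continue
        ts = datetime.fromisoformat(m["ts"].replace("Z", "+00:00"))
        if m["event"] == "Created":
            windows[m["task"]] = [ts, None]
        elif m["task"] in windows:
            windows[m["task"]][1] = ts
    overlaps = []
    tasks = list(windows.items())
    for i, (a, (a_start, a_end)) in enumerate(tasks):
        for b, (b_start, b_end) in tasks[i + 1:]:
            # A missing "Task Completed" entry means the task is still running
            # (or failed), so treat its window as open-ended.
            if (a_end is None or b_start <= a_end) and \
               (b_end is None or a_start <= b_end):
                overlaps.append((a, b))
    return overlaps
```

In this incident, the vLCM task (opID aa79b605-...) ran from 17:08:41 to 17:19:01, and the Shutdown Wizard's task (opID vim-cmd-e4-109c) was created at 17:08:52 inside that window, so the two would be reported as overlapping. When gathering lines from multiple hosts, collect each host's hostd.log separately and combine the lines before running the check.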
The vSAN Cluster Shutdown Wizard does not support multiple hosts entering maintenance mode simultaneously, regardless of the maintenance mode option chosen or of how the other host is being placed into maintenance mode (e.g., user initiated, vLCM, script, etc.). If Cluster Shutdown detects another host in the cluster entering maintenance mode, the cluster shutdown process will fail.
Before shutting down the vSAN cluster for planned maintenance, ensure no other tasks are running on the cluster that require placing hosts into maintenance mode, such as an ESXi upgrade, vSphere Replication/SRM (vLSR), etc.
If you have hit this issue, ensure all other tasks that require placing a host into maintenance mode have completed, then click "Resume Shutdown" to complete the cluster shutdown process.