When attempting to create a snapshot of a VM residing on a vSAN datastore, the operation fails with one of the following errors:
There are currently 1 usable fault domains. The operation requires 2 more usable fault domains. An error occurred while taking a snapshot: Out of resources.
Or
No disks of required version is present in the cluster for this operation to succeed. An error occurred while taking a snapshot: Out of resources.
The clomd.log on the affected ESXi host contains entries similar to the following:
2025-04-15T14:47:00.776Z No(29) clomd[2098465]: [Originator@6876 opID=1804296191] CLOMLogConfigurationPolicy: Object size 322122547200 bytes with policy: (("stripeWidth" i1) ("cacheReservation" i0)("proportionalCapacity" (i0 i100)) (
"hostFailuresToTolerate" i1) ("forceProvisioning" i0) ("spbmProfileId" "aa6d5a82-1c88-45da-85d3-############") ("spbmProfileGenerationNumber" l+0) ("objectVersion" i20) ("CSN" l4584) ("SCSN" l4581) ("spbmProfileName" "vSAN Default Storage Policy"))
2025-04-15T14:47:00.776Z No(29) clomd[2098465]: [Originator@6876 opID=1804296191] CLOMGetMinMaxObjVersion: minVersion: 20 maxVersion: 20
2025-04-15T14:47:00.777Z Cr(26) clomd[2098465]: [Originator@6876 opID=1804296191] CLOM_CheckClusterResourcesForPolicy: Not enough Upper FD's available. Available: 2, needed: 3
2025-04-15T14:47:00.777Z Cr(26) clomd[2098465]: [Originator@6876 opID=1804296191] CLOM_GenerateObjectConfig: Cluster doesn't have resources for the current iteration: objVersion: 20 replicas: 1, stripes: 1
2025-04-15T14:47:00.777Z Cr(26) clomd[2098465]: [Originator@6876 opID=1804296191] CLOMGenerateNewConfig: Failed to generate a configuration: Not found
2025-04-15T14:47:00.777Z Cr(26) clomd[2098465]: [Originator@6876 opID=1804296191] CLOM_Diagnose: No disks of required version is present in the cluster for this operation to succeed.
2025-04-15T14:47:00.777Z Cr(26) clomd[2098465]: [Originator@6876 opID=1804296191] CLOMProcessWorkItem: Failed to generate configuration: Underlying device has no free space
2025-04-15T14:47:00.777Z No(29) clomd[2098465]: [Originator@6876 opID=1804296191] CLOMProcessWorkItem: Op ends:1804296191
The log shows that the object's storage policy (hostFailuresToTolerate=1) requires three usable fault domains (two data replicas plus a witness), but CLOM can only find two, so it cannot generate a configuration for the new snapshot object. A host that is stuck in a vSAN decommission (Decom) state is not counted as a usable fault domain, which is what the Resolution below checks for.
Resolution
To resolve this, run the following command on one of the ESXi hosts in the vSAN cluster and confirm that no hosts are in a vSAN decommission (Decom) state:
echo "hostname,decomState,decomJobType";for host in $(cmmds-tool find -t HOSTNAME -f json |grep -B2 Healthy|grep uuid|awk -F \" '{print $4}');do hostName=$(cmmds-tool find -t HOSTNAME -f json -u $host|grep content|awk -F \" '{print $6}');decomInfo=$(cmmds-tool find -t NODE_DECOM_STATE -f json -u $host |grep content|awk '{print $3 $5}'|sed 's/,$//');echo "$hostName,$decomInfo";done|sort
Sample output:
hostname,decomState,decomJobType
esxi-1.example.com,0,0
esxi-2.example.com,0,0
esxi-3.example.com,0,0
A decomState or decomJobType value other than 0 indicates that a host is stuck in a vSAN Decom state, as in the example below (a query for inspecting a single host follows the output):
hostname,decomState,decomJobType
esxi-1.example.com,0,0
esxi-2.example.com,0,0
esxi-3.example.com,6,0 <---
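To look at the raw CMMDS record for a single suspect host, the NODE_DECOM_STATE query from the script above can be run on its own; the <host-uuid> placeholder below is illustrative and should be replaced with the UUID returned by the HOSTNAME query:

# Dump the NODE_DECOM_STATE entry for one vSAN node.
# On a healthy node, decomState and decomJobType are both 0.
cmmds-tool find -t NODE_DECOM_STATE -f json -u <host-uuid>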
If you find a host in a Decom state, place that host into maintenance mode using the 'No Action' option and then take it out of maintenance mode again, using either the host client or the vCenter UI, to clear the state; a command-line sketch follows below. See the KB article vSAN Host Maintenance Mode is not in sync with vSAN Node Decommission State.
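If you prefer to do this from the ESXi command line instead of the UI, a sketch like the following should work; the esxcli options shown (in particular --vsanmode noAction, which corresponds to the 'No Action' / no data migration choice) are given from memory, so verify them with esxcli system maintenanceMode set --help on your build before relying on them:

# Enter maintenance mode without evacuating any vSAN data ('No Action').
esxcli system maintenanceMode set --enable true --vsanmode noAction
# Confirm the host is now in maintenance mode.
esxcli system maintenanceMode get
# Exit maintenance mode to clear the stale decommission state.
esxcli system maintenanceMode set --enable false

After the host exits maintenance mode, rerun the decommission-state check above and confirm that decomState and decomJobType are back to 0 for all hosts.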
Workaround
See also: VMware vSAN Design Guide, 'Designing for Capacity Maintenance and Availability' (page 9).