"There is no more space for virtual disk .vmdk" error when starting vSAN VM

"There is no more space for virtual disk .vmdk" error when starting vSAN VM


Article ID: 326433


Updated On:


VMware vSAN


The purpose of this article is to explain the cause of the "There is no more space for virtual disk" error and to provide information to resolve the issue.

You are unable to start up a vSAN virtual machine and you experience these symptoms:
  • The vSAN datastore reports sufficient total free space to provision the virtual machine.
  • The Capacity Tier disks in use by the vSAN cluster ESXi hosts are greater than 255 GB in size.
  • You see a similar error in the vSphere Web Client:

    There is no more space for virtual disk <VM name>.vmdk. You might be able to continue this session by freeing disk space on the relevant volume and clicking retry.


VMware vSAN 6.x
VMware vSAN 6.2.x
VMware vSAN 6.5.x
VMware vSAN 6.6.x
VMware vSAN 7.0.x
VMware vSAN 8.0.x


The object creation fails due to non-compliance with the VM storage policy.


To resolve this issue, when applying a striping policy to objects, ensure that you have enough space to meet both the availability and striping requirements.

The reported total free space is raw capacity. To calculate the space required, you need to consider the storage policy that will be applied to the new objects.

In the vSAN default storage policy:
  • Failures To Tolerate (FTT) = 1
  • Stripes per object = 1
  • Object Space Reservation = 0
Because each object requires double its size in raw capacity plus a small metadata overhead, creating an object fails with the error outlined above if the reported raw free space is less than twice the new object's size plus that overhead.
For example:
  • Raw free capacity = 3000 GB.
  • Required VM disk size = 250 GB.
  • Required raw free space = 500 GB + overhead.
However, because Object Space Reservation is 0 (thin provisioned), you may be able to over-commit the capacity. This will introduce a risk of running out of capacity tier space as the provisioned objects grow.
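The raw-space calculation above can be sketched as follows. This is a minimal illustration of the mirroring arithmetic only; the function name and the flat `overhead_gb` parameter are assumptions for the example, and real vSAN metadata overhead varies.

```python
def required_raw_space_gb(object_size_gb, failures_to_tolerate=1, overhead_gb=0):
    """Approximate raw capacity a RAID-1 (mirroring) vSAN object consumes.

    Each failure tolerated adds one full replica, so an FTT=1 object
    consumes roughly double its size, plus a small metadata overhead.
    Illustrative only -- not vSAN's actual sizing logic.
    """
    replicas = failures_to_tolerate + 1
    return object_size_gb * replicas + overhead_gb

# Example from the article: a 250 GB VM disk with the default policy (FTT=1)
print(required_raw_space_gb(250))  # 500 (GB, plus overhead in practice)
```

So a 3000 GB raw free capacity comfortably covers the 500 GB of raw space this 250 GB object needs, provided that space is usable given component placement rules.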
Note that with the default policy, an object that is 255 GB or smaller in size will have 2 components (replicas) placed on separate vSAN hosts. The components for this object are not split into smaller chunks because the Stripe Width is 1. However, if the smallest free space available on any capacity tier disk is less than the object size, CLOMD will attempt one or more iterations to split the replicas into smaller, equal stripes that best fit the free space on the capacity tier disks.
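The splitting behaviour described above can be illustrated with a simplified sketch. This is an illustration of the concept, not CLOMD's actual algorithm, and it ignores the per-stripe metadata overhead mentioned in the example below.

```python
import math

def stripes_needed(object_size_gb, smallest_free_gb):
    """Illustrative only: the minimum number of equal stripes a replica
    must be split into so that each stripe fits on the most-constrained
    capacity tier disk."""
    if smallest_free_gb <= 0:
        raise ValueError("no free space on capacity tier")
    return math.ceil(object_size_gb / smallest_free_gb)

# A 250 GB replica where the smallest capacity disk has 50 GB free
# must be split into at least 5 stripes of at most 50 GB each.
print(stripes_needed(250, 50))  # 5
```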

A factor to consider is how well the components are balanced across the capacity tier disks. If some disks are utilized more than others, the free space is not evenly distributed across the capacity tier disks in the cluster. This results in smaller stripes sized to fit the most constrained disks, and some stripes may reside on the same disk if that disk has more free space than the others. Because striping here is done for space allocation rather than performance optimization, it is acceptable for stripes to be co-located on the same capacity tier disk.
For example: Consider that you have a 3 node cluster with one disk group per node:
  • Each disk group has 4 capacity tier disks.
  • The free space on each disk is 50GB.
  • This makes the total free RAW capacity 600GB (4 disks per Disk Group x 50GB per disk x 3 nodes).
  • The best fit for a 250GB object would be less than 50GB per stripe (to account for metadata overhead).
The issue with this calculation is that the 2 mirrors of a RAID-1 object cannot reside on the same host. Therefore, if the first replica is spread over 2 nodes, the mirror must reside on the 3rd node alone, which does not have sufficient space for it. This is one of the reasons it is a vSAN best practice to use a 4-node cluster even though the minimum requirement is 3 nodes.
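The placement constraint in the 3-node example can be checked numerically. This is a simplified sketch under stated assumptions: each host is one fault domain, free space per host is pooled, and metadata overhead is ignored.

```python
from itertools import combinations

def can_place_raid1(object_size_gb, free_per_host_gb):
    """RAID-1 with FTT=1 needs two replicas whose components never share
    a host. Check whether the hosts can be split into two disjoint
    groups, each with enough total free space for one full replica.
    (Simplified: ignores per-disk limits and metadata overhead.)"""
    n = len(free_per_host_gb)
    idx = set(range(n))
    for r in range(1, n):
        for group in combinations(idx, r):
            a = sum(free_per_host_gb[i] for i in group)
            b = sum(free_per_host_gb[i] for i in idx - set(group))
            if a >= object_size_gb and b >= object_size_gb:
                return True
    return False

# 3 nodes with 200 GB free each (600 GB raw): a 250 GB FTT=1 object
# cannot be placed -- if one replica spans 2 hosts, its mirror is
# confined to the remaining host, which has only 200 GB free.
print(can_place_raid1(250, [200, 200, 200]))  # False

# A 4th identical node makes placement possible (2 hosts per replica).
print(can_place_raid1(250, [200, 200, 200, 200]))  # True
```

This is why 600 GB of raw free capacity is not enough for a 250 GB object here, even though the mirroring arithmetic alone suggests only 500 GB is required.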
In summary, the reported raw free space is the combination of all free space on all Capacity Tier disks in all hosts which does not necessarily mean that all that space is usable.

Additional Information