The virtual machine becomes inaccessible with the error "There is no more space for virtual disk <VM name>.vmdk. You might be able to continue this session by freeing disk space on the relevant volume and clicking retry"

Article ID: 326433

Updated On:

Products

VMware vSAN

Issue/Introduction

  • The vSAN datastore reports sufficient total free space to provision the virtual machine.

  • The following warning is displayed on the VM Summary tab:
    There is no more space for virtual disk <VM name>.vmdk. You might be able to continue this session by freeing disk space on the relevant volume and clicking retry. Click Cancel to terminate this session.


Environment

VMware vSAN 6.x
VMware vSAN 7.x
VMware vSAN 8.x

Cause

The vSAN cluster has reached critical space utilization of more than 93%. Although cluster-wide free space appears to be available, individual capacity disks have exhausted their space, causing critical VM operations such as power-on, snapshot handling, and I/O to fail.
 

Cause Justification: 

  • The /var/run/log/vmkernel.log file reports "No space left on device" errors:
    YYYY-MM-DDTHH:MM.SSSZ In(182) vmkernel: cpu54:2097677)FS3DM: 3012: status No space left on device zeroing 1 extents(1048576 each)
    YYYY-MM-DDTHH:MM.SSSZ In(182) vmkernel: cpu54:2097677)FS3J: 3320: Cancelling txn (0x43144b284200) callerID: 0xc1d00006 due to failure pre-committing: No space left on device
    YYYY-MM-DDTHH:MM.SSSZ In(182) vmkernel: cpu70:2097677)BC: 608: write to vmware.log (f530 28 3 6731d29c 1a441f2a df48295e 40a99337 1ec053c4 44ce 0 0 0 0 0) 173896 bytes failed: No space left on device

  • Validate the disk utilization on each ESXi host in the cluster.

    Command to check per-disk utilization from an ESXi shell (a shorter sketch follows the example output below):

    l() { python -c "print('-' * 115)" ; };cmmds-tool find -t HOSTNAME -f json | egrep "uuid|hostname" | sed -e 's/\"content\"://g' | awk '{print $2}' | sed -e 's/[\",\},\,]//g' | xargs -n 2 | while read hostuuid hostname; do l;echo -e " hostname: $hostname --- host UUID: $hostuuid"; l;echo -e " Disk Name\t\t| Disk UUID\t\t                | Disk Usage     | Disk Capacity | Usage Percentage";l; cmmds-tool find -f json -t DISK -o $hostuuid | egrep "uuid|content" |  awk 'NR%2{printf "%s ",$0;next;}1' | sed -e 's/[\",\},\]//g' | awk '{printf $0}' | sed -e "s/uuid: /\n uuid: /g" | grep -v "maxComponents: 0"|  awk '{print $37 " " $2 " " $5}' | while read disknaa diskuuid diskcap; do if [ "$diskcap" -gt 0 ]; then diskcapused=$(cmmds-tool find -f json -t DISK_STATUS -u $diskuuid |grep content |sed -e 's/[\",\},\]//g' | awk '{print $3}'); diskperc=$(echo "$diskcapused $diskcap" | awk '{print $1/$2*100}') ; echo -en " $disknaa\t| $diskuuid\t| $diskcapused\t | $diskcap\t | $diskperc %\n"; fi ; done ; done; l

    Example output for a few disks:
     Disk Name              | Disk UUID                             | Disk Usage     | Disk Capacity         | Usage Percentage
     naa.################:2 | ########-####-####-####-############  | 1800343629332  | 1800350466048         | 99.9996 %
     naa.################:2 | ########-####-####-####-############  | 1800343629332  | 1800350466048         | 99.9996 %
     naa.################:2 | ########-####-####-####-############  | 1770765397524  | 1800350466048         | 98.3567 %
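
    A quicker, higher-level check is sketched below. It assumes an ESXi shell on a vSAN 6.6 or later host; the grep and esxcli vsan debug commands shown are standard, but their output fields vary between releases, so treat this as a starting point rather than a replacement for the detailed command above.

    # Confirm the out-of-space errors in the live vmkernel log:
    grep "No space left on device" /var/run/log/vmkernel.log | tail -5

    # Per-disk capacity and usage as seen by vSAN:
    esxcli vsan debug disk list

    # Cluster-level checks, including the lowest per-disk free space:
    esxcli vsan debug limit get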

Resolution

Solution Recommendation: 

  • Increase vSAN capacity to ensure the vSAN datastore has enough free space to accommodate the objects (an example command follows this list):
    • Add additional capacity to the vSAN cluster.
    • Expand existing disk groups by adding disks.
    • Add new ESXi hosts with disks and create disk groups on them.
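
  On clusters that use disk groups (vSAN OSA), the disk-group expansion can also be done from an ESXi shell. The commands below are a minimal sketch: the naa.* identifiers are placeholders, so list the available devices first, and prefer the vSphere Client workflow on vCenter-managed clusters.

    # List devices already claimed by vSAN and the local devices available on the host:
    esxcli vsan storage list
    esxcli storage core device list

    # Claim a cache device and a capacity device into a disk group
    # (replace the placeholder identifiers with real devices):
    esxcli vsan storage add -s naa.cache_device_id -d naa.capacity_device_id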

Workaround to resolve the issue immediately:

  • Free up existing space on the vSAN datastore (see the steps under Additional Information below).

Additional Information

In the vSAN default storage policy:
  • Failures To Tolerate (FTT) = 1
  • Stripes per object = 1
  • Object Space Reservation = 0
Each new object therefore consumes roughly double its provisioned size in raw capacity, plus a small metadata overhead. If the reported usable raw free space is less than this, attempts to create the object fail with the error outlined above.
 
For example:
  • Raw free capacity = 3000 GB.
  • Required VM disk size = 250 GB.
  • Required raw free space = 500 GB + overhead.
However, because Object Space Reservation is 0 (thin provisioning), you may be able to over-commit capacity. This introduces the risk of running out of capacity-tier space as the provisioned objects grow; a quick way to estimate the required raw space is shown below.
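
As a minimal sketch of the arithmetic above (the VMDK size and FTT value are illustrative, and real objects add a small metadata overhead on top):

  # Rough raw-capacity estimate for a mirrored (RAID-1) object:
  VMDK_GB=250   # provisioned size of the new virtual disk
  FTT=1         # Failures To Tolerate in the storage policy
  echo "Approximate raw space required: $(( VMDK_GB * (FTT + 1) )) GB plus metadata overhead"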
 
Note that, with the default policy, an object of 255 GB or less has two components (replicas) placed on separate vSAN hosts. The components are not split into smaller chunks because the stripe width is 1. However, if the smallest free space available on any capacity-tier disk is less than the object size, CLOMD attempts one or more iterations to split the replicas into smaller, equal stripes that best fit the free space on the capacity-tier disks.
 

Workaround to resolve the issue immediately

Free up existing space 

  1. Identify and delete non-critical or unused VMs and virtual disks.

  2. Migrate selected VMs to another cluster or to a non-vSAN datastore.

  3. Clean up unassociated objects after checking the prerequisites in the document below (an example RVC command follows):

     Procedures for identifying Unassociated vSAN objects
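
A hedged sketch for locating candidate unassociated objects from RVC (Ruby vSphere Console) is shown below. The cluster path is a placeholder, the option syntax can vary slightly between RVC releases, and nothing should be deleted until it has been verified against the referenced procedure.

  # From an RVC session connected to vCenter; <datacenter> and <cluster> are placeholders.
  vsan.obj_status_report -t /localhost/<datacenter>/computers/<cluster>
  # Objects reported without an associated VM are cleanup candidates,
  # but confirm each one against the referenced procedure before deleting it.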