vCenter service restart fails with the below error:
Error: Failed to start services in profile ALL. RC=1, stderr=Failed to start trustmanagement, updatemgr, vsan-health, topologysvc, pschealth, analytics, vsm services. Error: Operation timed out
Running diagnostic tools such as the vSphere Diagnostic Tool (VDT) fails with a "No space left on device" message.
The output of the command "df -i" shows 100% usage for the /storage/log partition, whereas the "df -h" output shows available space.
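The block-usage versus Inode-usage discrepancy can be confirmed side by side. A minimal sketch, assuming a GNU/Linux df; /storage/log is the partition in question on vCenter, and "/" is used as a default so the snippet runs anywhere:

```shell
# Compare block usage (df -h) with Inode usage (df -i) for one mount point.
# -P forces POSIX single-line output so awk field positions are stable.
mp="${1:-/}"
blocks=$(df -hP "$mp" | awk 'NR==2 {print $5}')
inodes=$(df -iP "$mp" | awk 'NR==2 {print $5}')
echo "blocks used on $mp: $blocks"
echo "inodes used on $mp: $inodes"
```

On an affected vCenter, running this against /storage/log would show a low block percentage but 100% Inode usage.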
VMware vCenter Server 8.0
vCenter generates warnings regarding Inode consumption; if these warnings are ignored and Inodes are fully consumed, services will fail to start. This issue occurs due to Inode exhaustion (100% usage) on the /storage/log partition.
A potential cause is multiple ESXi hosts being configured to forward syslog data directly to the vCenter Server. This creates a non-default directory (/storage/log/vmware/esx/) containing thousands of small sub-directories and files for each host managed by vCenter. Even if the total disk capacity (GB) is not full, the sheer volume of individual files consumes all available Inodes, preventing vCenter services from starting.
NOTE: Before proceeding, take a full backup or offline snapshot of the vCenter Server(s). If using Enhanced Linked Mode, ensure all linked vCenter Servers are backed up. Refer to the KB article VMware vCenter in Enhanced Linked Mode pre-changes snapshot (online or offline) best practice.
1. Log in to the vCenter Server via SSH and run the following command to check Inode usage:
df -i
2. Run the following command from /storage/log to identify directories containing more than 100 files, sorted by count:
cd /storage/log && find ./ -type d -exec sh -c 'echo -n "{}: " && find "{}" -type f | wc -l' \; | awk '$2 > 100' | sort -k2,2nr
3. There will likely be one path with an extremely high Inode count. Check the contents of the top directories.
If /storage/log/vmware/esx appears at the top of the list, this confirms the syslog misconfiguration. If that is the case, follow the steps below.
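As a cross-check of the find/awk pipeline above, a GNU coreutils du that supports --inodes (an assumption; present on recent builds) can count Inodes per directory directly:

```shell
# Count Inodes consumed per first-level directory under a given path and
# list the five heaviest consumers. /storage/log is the partition from the
# steps above; any readable path works for a dry run on another system.
target="${1:-/storage/log}"
du --inodes -d 1 "$target" 2>/dev/null | sort -nr | head -5
```

On an affected vCenter, /storage/log/vmware/esx would be expected near the top of this output.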
1. Stop the influx of new files before cleaning the partition (for example, by reconfiguring the affected ESXi hosts so they no longer forward syslog data to the vCenter Server); otherwise, the Inodes will fill up again immediately.
2. On the vCenter SSH session, remove the directory causing the exhaustion:
rm -rf /storage/log/vmware/esx
3. Verify Inode space availability:
df -i
4. Restart all vCenter services:
service-control --stop --all && service-control --start --all
Another possibility is that the /storage/log directory contains tens of thousands of 0-byte or stale .gz files (e.g., vmware-vsan-health-service-xxxxx.log.gz). If that is the case, free up Inode space by removing them with the command below.
find /storage/log -name 'vmware-vsan-health-service-*.log.gz' -print0 | xargs -0 rm -f
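Step 1 of the remediation above requires stopping the syslog influx first. On each affected ESXi host, this is typically done by repointing the syslog target away from vCenter; a hedged sketch (the loghost value shown is illustrative, not a recommendation of a specific collector):

```shell
# Run on each affected ESXi host, not on the vCenter Server.
# Show the current syslog target to confirm it points at vCenter:
esxcli system syslog config get
# Point syslog at a dedicated log collector instead
# (udp://syslog.example.com:514 is an illustrative placeholder):
esxcli system syslog config set --loghost='udp://syslog.example.com:514'
# Apply the new configuration:
esxcli system syslog reload
```

These are configuration commands intended for an ESXi shell; verify the chosen loghost against your logging infrastructure before applying.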