vSphere Cluster Services
vCLS is a mandatory feature that is deployed on every vSphere cluster when vCenter Server is upgraded to vSphere 7.0 Update 1 or after a fresh deployment of vSphere 7.0 Update 1. The ESXi hosts can be any older version that is compatible with vCenter Server 7.0 Update 1. For more information, see the vSphere Cluster Services (vCLS) section of the vSphere Resource Management Guide.
As explained in the documentation, 1 to 3 vCLS VMs run on each vSphere cluster, depending on the size of the cluster. vSphere DRS in a DRS-enabled cluster depends on the availability of at least 1 vCLS VM. Unlike workload/application VMs, vCLS VMs should be treated as system VMs. Do not perform any operations on these VMs unless guided by VMware Support or unless the operation is explicitly listed as supported in the documentation.
There is no way to disable vCLS on a vSphere cluster and still have vSphere DRS remain functional on that cluster. However, should it be necessary, you can disable vCLS on a cluster by following the Retreat Mode steps, which impacts some of the cluster services for that cluster.
Reference: How to Disable vCLS on a Cluster via Retreat Mode
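For illustration only, the sketch below shows one way the Retreat Mode setting could be applied programmatically with pyVmomi, by updating a vCenter advanced option of the form config.vcls.clusters.domain-c<number>.enabled described in the referenced article. The hostname, credentials, and cluster domain ID are placeholders; follow the referenced article for the supported procedure.

```python
# Minimal sketch, assuming pyVmomi is installed and the vCenter details
# below are replaced with real values. The cluster domain ID ("domain-c8"
# here) is the cluster's managed object ID, visible in the vSphere Client
# URL when the cluster is selected.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",            # placeholder
                  user="administrator@vsphere.local",    # placeholder
                  pwd="changeme",                        # placeholder
                  disableSslCertValidation=True)
try:
    # Setting the option to "false" enables Retreat Mode for the cluster;
    # setting it back to "true" re-enables vCLS.
    option = vim.option.OptionValue(
        key="config.vcls.clusters.domain-c8.enabled",    # placeholder ID
        value="false")
    si.content.setting.UpdateOptions(changedValue=[option])
finally:
    Disconnect(si)
```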
This feature has two revisions. The first, introduced in vSphere 7.0 Update 1, is known as "External vCLS". It will be deprecated in future versions of vSphere. The second, introduced in vSphere 8.0 Update 3, is known as "Embedded vCLS". These versions serve the same overall purpose, but have different runtimes, leading to differences in behaviors and supported operations.
Size of the vCLS VMs
vSphere Cluster Service VMs are very small compared to workload VMs. Each consumes 1 vCPU, 128 MB of memory, and about 500 MB of storage. The table below shows the specification of these VMs:
| Property | Value |
| --- | --- |
| Memory | 128 MB |
| Memory Reservation | 100 MB |
| Swap Size | 256 MB |
| CPU | 1 |
| CPU Reservation | 100 MHz |
| Hard Disk | 2 GB |
| Ethernet Adapter | 0 (no NIC) |
| VMDK Size | ~245 MB |
| Storage Space | ~480 MB |
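As an illustration, a short pyVmomi sketch like the one below could be used to compare the reservations of the vCLS VMs in an environment against the table above. The connection details are placeholders, and the name-prefix match is a simplification; see the Partners Impact section below for more on identifying vCLS VMs.

```python
# Sketch: print the CPU/memory configuration and reservations of VMs whose
# names start with "vCLS", for comparison with the specification table.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",            # placeholder
                  user="administrator@vsphere.local",    # placeholder
                  pwd="changeme",                        # placeholder
                  disableSslCertValidation=True)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.name.startswith("vCLS"):
            cpu = vm.config.cpuAllocation
            mem = vm.config.memoryAllocation
            print(f"{vm.name}: {vm.config.hardware.numCPU} vCPU "
                  f"({cpu.reservation} MHz reserved), "
                  f"{vm.config.hardware.memoryMB} MB RAM "
                  f"({mem.reservation} MB reserved)")
    view.DestroyView()
finally:
    Disconnect(si)
```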
vCLS During Infrastructure Maintenance
- Cluster compute maintenance (for more details, see Automatic power-off of vCLS VMs during maintenance mode)
  - When there is only 1 host - vCLS VMs are automatically powered off when the single-host cluster is put into Maintenance Mode, so the maintenance workflow is not blocked.
  - When there are 2 or more hosts - In a vSphere cluster with more than 1 host, if the host being considered for maintenance has running vCLS VMs, those VMs are migrated to other hosts, provided the other hosts have free resources and storage connectivity (shared storage). If these VMs cannot be migrated because other hosts lack free resources, or because the VMs are placed on a local datastore, they are powered off automatically so that the host Maintenance Mode operation can proceed (see the sketch after this list). As stated earlier, vSphere DRS is not functional in a cluster unless at least 1 vCLS VM is running in that cluster.
- If you are decommissioning a cluster, put all the hosts into Maintenance Mode before deleting the cluster so that the vCLS VMs are cleaned up properly. If you delete the cluster without placing the hosts in Maintenance Mode, stale vCLS VMs are left running on the hosts and cause issues when those hosts are later added to a new cluster.
- Disconnect Host - When a host is disconnected, its vCLS VMs are not cleaned up because the disconnected host is unreachable. New vCLS VMs are not created on the other hosts of the cluster, because it is not known how long the host will remain disconnected. When the disconnected host is reconnected, the vCLS VM on that host is registered again in the vCenter inventory. If a disconnected host is removed from the inventory, new vCLS VMs may be created on other hosts of the cluster if quorum is not reached.
- Datastore maintenance - For more information, see Impact of vSphere Cluster Services on storage workflows.
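The sketch below (referenced from the list above) illustrates requesting Maintenance Mode for one host with pyVmomi; vCenter itself migrates or powers off the vCLS VMs as described above, so no vCLS-specific steps are needed. Hostnames and credentials are placeholders.

```python
# Sketch: put one ESXi host into Maintenance Mode. vCenter handles the
# vCLS VM migration/power-off automatically as part of this operation.
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",            # placeholder
                  user="administrator@vsphere.local",    # placeholder
                  pwd="changeme",                        # placeholder
                  disableSslCertValidation=True)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view
                if h.name == "esxi01.example.com")       # placeholder
    view.DestroyView()
    # timeout=0 means no timeout; the task completes once the host has
    # entered Maintenance Mode.
    WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))
finally:
    Disconnect(si)
```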
Other VMware Product Interop
- SRM - Planned migration
SRM 8.3.1 is not supported with vSphere 7.0 Update 1.
- VMware Aria Operations
- Capacity reclaim - The capacity optimization workflow of vRealize Operations Manager might detect vCLS VMs as idle VMs and include them in recommendations for reclaiming capacity. If vCLS VMs are deleted as part of the reclaim workflow, the vCLS service recreates them. The vCLS status for that cluster might temporarily turn unhealthy if DRS runs before the VMs are brought back up. For more information, see the Reclaim section of the VMware Aria Operations documentation. The recommended option is to exclude these VMs from the capacity reclaim workflow. The VMs can be identified by their names (vCLS) or by the additional properties explained in the documentation.
- Cross-cluster services - vRealize Operations Manager Workload Placement (WLP) workflows that recommend workload placement might be impacted if DRS is not functional on the cluster due to unhealthy vCLS.
- vRealize Automation
vCLS should not impact vRealize Automation workflows. vCLS VMs can be identified via API as described in the Partners Impact section below.
- Products/solutions without any interop issues
- VMware Cloud Foundation - Cloud Builder and SDDC Manager are not impacted; vRA, vROps, and vSAN impact is addressed above
- NSX Data Center for vSphere
- NSX-T Data Center
- vCPP
- vCD
- vCDA
- VxRail
- Horizon Enterprise
Partners Impact
vCLS should not impact any partner workflows such as backup, monitoring, etc. Since these VMs are managed by vCLS, there is no reason to configure backup for them; restoring them from backup during a recovery operation is unnecessary and might fail. These VMs can be differentiated via API using the additional properties below.
This is the most reliable way to identify a general vCLS VM. A previous version of this article also included using the VM's "ManagedByInfo" to identify it as vCLS. This was not incorrect, but it only works for External vCLS, as Embedded vCLS uses a different value. For more extensive information on identifying vCLS VMs and differentiating their type, refer to Script Identification for Embedded vCLS has Changed Identifiers Including ManagedByInfo.