VMware vSphere 7.x provides flexibility and choice in architecting high-availability solutions using Windows Server as the guest operating system (OS). vSphere 7.x supports Windows Server Failover Clusters (WSFC) with shared (clustered) disk resources by transparently passing SCSI-3 Persistent Reservation (SCSI3-PR) commands to the underlying storage, or by emulating them at the datastore level. These commands are required for a WSFC node (a VM participating in a WSFC, referred to below as a VM node) to arbitrate access to a shared disk. As a general best practice, ensure that each node participating in a WSFC has the same configuration.
Note: Clustered (“shared”) VMDKs are now available with vSphere 7.0
Information in this article is applicable to configurations where VMs hosting nodes of a WSFC (VM nodes) are located on different ESXi hosts – known as "Cluster Across Boxes" (CAB). CAB provides High Availability (HA) from both the in-guest and the vSphere environment perspective. VMware does not recommend configurations where all VM nodes are placed on a single ESXi host (a so-called "Cluster-in-a-Box", or CIB). The CIB solution should not be used for any production implementation – if the single ESXi host fails, all cluster nodes are powered off and, as a result, your application experiences downtime.
This KB assumes that all components underneath the WSFC-based VMs are architected to provide the proper availability and conform to Microsoft’s supportability as it relates to the in-guest configuration of this article.
Note: A single WSFC consisting of both physical nodes and virtual machines is supported. For more information, see the Cluster Physical and Virtual Machines section in Setup for Windows Server Failover Clustering.
This article provides guidelines and vSphere support status for guest deployments using Microsoft Windows Server Failover Clusters (WSFCs) with shared disk resources across nodes in CAB configuration on VMware vSphere 7.x.
Table 1 shows the versions of Windows OS and VMware vSphere qualified by VMware. VMware does not impose any limitations, nor require certification, for applications using WSFC on a supported Windows platform; therefore, any application running on a supported combination of vSphere and Windows OS is supported with no additional considerations.
Note: Other WSFC-based solutions that do not access shared disks (Microsoft SQL Server Always On Availability Groups (AGs) or Microsoft Exchange Database Availability Groups (DAGs)) require no special storage configuration on the vSphere side (VMFS or NFS). This article should not be used for such configurations.
Table 1. Versions of Windows Server Supported by vSphere for a WSFC
Windows Server Version | Minimum vSphere Version | Maximum Number of WSFC Nodes with Shared Storage Supported by ESXi |
2022 | vSphere 7.0 | 5 |
2019 | vSphere 7.0 | 5 |
2016 | vSphere 7.0 | 5 |
2012 / 2012 R2 | vSphere 7.0 | 5 |
SQL Server 2016, 2017, and 2019 Failover Cluster Instances (FCIs) were used to validate WSFC functionality on the vSphere and Windows Server versions listed in this table.
If the cluster validation wizard completes with the warning "Validate Storage Spaces Persistent Reservation," you can safely ignore the warning. This check applies to the Microsoft Storage Spaces feature, which does not apply to VMware vSphere.
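For reference, the validation report can also be generated from PowerShell on any WSFC node using the FailoverClusters module; a minimal sketch (node names below are placeholders):

```powershell
# Run the WSFC validation report from an elevated PowerShell session
# on any cluster node (requires the FailoverClusters module).
# The "Validate Storage Spaces Persistent Reservation" warning in the
# generated report can be safely ignored on vSphere.
Test-Cluster -Node "wsfc-node1","wsfc-node2"
```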
The following VMware vSphere features are supported for WSFC:
The following VMware vSphere features are NOT supported for WSFC:
Live vMotion (both user- and DRS-initiated) of VM nodes is supported in vSphere 7.x under the following requirements:
vSphere options for presenting shared storage to a VM node are shown in Table 2.
Table 2. Supported storage configuration options
vSphere Version | Shared Disk Options | SCSI Bus Sharing | vSCSI Controller Type | Storage Protocol / Technology |
vSphere 7.0 | Clustered VMDKs | physical | VMware Paravirtual (PVSCSI), LSI Logic SAS | FC |
vSphere 7.0 | VMware vSphere® Virtual Volumes (vVols), RDM physical mode | physical | VMware Paravirtual (PVSCSI), LSI Logic SAS | FC, FCoE, iSCSI |
VMware Cloud on AWS | Clustered VMDKs | physical | VMware Paravirtual (PVSCSI), LSI Logic SAS | N/A / vSAN |
vSAN (vSphere 7.0) | Clustered VMDKs | physical | VMware Paravirtual (PVSCSI), LSI Logic SAS | N/A / vSAN |
Requirements
VMware ESXi, VMware vCenter®, VMware vSphere VMFS
The datastore capability "Clustered VMDK support" must be enabled.
Note: Perennial reservations are required on all datastores with Clustered VMDK support enabled.
See Clustered VMDK datastore LUN check.
A maximum of 128 clustered VMDKs per ESXi host is supported.
Supports up to three (3) WSFC clusters (i.e., multi-cluster) running on the same ESXi host.
Mixing of clustered VMDKs and other types of clustered disks (e.g., pRDMs, vVol) in a single VM is not supported.
Placing all VM nodes of a WSFC on the same ESXi host (i.e., Cluster-in-a-Box (CiB)) is not supported.
VM nodes of a WSFC must be placed on different ESXi hosts (i.e., Cluster Across Boxes (CAB)). The placement must be enforced with DRS MUST anti-affinity rules.
Increase the following WSFC parameters in the guest OS (GOS):
(get-cluster -name <cluster-name>).QuorumArbitrationTimeMax = 60
(get-cluster -name <cluster-name>).SameSubnetThreshold = 10
(get-cluster -name <cluster-name>).CrossSubnetThreshold = 20
(get-cluster -name <cluster-name>).RouteHistoryLength = 40
Only datastores accessible via FC are currently supported.
Non-SCSI back-ends and non-SCSI virtual front-ends (e.g., NVMe, vNVMe) are not supported.
SCSI-2 reservations are not supported on clustered disks/datastores.
RDMs used as clustered disk resources must be added using physical compatibility mode.
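The CAB placement requirement above can be enforced from PowerCLI with a VM-VM anti-affinity rule (DRS treats VM-VM anti-affinity rules as mandatory for placement); a sketch, where the cluster and VM names are placeholders:

```powershell
# PowerCLI sketch: keep the two WSFC node VMs on different ESXi hosts
# (cluster and VM names below are placeholders).
New-DrsRule -Cluster (Get-Cluster -Name "Compute-Cluster") `
    -Name "WSFC-AntiAffinity" `
    -KeepTogether $false `
    -VM (Get-VM -Name "wsfc-node1","wsfc-node2")
```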
Virtual SCSI Controllers
Mixing non-shared and shared disks.
Mixing non-shared and shared disks on a single virtual SCSI adapter is not supported. For example, if the system disk (drive C:) is attached to SCSI0:0, the first clustered disk would be attached to SCSI1:0. A VM node of a WSFC has the same virtual SCSI controller maximum as an ordinary VM - up to four (4) virtual SCSI Controllers.
Modify advanced settings for a virtual SCSI controller hosting the boot device.
Add the following advanced settings to the VMs node:
scsiX.returnNoConnectDuringAPD = "TRUE"
scsiX.returnBusyOnNoConnectStatus = "FALSE"
Where X is the boot device SCSI bus controller ID number. By default, X is set to 0.
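These settings can be added through the vSphere Client (VM Options > Advanced > Edit Configuration) or with PowerCLI; a sketch, assuming the boot disk sits on controller 0 and using a placeholder VM name:

```powershell
# PowerCLI sketch (VM name is a placeholder; scsi0 is assumed to host the boot disk).
$vm = Get-VM -Name "wsfc-node1"
New-AdvancedSetting -Entity $vm -Name "scsi0.returnNoConnectDuringAPD" -Value "TRUE" -Confirm:$false
New-AdvancedSetting -Entity $vm -Name "scsi0.returnBusyOnNoConnectStatus" -Value "FALSE" -Confirm:$false
```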
Virtual disks SCSI IDs should be consistent between all VMs hosting nodes of the same WSFC.
The vNVMe controller is not supported for clustered or non-clustered disks (for example, a boot disk must NOT be placed on a vNVMe controller). See Configuring disks to use VMware Paravirtual SCSI (PVSCSI) controllers for more details on how to change the controller for the boot disk.
Multi-writer flag must NOT be used.
For the best performance, distribute disks evenly across as many virtual SCSI controllers as possible and use the VMware Paravirtual (PVSCSI) controller, which provides better performance with lower CPU usage and is the preferred way to attach clustered disk resources.
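As an illustration, a clustered disk can be placed on its own PVSCSI controller with physical bus sharing via PowerCLI; a sketch with placeholder names and sizes (clustered VMDKs must be eager-zeroed thick):

```powershell
# PowerCLI sketch (VM name and capacity are placeholders): create an
# eager-zeroed thick disk and attach it to a new PVSCSI controller
# with physical SCSI bus sharing.
$disk = New-HardDisk -VM (Get-VM -Name "wsfc-node1") -CapacityGB 100 -StorageFormat EagerZeroedThick
New-ScsiController -HardDisk $disk -Type ParaVirtual -BusSharingMode Physical
```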
Note:
Perennial reservations on the clustered-VMDK-enabled datastores are required for all ESXi hosts hosting VM nodes with pRDMs and clustered VMDKs. See ESXi host takes a long time to start during a rescan of RDM LUNs for more details.
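Perennial reservations are set per device, per ESXi host, with esxcli; a sketch (the naa ID below is a placeholder for the backing LUN):

```shell
# Mark the LUN as perennially reserved on each ESXi host running a VM node
# (the naa ID below is a placeholder).
esxcli storage core device setconfig -d naa.600508b1001c3abc --perennially-reserved=true

# Verify: "Is Perennially Reserved" should now report "true".
esxcli storage core device list -d naa.600508b1001c3abc
```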
In-guest shared storage configuration options.
Maintaining in-guest options for storage (such as iSCSI or SMB shares) is up to those implementing the solution and is not visible to ESXi.
VMware fully supports a configuration of WSFC using in-guest iSCSI initiators or in-guest SMB (Server Message Block) protocol, provided that all other configuration meets the documented and supported WSFC configuration. Using this configuration in VMware virtual machines is similar to using it in physical environments.
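For example, an in-guest iSCSI connection is configured entirely inside Windows; a sketch using the built-in iSCSI PowerShell cmdlets (the portal address and IQN are placeholders):

```powershell
# In-guest iSCSI sketch (run inside the Windows guest; address and IQN are placeholders).
Start-Service -Name MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress "192.168.10.50"
Connect-IscsiTarget -NodeAddress "iqn.1992-08.com.example:storage:target1" -IsPersistent $true
```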
Note:
vMotion has not been tested by VMware with any in-guest shared storage configurations.
VM Limitations when hosting a WSFC with shared disk on vSphere.
Hot changes to virtual machine hardware might disrupt the heartbeat between WSFC nodes. The following activities are not supported, as they can cause a WSFC node failover:
A shared disk resource provided by pRDM can be extended online or offline.
SCSI controller bus sharing must be set to "physical" for SRDF (Symmetrix Remote Data Facility) style configurations.
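For a pRDM online extension, the backing LUN is grown on the storage array, after which the guest picks up and uses the new capacity; a sketch of the in-guest steps (the drive letter is a placeholder):

```powershell
# Inside the Windows guest, after the backing LUN has been grown on the array
# (drive letter E: is a placeholder for the clustered volume).
Update-HostStorageCache   # rescan disks so Windows sees the new size
Resize-Partition -DriveLetter E -Size (Get-PartitionSupportedSize -DriveLetter E).SizeMax
```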
Note:
For more information on VM configuration limitations, see the vSphere WSFC Setup Limitation section in the vSphere Resource Management Guide.
Microsoft support policies for a virtualized deployment of WSFC
Microsoft supports deployment of WSFC on a VM. Also check the Microsoft KB article Support policy for Microsoft SQL Server products that are running in a hardware virtualization environment.
For other vSphere versions, see the master Guidelines for Microsoft Clustering on vSphere for links to relevant articles.
Hosting WSFC natively on vSAN – check VMware TechZone.
Migrating a WSFC from RDMs to VMDKs – Check this guide.
Disclaimer: VMware is not responsible for the reliability of any data, opinions, advice, or statements made on third-party websites. Inclusion of such links does not imply that VMware endorses, recommends, or accepts any responsibility for the content of such sites.