Microsoft Windows Server Failover Clustering on VMware vSphere 5.x: Guidelines for supported configurations

Article ID: 340877


Products

VMware vSphere ESXi

Issue/Introduction

VMware provides customers additional flexibility and choice in architecting high-availability solutions. Microsoft has clear support statements for its clustering solutions on VMware.

Additionally, VMware provides guidelines in terms of storage protocols and number of nodes supported by VMware on vSphere, particularly for specific clustering solutions that access shared storage. Other clustering solutions that do not access shared storage, such as Exchange Cluster Continuous Replication (CCR) and Database Availability Group (DAG), can be implemented on VMware vSphere just like on physical systems without any additional considerations.

This article provides guidelines and vSphere support status for running various Microsoft Windows Server Failover Clustering (WSFC) solutions and configurations.


Environment

VMware ESXi 3.5.x Embedded
VMware ESXi 3.5.x Installable
VMware ESXi 4.0.x Embedded
VMware ESXi 4.0.x Installable
VMware ESXi 4.1.x Embedded
VMware ESXi 4.1.x Installable
VMware vSphere ESXi 5.0
VMware vSphere ESXi 5.1
VMware vSphere ESXi 5.5
VMware vSphere ESXi 6.0
VMware vSphere ESXi 6.5
VMware vSphere ESXi 6.7
VMware vSphere ESXi 7.0.0

Resolution

VMware vSphere support for Microsoft clustering solutions on VMware products
 

This table outlines VMware vSphere support for Microsoft clustering solutions:

 


| Configuration | Clustering solution | vSphere support | VMware HA support | vMotion / DRS support | Storage vMotion support | MSCS node limits | FC | In-guest OS iSCSI | Native iSCSI | In-guest OS SMB | FCoE | NFS | RDM | VMFS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Shared disk | MSCS with shared disk | Yes | Yes (1) | No | No | 2; 5 on vSphere 5.1 and 5.5 | Yes (7) | Yes | Yes (6) | Yes (5) | Yes (4) | No | Yes (2) | Yes (3) |
| Shared disk | Exchange Single Copy Cluster | Yes | Yes (1) | No | No | 2; 5 on vSphere 5.1 and 5.5 | Yes (7) | Yes | Yes (6) | Yes (5) | Yes (4) | No | Yes (2) | Yes (3) |
| Shared disk | SQL clustering | Yes | Yes (1) | No | No | 2; 5 on vSphere 5.1 and 5.5 | Yes (7) | Yes | Yes (6) | Yes (5) | Yes (4) | No | Yes (2) | Yes (3) |
| Non-shared disk | Network Load Balance | Yes | Yes (1) | Yes | Yes | Same as OS/app | Yes | Yes | Yes | N/A | N/A | No | N/A | N/A |
| Non-shared disk | Exchange CCR | Yes | Yes (1) | Yes | Yes | Same as OS/app | Yes | Yes | Yes | N/A | N/A | No | N/A | N/A |
| Non-shared disk | Exchange DAG | Yes | Yes (1) | Yes | Yes | Same as OS/app | Yes | Yes | Yes | N/A | N/A | No | N/A | N/A |
| Non-shared disk | SQL AlwaysOn Availability Group | Yes | Yes (1) | Yes | Yes | Same as OS/app | Yes | Yes | Yes | N/A | N/A | No | N/A | N/A |

The FC through NFS columns indicate storage protocol support; the RDM and VMFS columns indicate shared disk support. Numbers in parentheses refer to the Table 1 notes below.

 

Table 1 Notes:
  1. When DRS affinity/anti-affinity rules are used. For more information, see the HA/DRS specific configuration for clustered virtual machines section in this article.
  2. For details on shared disk configurations, see the Disk configurations section in this article.
  3. Supported in Cluster in a Box (CIB) configurations only. For more information, see the Considerations for shared storage clustering section in this article.
  4. In vSphere 5.5, native FCoE is supported. In vSphere 5.1 Update 2, a two-node cluster using FCoE with Windows Server 2008 and Windows Server 2012 is supported. In vSphere 5.1 Update 1 and 5.0 Update 3, two-node cluster configurations with Cisco CNA cards (VIC 1240/1280) and driver version 1.5.0.8 are supported on the Windows Server 2008 R2 SP1 64-bit guest operating system. For more information, see the VMware Hardware Compatibility Guide:
     • Cisco UCS VIC1240
     • Cisco UCS VIC1280
  5. Windows Server 2012 and 2012 R2 Failover Clustering only.
  6. vSphere 5.5 only.
  7. In vSphere 5.1 Update 2, up to a five-node cluster with Fibre Channel is supported with Windows Server 2012.
Notes:
  • MSCS (Windows Server Failover Clustering (WSFC)) is not supported with VMware vSAN version 5.5.
  • For vSphere 5.5 MSCS support enhancements, see MSCS support enhancements in vSphere 5.5 (2052238) .
  • Microsoft Windows Server Failover Clustering (WSFC) virtual machines use a shared Small Computer System Interface (SCSI) bus. Any virtual machine using a shared bus cannot make hot changes to virtual machine hardware as this disrupts the heartbeat between the WSFC nodes. These activities are not supported and cause WSFC node failover:
    • vMotion migration
    • Increasing the size of disks
    • Hot adding memory
    • Hot adding CPU
    • Using snapshots
    • Pausing and/or resuming the virtual machine state
    • Memory over-commitment leading to virtual swapping or memory ballooning

      Note : For more information on MSCS limitations, see the vSphere MSCS Setup Limitation section in the vSphere Resource Management Guide .

       
  • For the purpose of this document, VMware does not consider SQL Mirroring a clustering solution. VMware fully supports SQL Mirroring on vSphere, with no specific restrictions.
  • SQL Server AlwaysOn Availability Group on vSphere is supported only for non-shared disk configurations. However, the system disk's VMDK must not be located on an NFS datastore.
  • WSFC clusters are supported with VMware Paravirtual SCSI controllers on vSphere 5.5 Update 3.
  • Storage vMotion and vMotion migrations of shared-disk configurations are not supported and fail when the migration is attempted. For more information, see Troubleshooting migration compatibility error: Device is a SCSI controller engaged in bus-sharing (1003797).
  • ESXi 5.1 and 5.5 support up to five-node clusters for Windows Server 2008 SP2 and later; earlier ESXi versions support only two-node clusters. For more information, see the Setup for Failover Clustering and Microsoft Cluster Service Guide.
  • A Microsoft cluster consisting of both physical Windows Server nodes and virtual machine nodes is supported. For more information, see the Cluster Physical and Virtual Machines section in the Setup for Failover Clustering and Microsoft Cluster Service Guide.
  • Microsoft SQL Server AlwaysOn Failover Cluster Instance (FCI) is supported on VMware vSphere under these conditions:
    • If the FCI nodes are hosted on separate ESXi hosts (Cluster across Boxes configuration), the shared disks presented to the FCI nodes must be configured as Raw Device Mapping (RDM) disks attached to virtual SCSI controllers in physical compatibility mode.
    • vMotion is not supported for this type of FCI configuration on vSphere 5.x.
  • vMotion of legacy SQL clustering (Microsoft Cluster Service) in SQL Server versions prior to SQL Server 2012 has not been tested or validated by VMware. Consequently, VMware does not support vMotion of virtual machines configured with MSCS.
  • Storage vMotion is not supported with physical mode RDMs in any version of vSphere.
       

To avoid unnecessary cluster node failovers due to system disk I/O latency, virtual disks must be created using the EagerZeroedThick format on VMFS volumes only, regardless of the underlying protocol.

Note: Although EagerZeroedThick VMDKs can be created on VAAI-capable NAS arrays using a suitable VAAI NAS plug-in, NFS is not a supported storage protocol with Microsoft Clustering.
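If an existing virtual disk was created thin or lazy-zeroed, it can be converted in place with vmkfstools while the virtual machine is powered off. The following is a minimal sketch only; the datastore and VMDK paths are placeholders:

      # Inflate a thin-provisioned disk to EagerZeroedThick
      vmkfstools -j /vmfs/volumes/datastore1/myVM/myVM.vmdk

      # Zero out a lazy-zeroed thick disk so it becomes EagerZeroedThick
      vmkfstools -k /vmfs/volumes/datastore1/myVM/myVM.vmdk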

Commonly used Microsoft clustering solutions

These are common Microsoft clustering solutions used by VMware users in virtual machines:
  • Microsoft Clustering Services : MSCS or Microsoft Windows Server Failover Clustering (WSFC) is a clustering function that provides failover and availability at the operating system level. Commonly clustered applications include:
     
    • Microsoft Exchange Server
    • Microsoft SQL Server
    • File and print services
    • Custom applications
       
  • Microsoft Network Load Balance (I/O Load Balance) : Microsoft Network Load Balance (NLB) is suited for stateless applications or Tier 1 of multi-tiered applications, such as web servers providing a front end for back end database and application servers. A physical alternative is an appliance, such as those available from F5.

Note: Sharing RDMs between virtual machines without a clustering solution is not supported.
 

VMware vSphere support for running Microsoft clustered configurations
 

This table outlines VMware vSphere support for running Microsoft clustered configurations:

| Clustering solution | Support status | Version | Notes |
|---|---|---|---|
| MSCS with shared disk | Supported | Windows Server 2003 (1), Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 (2), Windows Server 2012 R2 (5) | See additional considerations |
| Network Load Balance | Supported | Windows Server 2003 SP2, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2 | |
| SQL clustering | Supported | Windows Server 2003 (1), Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 (2), Windows Server 2012 R2 (5) | See additional considerations |
| SQL AlwaysOn Availability Group (non-shared disks) | Supported | Windows Server 2008 SP2 or higher, Windows Server 2008 R2 SP1 or higher, Windows Server 2012, Windows Server 2012 R2 (4) | |
| Exchange Single Copy Cluster | Supported | Exchange 2003 (1), Exchange 2007 | See additional considerations |
| Exchange CCR | Supported | Windows Server 2003 (1), Windows Server 2008 SP1 or higher, Exchange 2007 SP1 or higher | |
| Exchange DAG | Supported | Windows Server 2008 SP2 or higher, Windows Server 2008 R2 or higher, Windows Server 2012, Windows Server 2012 R2 (4), Exchange 2010, Exchange 2013 | |

Numbers in parentheses refer to the Table 2 notes below.
 
 
Table 2 Notes:
  1. This table lists the support status by VMware on vSphere. Check with your vendor as the status of third-party software vendor support may differ. For example, while VMware supports configurations using MSCS on clustered Windows Server 2003 virtual machines, Microsoft does not support it. The same applies for the support status of the operating system version. Support for software that has reached end-of-life may be limited or non-existent depending on the life cycle policies of the respective software vendor. VMware advises against using end-of-life products in production environments.
  2. Supported only with in-guest SMB and in-guest iSCSI for vSphere 5.1 and earlier. This restriction does not apply to vSphere 5.1 Update 2 and 5.0 Update 3. (See the relevant footnotes under the preceding Table 1.)
  3. In-guest clustering solutions that do not use a shared-disk configuration, such as SQL Mirroring, SQL Server AlwaysOn Availability Group (non-shared disk), and Exchange Database Availability Group (DAG), do not require explicit support statements from VMware. However, the system disk's VMDK must not be located on an NFS datastore.
  4. vSphere 5.0 Update 1 and later, 5.1 Update 1 and later, and 5.5 and later (where Guest OS is supported)
  5. vSphere 5.5 Update 2 and later.
Additional Notes:
  • System disk ( C: drive) virtual disks can be on local VMFS or SAN-based VMFS datastores only, regardless of the underlying protocol. System disk virtual disks must be created with the EagerZeroedThick format. For more information, see the Setup for Failover Clustering and Microsoft Cluster Service guide for ESXi 5.0.
  • In Windows Server 2012 and 2012 R2, cluster validation completes with this warning: Validate Storage Spaces Persistent Reservation . You can safely ignore this warning.
For support information on Microsoft clustering for MSCS, SQL, and Exchange, go to the  Windows Server Catalog  and select the appropriate dropdown.

Note: The preceding link was correct as of March 31, 2015. If you find the link is broken, provide feedback and a VMware employee will update the link.

Windows Server 2012 failover clustering is not supported with ESXi-provided shared storage (such as RDMs or virtual disks) in vSphere 5.1 and earlier. For more information, see the Miscellaneous Issues section of the
vSphere 5.1 Release Notes . VMware vSphere 5.5 provides complete support for Windows Server 2012 failover clustering. VMware vSphere 5.5 Update 1 provides complete support for Windows 2012 R2 failover clustering.

For more information, see these Microsoft Knowledge Base articles:

Note: The preceding links were correct as of February 11, 2014. If you find a link is broken, provide feedback and a VMware employee will update the link.

Considerations for shared storage clustering


Storage protocols
  • Fibre Channel: In vSphere 5.1 and earlier, configurations using shared storage for quorum and/or data must use Fibre Channel (FC) based RDMs (physical mode for Cluster across Boxes (CAB), virtual mode for Cluster in a Box (CIB)). RDMs on storage other than FC (such as NFS or iSCSI) are not supported in 5.1 and earlier. In vSphere 5.5, quorum and data disks can also be placed on iSCSI or FCoE storage. Virtual disk based shared storage is supported with CIB configurations only and must be created using the EagerZeroedThick option on VMFS datastores.
  • Native iSCSI (not in the guest OS): Supported in vSphere 5.5. VMware does not support the use of ESXi/ESX host iSCSI initiators, also known as native iSCSI (hardware or software) with MSCS in vSphere 5.1 or earlier.
  • In-guest iSCSI software initiators: VMware fully supports MSCS configurations using in-guest iSCSI initiators, provided that all other aspects of the configuration meet the documented and supported MSCS requirements. Using this configuration in VMware virtual machines is similar to using it in physical environments. vMotion has not been tested by VMware with this configuration.
  • In-guest SMB (Server Message Block) protocol: VMware fully supports MSCS configurations using in-guest SMB, provided that all other aspects of the configuration meet the documented and supported MSCS requirements. Using this configuration in VMware virtual machines is similar to using it in physical environments. vMotion has not been tested by VMware with this configuration.
  • FCoE : FCoE is fully supported in vSphere 5.5. However, in earlier versions, FCoE is only supported in very specific configurations. For more information, see note 4 in the preceding Microsoft clustering solutions table.
Virtual SCSI adapters

Shared storage must be attached to a dedicated virtual SCSI adapter in the clustered virtual machine. For example, if the system disk (drive C:) is attached to SCSI0:0, the first shared disk (for example, the quorum disk) would be attached to SCSI1:0, and a shared data disk to SCSI1:1.

The shared storage SCSI adapter for Windows Server 2008 and later must be the LSI Logic SAS type, while earlier Windows versions must use the LSI Logic Parallel type. (For Paravirtual SCSI Controllers, see note under Table 1 above).
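For reference, the virtual machine configuration (.vmx) entries behind such a layout typically resemble the sketch below. This is an illustrative example only: the controller number, disk file names, and bus sharing mode are assumptions, and the bus sharing mode would be physical for a cluster across boxes or virtual for a cluster in a box:

      scsi1.present = "TRUE"
      scsi1.virtualDev = "lsisas1068"
      scsi1.sharedBus = "physical"
      scsi1:0.present = "TRUE"
      scsi1:0.fileName = "quorum-rdm.vmdk"
      scsi1:1.present = "TRUE"
      scsi1:1.fileName = "data-rdm.vmdk"

Here lsisas1068 selects the LSI Logic SAS controller required for Windows Server 2008 and later, and the dedicated scsi1 adapter keeps the system disk on scsi0 off the shared bus. These settings are normally made through the vSphere Client rather than by editing the .vmx file directly.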


Disk configurations
 
  • RDM: Configurations using shared storage for quorum and/or data must use Fibre Channel (FC) based RDMs (physical mode for Cluster across Boxes (CAB), virtual mode for Cluster in a Box (CIB)) in vSphere 5.1 and earlier. RDMs on storage other than FC (iSCSI and FCoE) are supported only in vSphere 5.5; in earlier versions, FCoE is supported in very specific configurations. For more information, see note 4 in the preceding Microsoft clustering solutions table. A sketch of creating the RDM mapping files with vmkfstools appears after this list.
  • VMFS : Virtual disks used as shared storage for clustered virtual machines must reside on VMFS datastores and must be created using the EagerZeroedThick option. This can be done using the vmkfstools command from the console, the vSphere CLI, or from the user interface. For more information, see the Setup for Failover Clustering and Microsoft Cluster Service guide for ESXi 5.0 .

    To create EagerZeroedThick storage with the
    vmkfstools command:
     
    1. Log in to the console of the host or launch the VMware vSphere CLI.
    2. To create an EagerZeroedThick virtual disk named myVMData.vmdk, run the command:

      Using the console:

      vmkfstools -d eagerzeroedthick -c 10g /vmfs/volumes/datastore1/myVM/myVMData.vmdk

      Note : Replace 10g with the desired size.

      Using the vSphere CLI:

      vmkfstools.pl --server ESXHost --username username --password passwd -d eagerzeroedthick -c 10g /vmfs/volumes/datastore1/myVM/myVMData.vmdk

       
  • To create EagerZeroedThick storage with the user interface:
     
    1. Using the vSphere Client, select the virtual machine for which you want to create the new virtual disk.
    2. Click Edit Settings.
    3. From the virtual machine properties dialog box, click Add to add new hardware.
    4. In the Add Hardware dialog box, select Hard Disk from the device list.
    5. Select Create a new virtual disk and click Next.
    6. Select the disk size you want to create.
    7. Specify a datastore, browsing to find the desired datastore.
    8. To create an EagerZeroedThick disk, select Support clustering features such as Fault Tolerance .

      Note : Step 8 must be the last configuration step. Changes to datastores after selecting Support clustering features such as Fault Tolerance cause it to become deselected.
       
    9. Complete the wizard to create the virtual disk.
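As referenced in the RDM bullet above, the RDM mapping files themselves can be created from the host console with vmkfstools. This is a sketch only; the naa device identifier and paths are placeholders, and the mapping file must reside on a VMFS datastore:

      # Physical compatibility mode RDM (for cluster across boxes)
      vmkfstools -z /vmfs/devices/disks/naa.60012340000000000000000000000001 /vmfs/volumes/datastore1/myVM/quorum-rdmp.vmdk

      # Virtual compatibility mode RDM (for cluster in a box)
      vmkfstools -r /vmfs/devices/disks/naa.60012340000000000000000000000001 /vmfs/volumes/datastore1/myVM/quorum-rdm.vmdk

The resulting mapping file is then attached to the dedicated shared SCSI adapter (for example, SCSI1:0) on each node.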
Non-shared storage clustering

Non-shared storage clustering refers to configurations where no shared storage is required to store the application's data or quorum information. Data is replicated to other cluster nodes (for example, CCR) or distributed among the nodes (for example, DAG).

These configurations do not require additional VMware considerations regarding a specific storage protocol or number of nodes, and can be deployed on virtual machines in the same way as on physical servers.



HA/DRS specific configuration for clustered virtual machines
 

Affinity/Anti-affinity rules

For virtual machines in a cluster, you must create virtual machine to virtual machine affinity or anti-affinity rules. Virtual machine to virtual machine affinity rules specify which virtual machines should be kept together on the same host (for example, a cluster of MSCS virtual machines on one physical host). Virtual machine to virtual machine anti-affinity rules specify which virtual machines should be kept apart on different physical hosts (for example, a cluster of MSCS virtual machines across physical hosts).

For a cluster of virtual machines on one physical host, use affinity rules. For a cluster of virtual machines across physical hosts, use anti-affinity rules. For more information, see the
Setup for Failover Clustering and Microsoft Cluster Service guide for ESXi 5.0 .

To configure affinity or anti-affinity rules:
  1. In the vSphere Client, right-click the cluster in the inventory and click Edit Settings.
  2. In the left pane of the Cluster Settings dialog, under VMware DRS, click Rules.
  3. Click Add.
  4. In the Rule dialog, enter a name for the rule.
  5. From the Type dropdown, select a rule:
     • For a cluster of virtual machines on one physical host, select Keep Virtual Machines Together.
     • For a cluster of virtual machines across physical hosts, select Separate Virtual Machines.
  6. Click Add.
  7. Select the two virtual machines to which the rule applies and click OK.
  8. Click OK.

Multipathing configuration
 

Path Selection Policy (PSP)

Round Robin PSP is not supported for LUNs mapped by RDMs used with shared storage clustering in vSphere 5.1 and earlier. If you choose to use Round Robin PSP with your storage arrays, or if the vSphere version in use defaults to Round Robin PSP for the array in use, you may change the PSP claiming the RDM LUNs to another PSP. For more information, see Changing a LUN to use a different Path Selection Policy (PSP) (1036189) .
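As an illustration, the PSP currently claiming an RDM LUN can be checked and changed from the ESXi shell with esxcli. The device identifier below is a placeholder, and the alternative PSP to use (for example, VMW_PSP_FIXED or VMW_PSP_MRU) should follow your array vendor's recommendation:

      # Show the device details, including the Path Selection Policy in use
      esxcli storage nmp device list --device naa.60012340000000000000000000000001

      # Change the device to the Fixed path selection policy
      esxcli storage nmp device set --device naa.60012340000000000000000000000001 --psp VMW_PSP_FIXED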

With native multipathing (NMP) in vSphere versions prior to 5.5, clustering is not supported when the path policy is set to Round Robin. For more information, see vSphere MSCS Setup Limitations in the
Setup for Failover Clustering and Microsoft Cluster Service guide for ESXi 5.0.

In vSphere 5.5, Round Robin PSP (PSP_RR) support is introduced. For more information, see
MSCS support enhancements in vSphere 5.5 (2052238) .

For more information, see the Storage/SAN Compatibility Guide.


Path Selection Policy (PSP) using third-party Multipathing Plug-ins (MPPs)

An N+1 cluster configuration occurs when cluster nodes on physical machines are backed by nodes in virtual machines (that is, one node in each cluster node pair is a virtual machine). In this configuration, the physical node cannot be configured with multipathing software. For more information, see your third-party vendor's best practices and support documentation.

Failover Clustering and Microsoft Cluster Service setup guides

For more information, see the guide for your version:

 


Additional Information

For other vSphere versions, see master KB 1037959 for links to relevant articles.