Microsoft Windows Server Failover Clustering on VMware vSphere 4.x: Guidelines for supported configurations
Article ID: 342288

Products

VMware vSphere ESXi

Issue/Introduction

Symptoms:

VMware provides customers additional flexibility and choice in architecting high-availability solutions. Microsoft has clear support statements for its clustering solutions on VMware.

Additionally, VMware provides guidelines in terms of storage protocols and number of nodes supported by VMware on vSphere, particularly for specific clustering solutions that access shared storage. Other clustering solutions that do not access shared storage, such as Exchange Cluster Continuous Replication (CCR) and Database Availability Group (DAG), can be implemented on VMware vSphere just like on physical systems without any additional considerations.

This article provides guidelines and vSphere support status for running various Microsoft clustering solutions and configurations.

Note: References to MSCS throughout this article also apply to Windows Server Failover Cluster (WSFC) on relevant Windows Server versions.


Environment

VMware ESXi 4.1.x Embedded
VMware ESXi 4.0.x Installable
VMware ESXi 4.1.x Installable
VMware ESXi 4.0.x Embedded

Resolution

VMware vSphere support for Microsoft clustering solutions on VMware products
 
This table outlines VMware vSphere support for Microsoft clustering solutions:
 
| Microsoft clustering solution | vSphere support | VMware HA support | vMotion/DRS support | Storage vMotion support | MSCS node limits | Shared disk FC | In-guest OS iSCSI | Native iSCSI | In-guest OS SMB | FCoE | NFS | Shared disk RDM | Shared disk VMFS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MSCS with shared disk | Yes | Yes (1) | No | No | 2 | Yes | Yes | No | Yes (4) | No | No | Yes (2) | Yes (3) |
| Exchange Single Copy Cluster | Yes | Yes (1) | No | No | 2 | Yes | Yes | No | Yes (4) | No | No | Yes (2) | Yes (3) |
| SQL Clustering | Yes | Yes (1) | No | No | 2 | Yes | Yes | No | Yes (4) | No | No | Yes (2) | Yes (3) |
| Network Load Balance | Yes | Yes (1) | Yes | Yes | Same as OS/app | Yes | Yes | Yes | N/A | N/A | No | N/A | N/A |
| Exchange CCR | Yes | Yes (1) | Yes | Yes | Same as OS/app | Yes | Yes | Yes | N/A | N/A | No | N/A | N/A |
| Exchange DAG | Yes | Yes (1) | Yes | Yes | Same as OS/app | Yes | Yes | Yes | N/A | N/A | No | N/A | N/A |
| SQL AlwaysOn Availability Group | Yes | Yes (1) | Yes | Yes | Same as OS/app | Yes | Yes | Yes | N/A | No | No | N/A | N/A |

The first three rows (MSCS with shared disk, Exchange Single Copy Cluster, and SQL Clustering) are shared-disk solutions; the remaining rows are non-shared-disk solutions.

Table 1 Notes:
  1. When DRS affinity/anti-affinity rules are used. For more information, see the HA/DRS specific configuration for clustered virtual machines section in this article.
  2. For details on shared disk configurations, see the Disk Configurations section in this article.
  3. Supported in Cluster in a Box (CIB) configurations only. For more information, see the Considerations for Shared Storage Clustering section in this article.
  4. Windows Server 2012 Failover Clustering only.
Notes:
  • Microsoft Clustering Services (MSCS) virtual machines use a shared Small Computer System Interface (SCSI) bus. Hot changes to virtual machine hardware are not possible for a virtual machine on a shared bus because they disrupt the heartbeat between the MSCS nodes. These activities are not supported and cause MSCS node failover:
     
    • vMotion migration
    • Increasing the size of disks
    • Hot adding memory
    • Hot adding CPU
    • Using snapshots
    • Pausing and/or resuming the virtual machine state
    • Memory over-commitment leading to virtual swapping or memory ballooning

      Note: For more information on MSCS limitations, see the vSphere MSCS Setup Limitation section in the vSphere Resource Management Guide.
       
  • For the purpose of this document, VMware does not consider SQL Mirroring as a clustering solution. VMware fully supports SQL Mirroring on vSphere, with no specific restrictions.
  • SQL Server AlwaysOn Availability Groups on vSphere are supported only in non-shared disk configurations. However, the system disk's VMDK must not be located on an NFS datastore.
  • MSCS clusters are not supported with VMware Paravirtual SCSI controllers on vSphere 4.x.
  • Storage vMotion and vMotion migrations of shared disk configurations are not supported and fail when attempted. For more information, see Troubleshooting migration compatibility error: Device is a SCSI controller engaged in bus-sharing (1003797).
  • A Microsoft cluster consisting of both physical Windows server nodes and virtual machine nodes is supported. For more information, see the Cluster Physical and Virtual Machines section in the Setup for Failover Clustering and Microsoft Cluster Service Guide.
  • Microsoft SQL Server AlwaysOn Failover Clustering Instance (FCI) is supported on VMware vSphere under these conditions:
     
    • If the FCI nodes are hosted on separate ESXi hosts (Cluster across Boxes configuration), the shared disks presented to the FCI nodes must be configured as Raw Device Mapping (RDM) disks attached to virtual SCSI controllers in physical compatibility mode.
    • In addition, DRS Anti-Affinity rules should be used to keep the virtual machines participating in this FCI configuration separated onto different ESXi hosts at all times.
    • vMotion operation is not supported for this type of FCI configuration on vSphere 4.x.
       
  • vMotion of legacy SQL clustering options (Microsoft Clustering Service) in SQL Server versions prior to SQL Server 2012 has not been tested or validated by VMware. Consequently, VMware does not support vMotion of virtual machines configured with MSCS.
  • Storage vMotion is not supported with physical mode RDM in any version of vSphere.
To avoid unnecessary cluster node failovers due to system disk I/O latency, virtual disks must be created using the EagerZeroedThick format on VMFS volumes only, regardless of the underlying protocol.

Note: NFS is not a supported storage protocol with Microsoft Clustering.

Commonly used Microsoft clustering solutions
 
These are common Microsoft clustering solutions used by VMware users in virtual machines:
  • Microsoft Clustering Services: MSCS or Windows Failover Clustering is a clustering function that provides failover and availability at the operating system level. Commonly clustered applications include:
     
    • Microsoft Exchange Server
    • Microsoft SQL Server
    • File and print services
    • Custom applications
       
  • Microsoft Network Load Balancing: Microsoft Network Load Balancing (NLB) is suited to stateless applications or to Tier 1 of multi-tiered applications, such as web servers providing a front end for back-end database and application servers. A physical alternative is a load-balancing appliance, such as those available from F5.
Note: Sharing RDMs between virtual machines without a clustering solution is not supported.
 
VMware vSphere support for running Microsoft clustered configurations
 
This table outlines VMware vSphere support for running Microsoft clustered configurations:
 
| Clustering solution | Support status | Clustering version | vSphere version | Notes |
|---|---|---|---|---|
| MSCS with shared disk | Supported | Windows Server 2003 (1), Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 (2) | 4.x | See additional considerations |
| Network Load Balance | Supported | Windows Server 2003 SP2, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 | 4.x | |
| SQL clustering | Supported | Windows Server 2003 (1), Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 (2) | 4.x | See additional considerations |
| SQL AlwaysOn Availability Group (non-shared disks) | Supported | Windows Server 2008 SP2 or higher, Windows Server 2008 R2 SP1 or higher, Windows Server 2012 | 4.x | |
| Exchange Single Copy Cluster | Supported | Exchange 2003 (1), Exchange 2007 | 4.x | See additional considerations |
| Exchange CCR | Supported | Windows Server 2003 (1), Windows Server 2008 SP1 or higher, Exchange 2007 SP1 or higher | 4.x | |
| Exchange DAG | Supported | Windows Server 2008 SP2 or higher, Windows Server 2008 R2 or higher, Windows Server 2012, Exchange 2010, Exchange 2013 | 4.x | |

Table 2 Notes:
  1. This table lists the support status by VMware on vSphere. Check with your vendor as the status of third-party software vendor support may differ. For example, while VMware supports configurations using MSCS on clustered Windows Server 2003 virtual machines, Microsoft does not support it. The same applies for the support status of the operating system version. Support for software that has reached end-of-life may be limited or non-existent depending on the life cycle policies of the respective software vendor. VMware advises against using end-of-life products in production environments.
  2. Supported only with in-guest SMB and in-guest iSCSI.
  3. In-guest clustering solutions that do not use a shared-disk configuration, such as SQL Mirroring, SQL Server AlwaysOn Availability Group (non-shared disk), and Exchange Database Availability Group (DAG), do not require explicit support statements from VMware, except that locating the system disk's VMDK on an NFS datastore is not supported.
Additional notes:
  • System disk (C: drive) virtual disks can reside on local VMFS or SAN-based VMFS datastores only, regardless of the underlying protocol. System disk virtual disks must be created with the EagerZeroedThick format. For more information, see the Setup for Failover Clustering and Microsoft Cluster Service guide for ESXi/ESX 4.x.
  • In Windows Server 2012, cluster validation completes with this warning: Validate Storage Spaces Persistent Reservation. You can safely ignore this warning.
For support information on Microsoft clustering for MSCS, SQL, and Exchange, go to the Windows Server Catalog and select the appropriate dropdown.

Note: The preceding link was correct as of March 31, 2015. If you find the link is broken, provide feedback and a VMware employee will update the link.

Windows Server 2012 failover clustering is not supported with ESXi-provided shared storage (such as RDMs or virtual disks) in vSphere 4.x.

For more information, see these Microsoft Knowledge Base articles:
Note: The preceding links were correct as of February 11, 2014. If you find a link is broken, provide feedback and a VMware employee will update the link.

Considerations for shared storage clustering
 
Storage protocols
  • Fibre Channel: In vSphere 4.x, configurations using shared storage for Quorum and/or Data must be on Fibre Channel (FC) based RDMs (physical mode for Cluster across Boxes (CAB), virtual mode for Cluster in a Box (CIB)). RDMs on storage other than FC (such as NFS or iSCSI) are not supported in vSphere 4.x. Virtual-disk-based shared storage is supported with CIB configurations only and must be created using the EagerZeroedThick option on VMFS datastores.
  • Native iSCSI (not in the guest OS): VMware does not support the use of ESXi/ESX host iSCSI initiators, also known as native iSCSI (hardware or software) with MSCS in vSphere 4.x.
  • In-guest iSCSI software initiators: VMware fully supports a configuration of MSCS using in-guest iSCSI initiators, provided that all other configuration meets the documented and supported MSCS configuration. Using this configuration in VMware virtual machines is relatively similar to using it in physical environments. vMotion has not been tested by VMware with this configuration.
  • In-guest SMB (Server Message Block) protocol: VMware fully supports a configuration of MSCS using in-guest SMB, provided that all other configuration meets the documented and supported MSCS configuration. Using this configuration in VMware virtual machines is relatively similar to using it in physical environments. vMotion has not been tested by VMware with this configuration.
  • FCoE: FCoE is not supported in vSphere 4.x.
Virtual SCSI adapters

Shared storage must be attached to a dedicated virtual SCSI adapter in the clustered virtual machine. For example, if the system disk (drive C:) is attached to SCSI0:0, the first shared disk would be attached to SCSI1:0 and the data disk to SCSI1:1.

The shared storage SCSI adapter for Windows Server 2008 and later must be the LSI Logic SAS type, while earlier Windows versions must use the LSI Logic Parallel type. (Paravirtual SCSI Controllers are not supported for configurations on vSphere 4.x).
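
The controller settings above surface in the virtual machine's .vmx file. The following is a hedged sketch of what a dedicated shared controller looks like there; the controller number and values shown are illustrative, and the supported way to configure this is through the vSphere Client as documented in the setup guide:

```
# Illustrative sketch of a dedicated shared SCSI controller in a .vmx file
scsi1.present = "TRUE"
# "lsisas1068" = LSI Logic SAS (Windows Server 2008 and later);
# use "lsilogic" (LSI Logic Parallel) for earlier Windows versions
scsi1.virtualDev = "lsisas1068"
# "physical" bus sharing for Cluster across Boxes (CAB);
# "virtual" for Cluster in a Box (CIB)
scsi1.sharedBus = "physical"
```

The system disk stays on SCSI0 with sharedBus left at its default of "none", so only the cluster's shared disks travel over the shared bus.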

Disk configurations
  • RDM: Configurations using shared storage for Quorum and/or Data must be on Fibre Channel (FC) based RDMs (physical mode for Cluster across Boxes (CAB), virtual mode for Cluster in a Box (CIB)). RDMs on storage other than FC (iSCSI and FCoE) are not supported.
  • VMFS: Virtual disks used as shared storage for clustered virtual machines must reside on VMFS datastores and must be created using the EagerZeroedThick option. This can be done using the vmkfstools command from the console, the vSphere CLI, or from the user interface. For more information, see the Setup for Failover Clustering and Microsoft Cluster Service guide for ESXi/ESX 4.x.

    To create EagerZeroedThick storage with the command line:
     
    1. Log in to the console of the host or launch the VMware vSphere CLI.
    2. For example, to create a 10 GB file in datastore1 named myVMData.vmdk , run the command:
       
      • Using the console:

        vmkfstools -d eagerzeroedthick -c 10g /vmfs/volumes/datastore1/myVM/myVMData.vmdk

        Note: Replace 10g with the desired size.
         
      • Using the vSphere CLI:

        vmkfstools.pl --server ESXHost --username username --password passwd -d eagerzeroedthick -c 10g /vmfs/volumes/datastore1/myVM/myVMData.vmdk
         
    To create EagerZeroedThick storage with the user interface:
     
    1. Using the vSphere Client, select the virtual machine for which you want to create the new virtual disk.
    2. Right-click the virtual machine and click Edit Settings.
    3. From the virtual machine properties dialog box, click Add to add new hardware.
    4. In the Add Hardware dialog box, select Hard Disk from the device list.
    5. Select Create a new virtual disk and click Next.
    6. Select the disk size you want to create.
    7. Select the datastore with the virtual machine or select a different datastore by clicking Specify a datastore and browsing to find the desired datastore.
    8. To create an EagerZeroedThick disk, select Support clustering features such as Fault Tolerance.

      Note: Step 8 must be the last configuration step. Changes to datastores after selecting Support clustering features such as Fault Tolerance cause it to become deselected.
       
    9. Complete the wizard to create the virtual disk.
Non-shared storage clustering

Non-shared storage clustering refers to configurations where no shared storage is required to store the application's data or quorum information. Data is replicated to other cluster nodes (for example, CCR) or distributed among the nodes (for example, DAG).

These configurations do not require additional VMware considerations regarding a specific storage protocol or number of nodes, and can be deployed on virtual machines in the same way as on physical servers.

HA/DRS specific configuration for clustered virtual machines
 
Affinity/Anti-affinity rules

For virtual machines in a cluster, you must create virtual machine to virtual machine affinity or anti-affinity rules. Virtual machine to virtual machine affinity rules specify which virtual machines should be kept together on the same host (for example, a cluster of MSCS virtual machines on one physical host). Virtual machine to virtual machine anti-affinity rules specify which virtual machines should be kept apart on different physical hosts (for example, a cluster of MSCS virtual machines across physical hosts).

For a cluster of virtual machines on one physical host, use affinity rules. For a cluster of virtual machines across physical hosts, use anti-affinity rules. For more information, see the Setup for Failover Clustering and Microsoft Cluster Service guide for ESXi/ESX 4.x.

To configure affinity or anti-affinity rules:
  1. In the vSphere Client, right-click the cluster in the inventory and click Edit Settings.
  2. In the left pane of the Cluster Settings dialog under VMware DRS, click Rules.
  3. Click Add.
  4. In the Rule dialog, enter a name for the rule.
  5. From the Type dropdown, select a rule:
     
    • For a cluster of virtual machines on one physical host, select Keep Virtual Machines Together.
    • For a cluster of virtual machines across physical hosts, select Separate Virtual Machines.
       
  6. Click Add.
  7. Select the two virtual machines to which the rule applies and click OK.
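
The steps above can also be scripted. As a hedged sketch, VMware PowerCLI provides the New-DrsRule cmdlet for virtual machine affinity/anti-affinity rules; the server, cluster, and virtual machine names below are illustrative:

```powershell
# Illustrative PowerCLI sketch; requires VMware PowerCLI and a live vCenter connection
Connect-VIServer -Server vcenter.example.com

# Anti-affinity rule: keep the two MSCS nodes on different ESXi hosts
# (cluster of virtual machines across physical hosts)
New-DrsRule -Cluster (Get-Cluster "Cluster01") -Name "MSCS-AntiAffinity" `
    -KeepTogether:$false -VM (Get-VM "mscs-node1", "mscs-node2")

# For a cluster of virtual machines on one physical host (CIB),
# use -KeepTogether:$true instead
```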
Multipathing configuration: Path Selection Policy (PSP)

Round Robin PSP is not supported for LUNs mapped by RDMs used with shared storage clustering in vSphere 4.x. If you choose to use Round Robin PSP with your storage arrays, or if the vSphere version in use defaults to Round Robin PSP for the array in use, you may change the PSP claiming the RDM LUNs to another PSP. For more information, see Changing a LUN to use a different Path Selection Policy (PSP) (1036189).
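
As a hedged sketch under the vSphere 4.x esxcli syntax (the command namespace changed in later releases), the PSP claiming a LUN can be inspected and changed from the host console; the device identifier below is illustrative:

```shell
# List devices and the PSP currently claiming each one (vSphere 4.x syntax)
esxcli nmp device list

# Move an RDM LUN off Round Robin, for example to the Fixed PSP
# (the naa. identifier is an illustrative placeholder)
esxcli nmp device setpolicy --device naa.600601601234567890 --psp VMW_PSP_FIXED
```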

With native multipathing (NMP) in vSphere 4.x, clustering is not supported when the path policy is set to Round Robin. For more information, see vSphere MSCS Setup Limitations in the Setup for Failover Clustering and Microsoft Cluster Service guide for ESXi/ESX 4.x.

Path Selection Policy (PSP) using third-party Multipathing Plug-ins (MPPs)

N+1 cluster configuration occurs when cluster nodes on physical machines are backed by nodes in virtual machines (that is, one node in each cluster node pair is in a virtual machine). In this configuration, the physical node cannot be configured with multipathing software. For more information, see your third-party vendor's best practices and support.

Failover Clustering and Microsoft Cluster Service setup guides

For more information, see the setup guide for ESXi/ESX 4.x.

Additional Information

For other vSphere versions, see:
Microsoft Cluster Service (MSCS) support on ESXi/ESX
ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS nodes with RDMs may take a long time to start or during LUN rescan
Guidelines for Microsoft Clustering on vSphere
Microsoft Windows Server Failover Clustering on VMware vSphere 4.x: Guidelines for supported configurations (Simplified Chinese version)
Microsoft Windows Server Failover Clustering on VMware vSphere 4.x: Guidelines for supported configurations (Japanese version)