Implementing vSphere Metro Storage Cluster (vMSC) using Hewlett Packard Enterprise (HPE) Primera or Alletra 9000 Peer Persistence



Article ID: 323013




VMware vSphere ESXi


This article provides information about deploying a vSphere Metro Storage Cluster (vMSC) across two data centers or sites using HPE Primera or Alletra 9000 storage.


What is vMSC?

vSphere Metro Storage Cluster (vMSC) is a tested and supported configuration for stretched storage cluster architectures. A vMSC configuration is designed to maintain data availability beyond a single physical or logical site. All supported storage devices are listed on either the VMware Storage Compatibility Guide or the Partner Verified and Supported Products (PVSP) listings.

What is HPE Primera Peer Persistence?

HPE Primera Peer Persistence is an extension of HPE Primera and Alletra 9000 Remote Copy software and the HPE Primera and Alletra 9000 OS that enables a pair of HPE Primera storage systems, located up to metropolitan distances apart, to act as peers and present nearly continuous storage to the hosts connected to them. Volumes presented to hosts are replicated across the pair of arrays and kept in sync. Each pair of replicated, synchronized volumes shares the same WWN and appears as a single volume to the hosts.

Taking advantage of Asymmetric Logical Unit Access (ALUA), which allows paths to a SCSI device to be marked with different characteristics, hosts connect to volumes on one array via active paths and to the replicated volumes on the other array via standby paths. ALUA path status and host access to the volumes are controlled by the Peer Persistence software. This capability allows customers to configure a high-availability (HA) solution between two sites or data centers in which switchover, failover, and switchback of volume access across arrays remain transparent to the hosts and the applications running on them.
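The ALUA behavior described above can be sketched as a simplified model. All names, states, and the example WWN below are illustrative only; this is not the HPE or VMware implementation (real path states are reported by the array via SCSI ALUA and consumed by the ESXi multipathing stack):

```python
from dataclasses import dataclass

@dataclass
class Path:
    array: str   # which array the path terminates on (illustrative label)
    state: str   # ALUA access state: "active" or "standby"

@dataclass
class Volume:
    wwn: str     # same WWN on both arrays, so hosts see a single device
    paths: list

def io_paths(volume):
    """Hosts issue I/O only on paths reported as ALUA active."""
    return [p for p in volume.paths if p.state == "active"]

def switchover(volume, to_array):
    """On failover/switchover, Peer Persistence flips the ALUA states so the
    peer array's paths become active. The WWN never changes, so the host
    keeps seeing the same device throughout."""
    for p in volume.paths:
        p.state = "active" if p.array == to_array else "standby"

# Example: one replicated volume with cross-connected paths to both arrays.
vol = Volume(wwn="60002AC0000000000000000000012345",  # hypothetical WWN
             paths=[Path("site1-array", "active"),
                    Path("site2-array", "standby")])
print([p.array for p in io_paths(vol)])  # ['site1-array']
switchover(vol, "site2-array")
print([p.array for p in io_paths(vol)])  # ['site2-array']
```

The key point the sketch illustrates is that a switchover changes only path states, never the device identity, which is why it is transparent to hosts.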

HPE Quorum Witness

The HPE Quorum Witness is application software, typically installed on a virtual machine deployed at a third site. Together with the two HPE Primera or Alletra 9000 storage systems, it forms a three-part quorum that monitors the status of both storage systems and the inter-site storage links. This quorum can recognize a number of site and inter-link failure scenarios and trigger the appropriate failover actions. If a disaster brings down one of the storage systems or sites, Peer Persistence automatically initiates a failover to the surviving storage system: replicated volumes on the remaining system are made active, and the host paths to those volumes are made active as well, ensuring that hosts continue to access their volumes without disruption or outage. Quorum communication between the three sites runs over the Quorum Witness IP and the service management IPs of the two storage systems. The HPE Quorum Witness does not actively participate in data storage, so its failure or removal from an otherwise functioning environment has no impact; it comes into play only when one site or the inter-site links (ISL) have failed, or when two quorum members have failed simultaneously.
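The quorum arbitration described above can be sketched as a simplified decision function. The inputs, outputs, and logic below are an illustrative reading of the documented behaviors, not HPE's actual algorithm:

```python
def cluster_state(array1_up, array2_up, isl_up, witness_up):
    """Simplified three-part quorum arbitration sketch (illustrative only)."""
    if array1_up and array2_up:
        if isl_up and witness_up:
            return "normal: volumes synchronized, ATF armed"
        if isl_up and not witness_up:
            # Witness loss alone: I/O and replication continue, but
            # Automated Transparent Failover is disabled.
            return "I/O continues; ATF disabled"
        # Inter-site links down: arrays cannot replicate to each other,
        # and without replication ATF must also be disabled.
        return "I/O continues; synchronization and ATF disabled"
    if witness_up and (array1_up or array2_up):
        # One array/site lost, and the witness confirms the survivor:
        # peer volumes and host paths are made active on the surviving array.
        return "automatic failover to surviving array"
    # No quorum can be established; do nothing to avoid split-brain.
    return "no quorum: manual intervention required"

# Two of the documented scenarios:
print(cluster_state(True, True, True, False))   # Quorum Witness failure
print(cluster_state(True, False, False, True))  # single-site array failure
```

Note how the witness loss alone never triggers a failover; it only disarms automation, which matches the "no impact" behavior described above.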

Configuration Requirements

These requirements must be satisfied to support a vMSC configuration with HPE Primera or Alletra 9000.
  • VMware ESXi 6.0 or later, with the Metro Storage Cluster configured for uniform host access per VMware requirements and best practices. 
  • HPE Primera or Alletra 9000 storage arrays configured for Peer Persistence with Automated Transparent Failover. 
  • HPE Quorum Witness application software installed on a supported OS, on a physical or virtual machine, at a third site. 
  • vSphere vCenter Server connected to the ESXi hosts in both data centers. 
  • Round-trip latency on the storage array inter-link network between sites must not exceed 10 ms. 
  • Host-array I/O path connectivity and array inter-link connectivity per current HPE Primera or Alletra 9000 support. 
  • Any IP subnet used by a virtual machine must be accessible to all ESXi hosts in all data centers within the Metro Storage Cluster.
Note: Updates to the maximum round-trip latency specification, Quorum Witness version support, host OS I/O connectivity support, and array inter-link connectivity support are published in the HPE Primera and Alletra 9000 Support Matrices and the HPE Primera and Alletra 9000 Peer Persistence Host OS Support Matrix on the HPE Single Point of Connectivity Knowledge (SPOCK) site.
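As an illustrative way to sanity-check the uniform host access requirement above, path states reported in the style of `esxcli storage nmp path list` output can be parsed to confirm that each Peer Persistence device presents both active and standby paths on a host. The sample output and device WWN below are hypothetical, and real output contains more fields:

```python
import re
from collections import defaultdict

# Hypothetical excerpt in the style of `esxcli storage nmp path list` output.
SAMPLE = """\
fc.20000025b5000a01:21000025b5aa0001-fc.20020002ac012345:21020002ac012345-naa.60002ac0000000000000000100012345
   Device: naa.60002ac0000000000000000100012345
   Group State: active

fc.20000025b5000a01:21000025b5aa0001-fc.21020002ac067890:22020002ac067890-naa.60002ac0000000000000000100012345
   Device: naa.60002ac0000000000000000100012345
   Group State: standby
"""

def path_states(output):
    """Map each device to the set of ALUA group states seen on its paths."""
    states = defaultdict(set)
    device = None
    for line in output.splitlines():
        m = re.match(r"\s*Device:\s*(\S+)", line)
        if m:
            device = m.group(1)
            continue
        m = re.match(r"\s*Group State:\s*(.+)", line)
        if m and device:
            states[device].add(m.group(1).strip())
    return states

# In a uniform-access vMSC, every Peer Persistence device should show both
# active (local array) and standby (remote array) paths on each host.
for dev, states in path_states(SAMPLE).items():
    ok = {"active", "standby"} <= states
    print(dev, "OK" if ok else "CHECK PATHS", sorted(states))
```

A device showing only active or only standby paths on a host would suggest the cross-connect to one of the arrays is missing or down.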

Note: This solution is supported for use with VMware Cloud Foundation (VCF), for the primary workload domain or as supplemental storage, to provide the best resiliency possible. When used with VCF, only VMFS over Fibre Channel connectivity is currently supported, because Peer Persistence is not yet supported with VMware vVols. VMFS with iSCSI can be connected manually as supplemental storage to a workload domain.

Solution Overview

A test-qualified VMware Metro Storage Cluster using HPE Primera or Alletra 9000 storage is supported in accordance with the VMware description of a uniform access vMSC configuration. In a uniform access configuration, host data paths at each site are cross-connected to the storage array at the peer data center. ESXi hosts access volumes on the local array via active paths, while connectivity to the peer volumes on the distant array is maintained in standby mode until a failover or switchover occurs. With HPE Primera Peer Persistence, the HPE Quorum Witness, and Automated Transparent Failover (ATF) enabled, a minimally disruptive switchover or failover of volume access across sites can be achieved.

For example, in case of an array failure on one site (site 1):
  1. The Quorum Witness and the surviving storage array at site 2 detect the loss of quorum. 
  2. Peer Persistence software on the site 2 array makes the peer volumes active, along with the cross-connected paths from the site 1 hosts to the site 2 array. 
  3. Site 1 hosts, virtual machines, and applications access the peer volumes on site 2 and continue normal operation.
Note: The failover is automated with Peer Persistence and ATF, but transparent failback after fault correction is a manual process. 

This diagram depicts a high-level overview:

Note: Peer Persistence between HPE Primera and HPE Alletra 9000 is supported.

Sample tested scenarios

| Failure scenario | HPE Primera/Alletra 9000 storage system behavior | VMware HA behavior |
|---|---|---|
| Single array-host path failure | Hosts use alternate paths to maintain volume access. | No effect observed |
| Single array node failure | Hosts use alternate paths to the surviving array node(s) at the site and maintain volume access. | No effect observed |
| Single storage inter-site link failure | No effect; inter-site connectivity is maintained by an alternate link. | No effect observed |
| All storage inter-site links fail | Peer volume synchronization is disabled and Automated Transparent Failover is disabled. | No effect observed |
| Quorum Witness failure | Automated Transparent Failover is disabled. | No effect observed |
| Simultaneous Quorum Witness and storage inter-site link failure | Peer volume synchronization is disabled and Automated Transparent Failover is disabled. | No effect observed |
| Single-site storage array failure | Automated failover occurs; peer volumes and paths are made active on the surviving site. | No effect observed |
| Complete site failure | Automated failover occurs; peer volumes and paths are made active on the surviving site. | Virtual machines are restarted on ESXi hosts at the surviving site |