VMware vSphere Metro Storage Cluster (vMSC) with Dell PowerMax and Dell EMC VMAX SRDF/Metro (Partner Verified and Supported)
Article ID: 323113

Products

VMware vSphere ESXi

Issue/Introduction

This article provides information about deploying a Metro Storage Cluster across two data centers using Dell PowerMax or Dell EMC VMAX SRDF/Metro with VMware vSphere.

Note: The Partner Verified and Supported Products (PVSP) policy implies that the solution is not directly supported by VMware. For issues with this configuration, contact Dell directly. For more information on engaging partners with VMware, see the Support Workflow. It is the partner's responsibility to verify that the configuration functions with future vSphere major and minor releases, as VMware does not guarantee that compatibility with future releases is maintained.

Disclaimer: The partner products referenced in this article are hardware devices or software that are developed and supported by stated partners. Use of these products is also governed by the end user license agreements of the partners. You must obtain the application, support and licensing for using these products from the partners. For more information, see Dell Support.

Note: The preceding link was correct as of May 2022. If you find the link is broken, provide feedback and a VMware employee will update the link.


Environment

VMware vSphere ESXi 5.5
VMware vSphere ESXi 6.0
VMware vSphere ESXi 6.5
VMware vSphere ESXi 6.7
VMware vSphere ESXi 7.0.0

Resolution

Dell EMC VMAX All Flash and Dell PowerMax are families of mission-critical enterprise storage systems, engineered from the ground up to realize the full potential of flash technology. They combine high scale, low latency, and rich data services that empower software-defined data centers to run the most demanding customer workloads. PowerMax automates service-level delivery securely across IT infrastructures, acting as a data services and cloud broker that directs workloads to the most suitable architecture based on application service level objectives.

The SRDF (Symmetrix Remote Data Facility) family of software is the gold standard for remote replication in mission critical environments. Built for the industry-leading high-end VMAX, VMAX All Flash, and PowerMax hardware architecture, the SRDF family of solutions is trusted for disaster recovery and business continuity.

SRDF/Metro enables a high-availability solution at Metro distances with active-active read/write capabilities at both sites.

For details of supported configurations, refer to the Dell EMC E-Lab Interoperability Navigator.
 

Minimum Requirements:

These are the minimum system requirements for a vMSC solution with SRDF/Metro:
  • Dell EMC VMAX3 or Dell EMC VMAX All Flash with HYPERMAX OS 5977.691.684 (SR Q32015), Solutions Enabler 8.1/Unisphere 8.1; or PowerMax with PowerMaxOS 5978.144.144, Solutions Enabler 9.0/Unisphere 9.0
  • SRDF/Metro license for VMAX3, VMAX All Flash, or PowerMax array.
  • The arrays must connect to the ESXi hosts using Fibre Channel (FC) or iSCSI (iSCSI is supported only with HYPERMAX OS 5977.810.784 and higher, and with PowerMaxOS 5978.144.144 and higher). NVMe-oF is not supported.
  • ESXi 6.5, 6.7 or 7.0
  • PowerPath/VE or NMP - SRDF/Metro supports vSphere 6.5 with PowerPath/VE 6 and higher. Starting with vSphere 6.5 U1, NMP is supported.
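
As a quick check of the ESXi and multipathing requirements above, the ESXi build and the multipathing plug-in that owns the SRDF/Metro devices can be confirmed from the ESXi shell. This is a minimal sketch only; output formats vary by release, and the PowerPath/VE VIB names are assumed to contain "powerpath":

  # Confirm the ESXi version and build
  vmware -vl

  # List devices claimed by NMP (devices claimed by PowerPath/VE will not appear here)
  esxcli storage nmp device list

  # Check whether the PowerPath/VE VIBs are installed
  esxcli software vib list | grep -i powerpath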

Notes:
  • Dell EMC requires an approved RPQ (Request for Product Qualification) for ESXi support only with the initial HYPERMAX OS release, 5977.691.684. HYPERMAX OS 5977.810.784 and higher do not require an RPQ; ESXi support is built into those releases. All PowerMaxOS versions are supported. For details of supported configurations, refer to the Dell EMC E-Lab Interoperability Navigator.
Solution Overview

In traditional SRDF, R1 devices are Read/Write accessible and R2 devices are Read Only/Write Disabled. In SRDF/Metro configurations:
  • R2 devices are Read/Write accessible to hosts.
  • Hosts can read/write to both the R1 and R2 side of the device pair.
  • R2 devices assume the same external device identity (geometry, device WWN) as their R1.
This shared identity causes the R1 and R2 devices to appear to hosts as a single distributed device across the two arrays.
SRDF/Metro can be deployed with either a single multi-pathed host or with a clustered host environment, as shown in Figure 1. For details of supported configurations, refer to the Dell EMC E-Lab Interoperability Navigator.



Figure 1 SRDF/Metro Configurations

For single host configurations, host IOs are issued by a single host. Multi-pathing software directs parallel reads and writes to each array.

For clustered host configurations, host IOs can be issued by multiple hosts accessing both sides of the SRDF device pair. Each cluster node has dedicated access to an individual array. This is a non-uniform or stretch cluster setup. A clustered setup can also be configured so each host has access to both arrays. This is a uniform cluster setup.

In both single host and clustered configurations, writes to the R1 or R2 devices are synchronously copied to the paired device. Write conflicts are resolved by the SRDF/Metro software to maintain consistent images on the SRDF device pairs. The R1 device and its paired R2 device appear to the host as a single virtualized device.
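
Because the paired devices share one external identity, an ESXi host in a uniform configuration sees a single device with paths to both arrays. A minimal sketch for confirming this from the ESXi shell; the NAA identifier below is a placeholder:

  # Show the device details (one NAA identifier represents the R1/R2 pair)
  esxcli storage core device list -d naa.60000970000197900000000000000000

  # List all paths for that device; in a uniform setup, paths to both the
  # R1 and R2 arrays appear under the same device
  esxcli storage core path list -d naa.60000970000197900000000000000000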

SRDF/Metro Witness:

In the event of link or other failures, SRDF/Metro uses one of two options to determine which side of a device pair remains accessible to the host. These methods are:

  • Bias option: Device pairs for SRDF/Metro are created with an attribute called use_bias. By default, the createpair operation sets the bias to the R1 side of the pair. That is, if the device pair becomes Not Ready (NR) on the RDF link, the R1 (bias side) remains accessible to the host(s), and the R2 (non-bias side) is inaccessible to the host(s).
    Once all RDF device pairs in the RDF group have reached the ActiveActive or ActiveBias pair state, the bias side can be changed (see the command sketch after this list).
  • Witness array option: In the event of a failure, a witness is able to determine which side, the R1 or R2, should survive. There are two types of witnesses, physical and virtual.
    • Physical: PowerMaxOS, HYPERMAX OS, or Enginuity on a third array monitors SRDF/Metro, determines the type of failure, and uses the information to choose one side of the device pair to remain R/W accessible to the host.
    • Virtual: A virtual appliance or vWitness software running on a supported operating system monitors SRDF/Metro, determines the type of failure, and uses the information to choose one side of the device pair to remain R/W accessible to the host.
The Witness option is the default and recommended.
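
For illustration, the following Solutions Enabler (SYMCLI) sketch shows how a bias-protected SRDF/Metro pair might be created and how the bias side can later be moved. The SID, RDF group number, and device file are placeholders, and the exact options can vary by Solutions Enabler release, so treat this as a sketch rather than authoritative syntax:

  # Create SRDF/Metro device pairs protected by bias instead of a witness
  # (pairs.txt is a placeholder device file listing the R1/R2 pairs)
  symrdf createpair -sid 001 -rdfg 10 -file pairs.txt -type R1 -metro -use_bias -establish

  # After the pairs reach the ActiveBias state, move the bias to the R2 side
  symrdf -sid 001 -rdfg 10 set bias R2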

Note: Any of the following can be used as a Physical Witness array:
  • VMAX3 running HYPERMAX OS 5977.691.684 or higher.
  • VMAX All Flash running HYPERMAX OS 5977.691.684 or higher.
  • VMAX running Enginuity 5876.286.194 with the required Witness Package.
  • PowerMax running PowerMaxOS 5978.144.144 or higher.
Multiple witnesses are supported to provide redundancy. They can be physical or virtual, or a combination of both.

For more details, please refer to the latest PowerMax or VMAX Product Guide on Dell EMC Support.

SRDF/Metro Cluster Support

SRDF/Metro with HYPERMAX OS release 5977.810.784 or later introduces full cluster support with ESXi native clusters over PowerPath/VE or NMP (native multipathing) with various applications, without any RPQ. Both uniform and non-uniform clusters are supported. For a uniform configuration, the best practice is to use PowerPath/VE, or NMP with the Round Robin (RR) Path Selection Plug-in (PSP). See the recommendations section for important details concerning the NMP configuration.

For details of supported configurations, refer to the Dell EMC E-Lab Interoperability Navigator.

Starting with Dell EMC Unisphere for PowerMax and Solutions Enabler 9.1 on arrays running PowerMaxOS 5978.444.44 or later, online expansion of devices taking part in SRDF/Metro (Active) sessions is supported.
This feature provides the following functionality:
  • Adds support for devices in the SRDF/Metro Active or Suspended pair states
  • Expansion does not impact read/write performance for the associated devices or applications
  • Supports the SRDF/Metro R1/R2 topology with a single command/operation
For more detailed information, refer to Dell EMC PowerMax and VMAX All Flash: SRDF/Metro Overview and Best Practices.
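
After an SRDF/Metro device has been expanded on the array side, the additional capacity must be picked up on the vSphere side before the VMFS datastore can be grown. A minimal sketch from the ESXi shell follows; the NAA identifier is a placeholder:

  # Rescan so ESXi detects the new device size
  esxcli storage core adapter rescan --all

  # Confirm the size now reported for the expanded device
  esxcli storage core device list -d naa.600009700001979000000000000000AA

The VMFS datastore can then be grown into the new capacity, for example through the Increase Datastore Capacity workflow in the vSphere Client.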

SRDF/Metro VAAI Support

SRDF/Metro with HYPERMAX OS 5977.945.890 and higher, and all PowerMaxOS releases, supports all VAAI commands. HYPERMAX OS release 5977.810.784 supports all VAAI commands (ATS, UNMAP, WRITE SAME) except XCOPY.
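
To confirm which VAAI primitives ESXi reports for a given SRDF/Metro device, the hardware acceleration status can be queried from the ESXi shell. A minimal sketch with a placeholder NAA identifier:

  # Show VAAI (hardware acceleration) support as seen by ESXi for the device
  esxcli storage core device vaai status get -d naa.600009700001979000000000000000AA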

SRDF/Metro Connectivity with ESXi

SRDF/Metro with HYPERMAX OS release 5977.810.784 or later, and all PowerMaxOS releases, supports both Fibre Channel (FC) and iSCSI protocols for connecting to ESXi. NVMe-oF is not supported.

For details of supported configurations, refer to the Dell EMC E-Lab Interoperability Navigator.
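
Where iSCSI is used, the array's iSCSI targets are added to the ESXi iSCSI adapter in the usual way. A minimal sketch, assuming vmhba64 is the software iSCSI adapter and 192.0.2.10 is a placeholder target portal address on the array:

  # Add the array's iSCSI target portal as a dynamic (SendTargets) discovery address
  esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 192.0.2.10:3260

  # Rescan the adapter to discover the SRDF/Metro devices presented over iSCSI
  esxcli storage core adapter rescan --adapter vmhba64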

Recommendations and limitations
  • Dell recommends using a non-uniform cluster configuration to avoid issues with latency and write conflicts; however, both non-uniform and uniform configurations are supported.
  • Dell recommends masking the R2 devices to the ESXi hosts only after SRDF/Metro reaches the ActiveActive or ActiveBias state, whether using a uniform or non-uniform configuration. The new paths can be detected with a manual rescan, or they will be detected automatically by PowerPath/VE or NMP. Automatic detection with NMP may be delayed; this can be improved by changing the path polling time to 30 seconds with the tunable Disk.PathEvalTime (see the command sketch after this list). For more information, see Changing the polling time for datastore paths (1004378).
  • Dell recommends the use of PowerPath/VE over NMP for better error handling and congestion management, though both are supported.
  • For a uniform cluster configuration, Dell recommends using PowerPath/VE with Autostandby so that local paths are always used unless there is a failure. If using NMP, the Round Robin path selection policy should be left at its default of iops=1000, not set to iops=1 as is otherwise commonly recommended. When running vSphere 6.7 U1 or higher, latency-based Round Robin should be used with NMP instead (see the command sketch after this list).
  • Multiple witnesses are recommended, and it is best for the witnesses to be in a different fault domain.
  • Dell supports SRDF/Metro with VMware SRM for both two- and three-site configurations. See Implementing Dell EMC SRDF SRA with VMware SRM for more information.
  • Dell supports VMware three-site configurations with SRDF/Metro, known as MetroDR, but not with VMware SRM.
For complete details on recommendations and limitations, please refer to the latest Release Notes and Product Guide on Dell EMC Support.
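
The NMP-related recommendations above can be applied from the ESXi shell. A hedged sketch follows; the NAA identifier is a placeholder, and the settings should be validated against the current Dell and VMware guidance for the ESXi release in use:

  # Reduce the path evaluation interval to 30 seconds so path changes are noticed sooner
  esxcli system settings advanced set -o /Disk/PathEvalTime -i 30

  # Keep the Round Robin default of 1000 IOs per path switch on an SRDF/Metro device
  esxcli storage nmp psp roundrobin deviceconfig set -d naa.600009700001979000000000000000AA --type=iops --iops=1000

  # On vSphere 6.7 U1 and later, switch the device to latency-based Round Robin instead
  esxcli storage nmp psp roundrobin deviceconfig set -d naa.600009700001979000000000000000AA --type=latency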

Additional Information

For more information on configuration guidelines and deployment best practices, see Dell EMC Support.

For more in-depth information on SRDF/Metro, please refer to the SRDF/Metro technical note and the white paper Best Practices for Using Dell EMC SRDF/Metro in a VMware vSphere Metro Storage Cluster.
Note: You must be registered with Dell EMC to view the preceding link.

Note: The preceding link was correct as of May 2022. If you find the link is broken, provide feedback and a VMware employee will update the link.