DataCore SANsymphony Metro Storage Solutions


Article ID: 312184


Updated On:

Products

VMware vSphere ESXi

Issue/Introduction

This document demonstrates how DataCore™ SANsymphony™ storage nodes, installed either as a SAN or as a hyperconverged SDS solution, should be configured in a stretched cluster to ensure that VMware virtual machines maintain continuous access to primary storage. In this configuration, ESXi hosts are configured in a VMware Metro Storage Cluster with a non-uniform, highly available storage system and data at multiple sites. To verify this, common failure scenarios are enacted in a test environment to observe the reaction to failure and confirm continuous access to storage.
This document is intended for VMware administrators to promote an understanding of stretched-cluster compatibility between VMware vSphere and DataCore SANsymphony. This article assumes that the reader is familiar with VMware vSphere, VMware vCenter Server, VMware vSphere High Availability (vSphere HA), VMware vSphere Distributed Resource Scheduler (vSphere DRS), VMware vSphere Storage DRS, and replication and storage clustering technology and terminology.

This solution is partner supported. For more information, see Partner Verified and Supported Products (PVSP).


Environment

VMware vSphere ESXi 5.1
VMware vSphere ESXi 5.5
VMware vSphere ESXi 6.0
VMware vSphere ESXi 6.5
VMware vSphere ESXi 6.7
VMware vSphere ESXi 7.0

Resolution

What is DataCore SANsymphony™?
DataCore Software-Defined Storage (SDS) solutions provide comprehensive and universal storage services that extend the capabilities of the storage devices or systems managed by SANsymphony software. The technical description here is limited to the synchronous mirroring feature, ALUA, and their interoperation with Native Multipathing (NMP), since these features are most directly related to vSphere Metro Storage Cluster (vMSC).

As a software package, the solution can run on dedicated x86 servers or as a virtual machine (VM) on the hypervisor host. When installed on dedicated servers, SANsymphony presents any number of shared, highly available, multipathed SCSI (block) disks to the hypervisor clusters over conventional Fibre Channel (FC) or iSCSI SAN networks. For iSCSI, the initiating host may use the native kernel iSCSI initiator, an iSCSI HBA, or both. The iSCSI target driver on the storage server follows SCSI specifications.
When running in a VM, the software presents any number of shared, highly available, multipathed iSCSI (block) disks to the local and remote hypervisor hosts over iSCSI networks. In either configuration, the logical diagram is the same. For a given SANsymphony disk, the two active software instances act as a pair of well-behaved logical disk controllers that present active/active block SCSI LUNs to the clustered hypervisor hosts.
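To make the ALUA and Round Robin behavior described above concrete, the following minimal Python sketch models how a Round Robin path selection policy restricts I/O to active-optimized paths (the local, preferred SANsymphony node) and transparently falls back to active-non-optimized paths (the remote mirror node) when the preferred controller becomes unavailable. This is an illustrative model only: the path names, the device, and the selection logic are simplified assumptions, not ESXi NMP internals.

from dataclasses import dataclass
from enum import Enum
from itertools import cycle

class AluaState(Enum):
    ACTIVE_OPTIMIZED = "AO"        # local (preferred) SANsymphony node
    ACTIVE_NON_OPTIMIZED = "ANO"   # remote mirror node
    UNAVAILABLE = "UNAV"           # path or controller down

@dataclass
class Path:
    name: str
    state: AluaState

def usable_paths(paths):
    """Round Robin cycles only over active-optimized paths while any exist;
    otherwise it falls back to the active-non-optimized paths."""
    ao = [p for p in paths if p.state is AluaState.ACTIVE_OPTIMIZED]
    return ao if ao else [p for p in paths if p.state is AluaState.ACTIVE_NON_OPTIMIZED]

# Hypothetical mirrored virtual disk with two paths per SANsymphony node.
paths = [
    Path("vmhba2:C0:T0:L1", AluaState.ACTIVE_OPTIMIZED),
    Path("vmhba3:C0:T0:L1", AluaState.ACTIVE_OPTIMIZED),
    Path("vmhba2:C0:T1:L1", AluaState.ACTIVE_NON_OPTIMIZED),
    Path("vmhba3:C0:T1:L1", AluaState.ACTIVE_NON_OPTIMIZED),
]

rr = cycle(usable_paths(paths))
print("I/O uses:", next(rr).name, "then", next(rr).name)

# Simulate losing the local (preferred) node: the ALUA states change and
# Round Robin transparently continues on the remote mirror paths.
for p in paths[:2]:
    p.state = AluaState.UNAVAILABLE
rr = cycle(usable_paths(paths))
print("After failover:", next(rr).name, "then", next(rr).name)

The point the sketch mirrors is that, with ALUA, failover requires no reconfiguration on the host: the reported path states change, and Round Robin simply cycles over whatever usable paths remain.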
[Figure: SANsymphony Metro Storage Architecture]
The block diagram above illustrates the configurations conceptually. Redundant physical and logical paths are assumed but not drawn, for simplicity. For configuration details, refer to the SANsymphony document VMware ESXi Configuration Guide and, if appropriate, the SANsymphony Best Practice Guides.
 
Configuration Requirements
  • An independent link between sites with a maximum round-trip latency of five milliseconds provides connectivity for management (TCP/IP).
  • Two additional independent links, each with a maximum round-trip latency of five milliseconds, provide connectivity and redundancy for the SCSI (iSCSI or Fibre Channel) traffic used for synchronous mirroring. (A latency check sketch appears after this list.)
  • For management and vMotion traffic, the ESXi hosts in both datacenters must have a private network on the same IP subnet and broadcast domain. Preferably, management and vMotion traffic are on separate networks.
  • Any IP subnet on which a virtual machine resides must be accessible from ESXi hosts in both datacenters. This requirement is important so that clients accessing virtual machines running on ESXi hosts on both sides can continue to function smoothly after any vSphere HA triggered virtual machine restart events.
  • The data storage locations, including the boot device used by the virtual machines, must be active and accessible from ESXi hosts in both datacenters.
  • A VMFS-5 or later file system must be used, newly created rather than upgraded from a previous VMFS version.
  • vCenter Server must be able to connect to ESXi hosts in both datacenters.
  • For VMs on vSphere nodes separated by distance, each node should access a local datastore (mirrored vDisk). ALUA and Round Robin should be selected, and the local SANsymphony server node should be specified as the Preferred Server for that mirrored vDisk. This ensures sufficient performance.
  • The VMware datastores for the virtual machines running in the ESXi cluster are provisioned on shared virtual volumes if the vSphere environment is configured to use them.
  • When setting up a VMware Fault Tolerance or High Availability cluster in which virtual disks are shared between two or more ESXi hosts, make sure that no host connection to any SANsymphony front-end (FE) port shares a physical link with the mirror (MR) connections between SANsymphony instances.
  • When using the ESXi software iSCSI initiator, do not configure VMkernel iSCSI port binding.
  • For more information, see the SANsymphony: The Host Server - VMware ESXi Configuration Guide.
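As referenced in the latency requirements above (maximum five milliseconds round trip for both the management link and the mirror links), the inter-site latency can be sanity-checked before deployment. The following minimal Python sketch assumes a POSIX host whose ping utility prints the usual min/avg/max summary line, and uses a hypothetical peer address at the remote site; it approximates latency with ICMP and is a pre-deployment aid, not part of SANsymphony or ESXi tooling.

import re
import subprocess
import sys

MAX_RTT_MS = 5.0          # maximum round-trip latency from the requirements above
PEER = "192.0.2.10"       # hypothetical management address at the remote site

def average_rtt_ms(host: str, count: int = 20) -> float:
    """Run ping and parse the average round-trip time in milliseconds.
    Assumes the Linux/BSD 'min/avg/max' summary line in the ping output."""
    out = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    if match is None:
        raise RuntimeError("could not parse ping output")
    return float(match.group(1))

rtt = average_rtt_ms(PEER)
print(f"average RTT to {PEER}: {rtt:.2f} ms")
if rtt > MAX_RTT_MS:
    sys.exit(f"inter-site latency {rtt:.2f} ms exceeds the {MAX_RTT_MS} ms limit")

Note that ICMP round-trip time only approximates the latency that mirror traffic will see on the iSCSI or Fibre Channel links; a sustained measurement under load on the actual mirror network is the more meaningful test.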

Additional Information