This article describes support for vMotion over Distance operations across VPLEX Metro using Distributed Virtual Volumes. It also specifies the VMware ESXi/ESX host and VPLEX configuration details and the expected behavior under different operational scenarios.
Note: See the best-practice documents referenced in this article for additional requirements for VPLEX Distributed Virtual Volumes.
| Case | Support |
|---|---|
| Simultaneous access to a shared Distributed Virtual Volume from two separate ESXi/ESX clusters | Supported |
| vMotion from a host in ESXi/ESX cluster 1 / data center 1 to a host in ESXi/ESX cluster 2 / data center 2, leveraging the shared Distributed Virtual Volume | Supported |
| Scenario | VPLEX Behavior | Impact on ESXi/ESX hosts |
|---|---|---|
| Loss of ESXi/ESX server at source | No impact | Virtual machines that were running on the source ESXi/ESX server can be registered and restarted on the destination ESXi/ESX server. |
| Loss of ESXi/ESX server at destination | No impact | Virtual machines that were running on the destination ESXi/ESX server can be registered and restarted on the source ESXi/ESX server. |
| VPLEX cluster failure at source | I/O can be resumed on the VPLEX cluster at the destination. | Virtual machines fail on ESXi/ESX servers at both source and destination. They can be restarted on the destination ESXi/ESX server. |
| VPLEX cluster failure at destination | VPLEX continues to process I/O at the source. When the VPLEX cluster at the destination data center is back online, any changes written during the outage are replicated to the destination mirror leg of the Distributed volume. | No impact on the source ESXi/ESX server. Virtual machines on the destination ESXi/ESX server fail and can be restarted on the source ESXi/ESX server. |
| Total Data center 1 failure (total site failure) | I/O can be resumed on the VPLEX cluster at the destination data center. | Virtual machines fail at the destination ESXi/ESX server. Virtual machines from both source and destination ESXi/ESX servers can be restarted on the destination ESXi/ESX server. |
| Total Data center 2 failure (total site failure) | VPLEX continues to process I/O at the source data center. When the VPLEX cluster at the destination data center is back online, any changes written during the outage are replicated to the destination mirror leg of the Distributed volume. | No impact on the source ESXi/ESX server; an in-progress vMotion fails. Virtual machines at the destination ESXi/ESX server can be restarted on the source ESXi/ESX server with manual intervention. |
| Inter-site network failure (network partition) | The VPLEX winner cluster at Data center 1 continues to function. | No impact on the source ESXi/ESX server. The destination ESXi/ESX server cannot perform I/O to VPLEX Distributed volumes; manual intervention is required to resume I/O at the destination data center. |
| VPLEX inter-cluster communication link failure (network partition) | The VPLEX winner cluster at Data center 1 continues to function. | The destination ESXi/ESX server cannot perform I/O to VPLEX Distributed Virtual Volumes; manual intervention is required to resume I/O at the destination. |
| VPLEX director failure | No impact. I/O continues on the remaining directors. | No impact. Under some conditions, a director failure may halt a vMotion that is already in progress; however, the virtual machine continues to run at the source, and the vMotion can be re-initiated and completed as expected. |
| VPLEX management server failure | No impact | No impact |
| Redundant front-end path failure | No impact | No impact |
| Redundant back-end path failure | No impact | No impact |
| Back-end array failure on Data center 1 | No impact. VPLEX automatically starts a rebuild when the failed back-end array is back online. The rebuild may affect host I/O response time; rebuild parameters can be adjusted to minimize this effect. Contact EMC for more details. | ESXi/ESX may observe slower I/O responses on the Distributed Virtual Volume due to rebuilds. Rebuild parameters can be adjusted to minimize the effect on host I/O response time. Contact EMC for more details. |
| Back-end array failure on Data center 2 | No impact. VPLEX automatically starts a rebuild when the failed back-end array is back online. The rebuild may affect host I/O response time; rebuild parameters can be adjusted to minimize this effect. Contact EMC for more details. | ESXi/ESX may observe slower I/O responses on the Distributed Virtual Volume due to rebuilds. Rebuild parameters can be adjusted to minimize the effect on host I/O response time. Contact EMC for more details. |
Note: The preceding links were correct as of February 13, 2015. If you find a link is broken, provide feedback and a VMware employee will update the link.
| Term | Definition |
|---|---|
| Distributed Virtual Volume | A VPLEX virtual volume with complete, synchronized copies of data (mirrors) exposed through two geographically separated VPLEX clusters. Distributed Virtual Volumes can be accessed simultaneously by servers at distant data centers, enabling vMotion over Distance. |
| Winner Cluster | When a VPLEX Distributed Virtual Volume is created, a winner VPLEX cluster must be assigned. In the event of a VPLEX inter-cluster communication failure, the winner cluster continues to service I/O destined to the Distributed Virtual Volume, while the loser cluster does not service I/O received on it. This behavior prevents split-brain conditions and data corruption during a VPLEX inter-cluster network partition. |
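The winner-cluster rule can be illustrated with a minimal model. This is a sketch under assumed names, not a VPLEX API: while the inter-cluster link is healthy, writes are mirrored to both legs; during a partition, only the preassigned winner keeps servicing I/O, so the two mirror legs can never diverge.

```python
class DistributedVolume:
    """Toy model of a Distributed Virtual Volume with a winner cluster.

    Illustrative only; the class and method names are invented for this sketch.
    """

    def __init__(self, winner: str):
        self.winner = winner           # cluster assigned at volume creation
        self.partitioned = False       # inter-cluster link state
        self.legs = {"cluster-1": [], "cluster-2": []}  # mirrored write logs

    def write(self, cluster: str, block: bytes) -> bool:
        if not self.partitioned:
            # Healthy link: writes are synchronously mirrored to both legs.
            for leg in self.legs.values():
                leg.append(block)
            return True
        if cluster == self.winner:
            # Partition: only the winner continues to service I/O.
            self.legs[cluster].append(block)
            return True
        # The loser cluster suspends I/O, preventing split-brain divergence.
        return False

vol = DistributedVolume(winner="cluster-1")
vol.write("cluster-2", b"a")             # mirrored to both legs
vol.partitioned = True
assert vol.write("cluster-1", b"b")      # winner keeps servicing I/O
assert not vol.write("cluster-2", b"c")  # loser rejects I/O
```

Because the loser suspends rather than diverging, resuming I/O at the loser site after a real partition requires the manual intervention noted in the scenario table above.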