Converting a 3-node vSAN Cluster to a 2-node vSAN Cluster

Article ID: 416191


Products

VMware vSAN

Issue/Introduction

This article provides guidance on how to convert an existing 3-node standard vSAN cluster into a 2-node vSAN cluster with a vSAN Witness Appliance.

  • The environment currently runs a 3-node vSAN cluster.
  • The goal is to repurpose one host and reconfigure the remaining two as a 2-node (ROBO) cluster.
  • Attempting to place a host into maintenance mode using “Evacuate all data to other hosts” fails with an error indicating insufficient resources or that the operation cannot complete.

 

Environment

VMware vSAN 7
VMware vSAN 8

Cause

The “Evacuate all data to other hosts” option requires vSAN to fully migrate all components from the host being placed into maintenance mode to other available hosts.
When removing a host from a 3-node cluster, only two data nodes remain — which is insufficient to maintain the same storage policy compliance for all objects (for example, FTT=1 RAID-1 requires three fault domains).
As a result, a full data evacuation cannot complete successfully.
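
To see in advance which objects would be affected, the per-object layout can be inspected from any data node. The following commands are a sketch and assume the esxcli vsan debug namespace (available in vSAN 6.6 and later):

    esxcli vsan debug object health summary get   <== object counts by health state
    esxcli vsan debug object list   <== per-object component placement across hosts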

Resolution

To convert a 3-node vSAN cluster into a 2-node vSAN cluster:

Prerequisites

Before starting:

  1. Ensure the cluster is healthy (no resyncing or degraded objects). Log in to each ESXi host and check:
    esxcli vsan health cluster list
    esxcli vsan resync summary get
  2. Verify that the vCenter Server managing the vSAN cluster is online.
  3. Deploy a vSAN Witness Appliance compatible with the vSAN version in use.
  4. Confirm network connectivity between the two data hosts and the Witness host (see the vmkping sketch after this list).
  5. Verify that the vSAN license is valid and supports a 2-node configuration (Stretched Cluster, ROBO, or Standard).
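
From each data node, ping the Witness host's vSAN IP over the vSAN vmkernel interface to confirm connectivity. The interface name vmk1 and the witness IP below are placeholders; substitute your own values:

    vmkping -I vmk1 <witness_vsan_ip>   <== basic reachability over the vSAN vmk
    vmkping -I vmk1 -d -s 1472 <witness_vsan_ip>   <== no-fragment test at 1500 MTU (use -s 8972 for jumbo frames)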

Procedure

Step 1: Verify vSAN Cluster Health

  • From vSphere Client:
    Cluster > Monitor > vSAN > Skyline Health; ensure “What if the most consumed host fails” and all other checks are green.

    (Warnings related to “vSAN Build Recommendation”, “vSAN release catalog up-to-date”, and “vSAN HCL DB Auto Update” can be safely ignored.)
  • Ensure there are no active resync operations.

Step 2: Deploy and Configure vSAN Witness Appliance

  1. Deploy the vSAN Witness Appliance OVA.
  2. Connect the witness host to:
    • Management network for vCenter connectivity.
    • vSAN network for communication with the two data nodes.
  3. Add the witness host to vCenter (not inside the vSAN data node cluster); a CLI check of the witness networking is sketched below.
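
To verify the vSAN networking on the witness and on each data node from the CLI, the following checks can be run via SSH (output fields vary slightly by release):

    esxcli vsan network list   <== vmk interface tagged for vSAN (or witness) traffic
    esxcli network ip interface ipv4 get   <== vmk IP addresses, for cross-checking subnets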

Step 3: Place the Node to Be Removed in Maintenance Mode

  1. Select the host to be removed.
  2. Right-click the host → Enter Maintenance Mode.
  3. Select Ensure Accessibility as the data migration mode (a CLI equivalent is sketched after this list).

Do not select “Evacuate all data to other hosts”. This will fail because only two nodes remain.

  4. Wait until the maintenance mode operation completes successfully.
  5. vSAN Cluster Health will show warnings related to “vSAN object health / Stats DB object” while only two data nodes are present.
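
Alternatively, this step can be performed from the host's CLI. The following is a minimal sketch; -m ensureObjectAccessibility corresponds to the Ensure Accessibility option in the wizard:

    esxcli system maintenanceMode set -e true -m ensureObjectAccessibility   <== enter maintenance mode with Ensure Accessibility
    esxcli system maintenanceMode get   <== confirm the host reports Enabled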

Step 4: Remove the Host from the vSAN Cluster

  • Once the host is in maintenance mode:
    1. Remove all disk groups if still present (optional but recommended; a CLI sketch follows this step).
    2. Right-click the host → Remove from Cluster.
  • The cluster now contains two data nodes.
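
If the disk groups are removed from the CLI rather than the vSphere Client, the following is a minimal sketch (run on the host being removed; the cache device name is a placeholder):

    esxcli vsan storage list   <== identify the cache-tier device of each disk group
    esxcli vsan storage remove -s <cache_device>   <== removes the entire disk group fronted by that cache disk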

Step 5: Configure 2-Node vSAN Cluster

 

  1. In vSphere Client, navigate to:
    Cluster → Configure → vSAN → Fault Domains.
  2. Click Configure Stretched Cluster.
  3. Configure fault domains.
  4. Select the Witness host.
  5. Complete the wizard to create the new 2-node + witness configuration (an RVC alternative is sketched after this list).
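
If the vSphere Client wizard cannot be used, the witness can also be configured with RVC on the vCenter Server Appliance. This is a sketch only; the cluster path, witness host path, and preferred fault domain name are placeholders, and the exact syntax should be verified for your release:

    rvc administrator@vsphere.local@localhost
    > vsan.stretchedcluster.config_witness <cluster_path> <witness_host_path> <preferred_fault_domain>
    > vsan.cluster_info <cluster_path>   <== confirm the 2-node + witness layout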

Step 6: Verify Cluster Health and Compliance

  1. Confirm the vSAN cluster shows:
    • 2 data nodes and 1 witness.
    • Cluster Type: 2-Node Configuration.
  2. Check the health status and wait for the data resync to finish.
  3. Review vSAN Skyline Health to ensure all checks are green,
    or use the CLI from a host to check the health status with the following commands:
    esxcli vsan cluster get <== confirm the cluster shows all hosts
    esxcli vsan health cluster list
  4. Verify VM storage policies show “Compliant” or reapply the default policy (FTT=1).

Step 7: (Optional) Decommission the Removed Host

If the removed ESXi host will no longer be part of vSAN, follow the steps below; open a support case if help is needed:

  1. Log in to the host via SSH and run the following command to leave the vSAN cluster:
    esxcli vsan cluster leave
  2. Remove any vSAN disk groups if still present (with -s, pass the cache-tier device name; with -u, a disk UUID):
    esxcli vsan storage remove -s <cache_device>
  3. Disable the vSAN service on the vmk port if not needed:
    esxcli vsan network ipv4 remove -i vmkX
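
The placeholder values in the commands above can be identified from the host itself:

    esxcli vsan storage list   <== disk group devices and their vSAN UUIDs
    esxcli vsan network list   <== vmk interface currently tagged for vSAN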

Verification

After completing the conversion:

  • vSAN Health → All Green
  • Cluster Configuration → 2 data nodes + 1 witness
  • Storage Policy → Compliant
  • Witness communication → Healthy

Additional Information

The following links and documents are helpful for the above task:

Document: Deploying a vSAN Witness Appliance
Document: Two Node vSAN Deployments
KB: Permanently Decommissioning a Node from a vSAN Cluster