VMware Cloud Foundation vSAN Health status failed after manually adding hosts
Article ID: 326837

Updated On:

Products

VMware Cloud Foundation

Issue/Introduction

 


Symptoms:
vSAN health status failed after manually adding hosts to a VMware Cloud Foundation workload domain.
 
In the example screenshot provided, note the cluster failure: Advanced Virtual SAN configuration in sync.
 
The health check reports the exact misconfiguration in the VSAN.goto11 and Net.TcpipHeapMax settings.
 
 
 


Environment

VMware Cloud Foundation 2.1.x

Resolution

This issue appears after imaging hosts and manually commissioning them in a VMware Cloud Foundation workload domain. For more information, see the Adding and Replacing Hosts section in the Administering VMware Cloud Foundation Guide.
 
During initial deployment, hosts are configured to support up to 64 hosts in the vSAN cluster. The documented procedure for imaging hosts does not set these values, so manually commissioned hosts end up out of sync with the rest of the cluster.
 
To reconfigure new hosts:
  1. Open an SSH session to the first ESXi host.
  2. Configure the advanced settings for increased node support on each host in the cluster:

esxcli system settings advanced set -o /VSAN/goto11 -i 1
esxcfg-advcfg -g /VSAN/goto11 (To confirm the change)


 

  3. Increase the TCP/IP heap size:

esxcli system settings advanced set -o /Net/TcpipHeapMax -i 1024

  4. Put the ESXi host in maintenance mode and restart it.

Note: Use the Ensure Accessibility option.

  5. Exit maintenance mode.
  6. Repeat steps 1-5 for each host in the cluster.
  7. Verify the vSAN health.
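The per-host commands in steps 2 and 3 can be scripted when reconfiguring many hosts. The sketch below is a dry run only: it prints the commands you would run over SSH against each host rather than executing them, and the host names are hypothetical placeholders for your own ESXi hosts.

```shell
#!/bin/sh
# Dry-run sketch: print the reconfiguration commands for each host.
# Host names passed as arguments are hypothetical examples.
gen_vsan_cmds() {
  for host in "$@"; do
    echo "# --- $host ---"
    echo "ssh root@$host esxcli system settings advanced set -o /VSAN/goto11 -i 1"
    echo "ssh root@$host esxcfg-advcfg -g /VSAN/goto11"
    echo "ssh root@$host esxcli system settings advanced set -o /Net/TcpipHeapMax -i 1024"
  done
}

# Example usage with two placeholder hosts:
gen_vsan_cmds esxi-01.example.local esxi-02.example.local
```

Review the printed commands, then run them (and the maintenance-mode restart in steps 4-5) one host at a time so the cluster keeps quorum.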

For more information on the defaults applied to VMware Cloud Foundation hosts, see Creating a vSAN 6.x cluster with up to 64 hosts (2110081).


Additional Information

Creating a vSAN 6.x cluster with up to 64 hosts

Impact/Risks:

Note: Any configuration changes, modifications, or host reboots can result in downtime of VMs.