Precheck:
- It is highly recommended to take a full backup of the vCenter Server before this activity, and the vCenter Server virtual machine should have no snapshots.
- All VMs in the vSAN cluster must be assigned a storage policy with FTT=1 (for example, the vSAN Default Storage Policy) or higher, and all VMs must be compliant with their storage policy.
- The vCenter Server must be running on a vSphere Standard Switch to communicate with the external world, and at least two ESXi hosts in the vSAN cluster should be configured with a Standard Switch so that the vCenter Server can be migrated to each of them to validate that the Web Client still works normally.
- If an ESXi host has not been rebooted for a very long time (more than 180 days), it is advisable to reboot one host at a time in a rolling manner to ensure stale vmkernel/hostd processes are cleaned up and no unforeseen hardware or software errors surface during the activity. Put each host into maintenance mode with Ensure Accessibility (the default maintenance mode) before the reboot, for example as sketched below.
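Note: A minimal sketch of the rolling-reboot flow from the ESXi shell, assuming SSH access to the host; run the last command only after the host comes back up:
- esxcli system maintenanceMode set -e true -m ensureObjectAccessibility
- reboot
- esxcli system maintenanceMode set -e false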
To enable EVC on a vSAN cluster when vCenter Server is installed on a virtual machine running in the cluster:
- Connect to vCenter Server using the vSphere Web Client.
- Right-click a data center in the inventory and select New Cluster.
- Type a name for the cluster.
- Expand EVC and select a baseline CPU feature set from the EVC mode drop-down menu.
- Enable vSAN on the new cluster with the same feature settings as the existing cluster.
- SSH to all member hosts in the old vSAN cluster and run the following command to disable vSAN unicast updates:
- esxcfg-advcfg -s 1 /VSAN/IgnoreClusterMemberListUpdates
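Note: As a sanity check (assuming ESXi shell access), the setting can be read back on each host; it should return 1:
- esxcfg-advcfg -g /VSAN/IgnoreClusterMemberListUpdates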
- Select the first host in the cluster and migrate all powered-on VMs it owns to other hosts in the cluster.
- Put the host into Maintenance Mode with No data migration
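Note: If you prefer the ESXi shell, an equivalent sketch (the No data migration option corresponds to the noAction vSAN mode):
- esxcli system maintenanceMode set -e true -m noAction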
- Right-click the ESXi host, select Connection, and then Disconnect.
- Drag and drop the ESXi host into the new EVC vSAN cluster
- Right-click on the host, select Connection, and then Connect.
- Exit from Maintenance Mode
- Using the VMware Host Client, directly connect to the ESXi host that is hosting the vCenter Server virtual machine.
- Right-click the vCenter Server virtual machine and click Edit Settings.
- Click the VM Options tab.
- Click General Options.
- Make a note of the host on which the vCenter Server virtual machine is running, and of the location and name of its configuration file (.vmx) on the datastore. This information is required later when you unregister and re-register the virtual machine.
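Note: The same information can also be read from the host's ESXi shell; vim-cmd lists each VM's ID, name, and .vmx datastore path (the "vcenter" filter below assumes the VM name contains that string):
- vim-cmd vmsvc/getallvms | grep -i vcenter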
- Power off the vCenter Server virtual machine.
- Connect to the host noted above using the Host Client.
- Right-click the vCenter Server virtual machine and click Unregister.
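Note: A shell sketch for the power-off and unregister steps, where <vmid> is a placeholder for the ID shown by vim-cmd vmsvc/getallvms:
- vim-cmd vmsvc/power.off <vmid>
- vim-cmd vmsvc/unregister <vmid>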
- Using the Host Client, connect directly to the ESXi host that is in the EVC cluster.
- Browse the datastore that contains the configuration file of the vCenter Server virtual machine, as noted earlier.
- Right-click the virtual machine configuration file and click Register VM.
- Power on the vCenter Server virtual machine.
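Note: The register and power-on steps can likewise be done from the target host's ESXi shell; the .vmx path below is an illustrative placeholder for the path noted earlier, and registervm prints the new VM ID to use with power.on:
- vim-cmd solo/registervm /vmfs/volumes/<datastore>/<vCenter_VM>/<vCenter_VM>.vmx
- vim-cmd vmsvc/power.on <vmid>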
- Using the vSphere Web Client, connect to the vCenter Server.
- You now have vCenter Server running in a virtual machine on an ESXi host that is in the EVC cluster. All other virtual machines are still running on ESXi hosts outside of the EVC cluster.
- To add the ESXi hosts that are still outside the EVC vSAN cluster, first move the virtual machines off those hosts.
- Note: You can attempt to migrate these virtual machines (while powered on) to an ESXi host that is already in the EVC vSAN cluster. If this migration fails (for example, due to the EVC baseline configuration), power off the virtual machines and then migrate them to an ESXi host in the EVC vSAN cluster.
- After all the virtual machines are moved from the ESXi host, right-click the host and click Disconnect to disconnect it from the vCenter Server Inventory.
- Drag and drop the disconnected host into the EVC vSAN cluster.
- Right-click the ESXi host and click Connect to connect it to the vCenter Server Inventory.
- Repeat the move, disconnect, drag-and-drop, and reconnect steps for each remaining ESXi host until all hosts are part of the EVC vSAN cluster.
- SSH to all hosts in the cluster and run the following command to re-enable vSAN unicast updates:
- esxcfg-advcfg -s 0 /VSAN/IgnoreClusterMemberListUpdates
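Note: Verify on each host that the value is back to 0 and that the host sees the full cluster membership (Sub-Cluster Member Count in the output should match the number of hosts):
- esxcfg-advcfg -g /VSAN/IgnoreClusterMemberListUpdates
- esxcli vsan cluster get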
- Run the vSAN health check on the EVC vSAN cluster and make sure no network partition is reported. Also check the "vCenter state is authoritative" health check item; if it reports an error, click the remediation button to force an update of the vSAN unicast generation ID.
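Note: The health results can also be listed per host from the ESXi shell (available with vSAN 6.6 and later):
- esxcli vsan health cluster list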
For a stretched cluster or a 2-node cluster, follow the steps below:
- Follow steps 1-5 above (through enabling vSAN on the new cluster), but do not enable the stretched cluster configuration just yet; this is done in a later step.
- Create an FTT=0 storage policy (for 2-node clusters only).
- Apply this policy to all VM namespace folders (for 2-node clusters only).
- Follow steps 6-30 above (from disabling vSAN unicast updates through moving all hosts into the new EVC vSAN cluster).
- Manually remove the witness unicast entry from all data nodes with the following command:
- esxcli vsan cluster unicastagent remove -a <Witness_VSAN_IP>
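Note: Verify on each data node that the witness entry is gone; the remaining entries should only be the other data nodes:
- esxcli vsan cluster unicastagent list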
- Deploy a new witness appliance
- Enable the stretched cluster using the newly deployed witness host.
Note: Steps 2 and 3 address a problem where the vswp object cannot be created when you power on a VM on the destination host during the cluster migration. You can set the VM namespace folders back to FTT=1 once the cluster migration is complete.
Note: In step 5, vSAN objects will become non-compliant due to the lack of a witness component.
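Note: Once the stretched cluster is enabled with the new witness, compliance should recover; one way to confirm object health from a host's ESXi shell (vSAN 6.6 and later):
- esxcli vsan debug object health summary get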