Bulk repave of VDI through vRA takes a long time



Article ID: 421191


Updated On:

Products

VMware vSAN

Issue/Introduction

  • In VDI environments, the repave task for virtual desktops takes significantly longer than expected when initiated through vRA automation workflows.
  • The issue is not isolated to a single vSAN cluster and is observed across multiple vSAN clusters.

  • The repave process includes the following steps:

    • Power OFF the VDI virtual machine

    • Deletion of the OS VMDK

    • Copy of OS VMDK from the template

    • Attach the new copied VMDK to the VDI virtual machine

    • Power ON the VDI virtual machine

  • Repave operations take excessive time to complete.

  • Storage policies in the environment are configured as follows:

    • VDI Virtual Machine: <cluster_name>_performance_v1 (RAID-1 vSAN storage policy)

    • Datastore Default Policy: <cluster_name>_protected_v1 (RAID-6 vSAN storage policy)
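The repave sequence above can be sketched as follows. This is a minimal illustration only: the `client` object and its method names (`power_off`, `delete_os_vmdk`, and so on) are hypothetical wrappers, not real pyVmomi or vRA API calls.

```python
def repave_vdi_vm(client, vm_name, template_vmdk):
    """Sketch of the repave sequence for one VDI virtual machine.

    `client` is a hypothetical wrapper around the vSphere API;
    the method names are illustrative, not real SDK calls.
    """
    client.power_off(vm_name)                             # 1. Power OFF the VDI VM
    client.delete_os_vmdk(vm_name)                        # 2. Delete the OS VMDK
    new_vmdk = client.copy_vmdk(template_vmdk, vm_name)   # 3. Copy OS VMDK from the template
    client.attach_vmdk(vm_name, new_vmdk)                 # 4. Attach the new copy to the VM
    client.power_on(vm_name)                              # 5. Power ON the VDI VM
    return new_vmdk
```

Note that step 3 is the expensive part during bulk repaves: every target VM reads the same source template, which lives on a single host.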

Environment

VMware vSAN 7.x (OSA)

VMware vSAN 8.x (OSA)

Cause

During the repave operation, a vSAN resynchronization is triggered because the storage policy assigned to the template does not match the default storage policy of the vSAN datastore. Additionally, increased read latency is observed on the host where the source template resides, as all target hosts perform read operations from the same source during the VMDK copy process. This also results in high network throughput utilization during bulk repave activities.


Resolution


  • Ensure the storage policy applied to the template matches the vSAN datastore storage policy before starting the repave process.

  • Limit the number of VMs being repaved simultaneously, keeping in mind the maxcopyvmdkpercluster limit and the fact that the source template is hosted on a single ESXi host.

  • Monitor host read latency and network throughput during repave operations to prevent saturation.