Guidelines/Limitations for Cloud Native Storage (CNS) Relocate on vSphere and known issues
Article ID: 313416


Products

VMware vCenter Server

Issue/Introduction

Symptoms:

This article describes the guidelines and limitations for the CNS Relocate Volume feature in Virtual Storage Lifecycle Management, along with its known issues.

Environment

VMware vCenter Server 7.0.x
VMware vCenter Server 8.0.x

Resolution

1. Location of the VMDK of CNS volume on the datastore

A CNS block volume is backed by a First Class Disk (FCD). After a CNS volume relocation, the FCD can be found in different directories on the target datastore. Possible locations are:

  • The “fcd” directory under the datastore. For example:
    [sharedVmfs-0] fcd/1b325557eb7f4a0991fb47ab1f32f74d.vmdk
  • The VM’s folder, when the volume is attached to a VM. For example:
    [vsanDatastore] 2661c85f-a78f-8ac2-15c5-02003c40bc9f/2volVM_1.vmdk
  • A user-defined location on the datastore.
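The datastore path format shown above can be decomposed mechanically. The helper below is a purely illustrative sketch (it is not part of any VMware SDK): it splits the standard "[datastore] folder/file.vmdk" path and reports whether the VMDK sits in the default fcd directory. Note that the path alone cannot distinguish a VM folder from a user-defined folder.

```python
def classify_fcd_path(ds_path):
    """Split a vSphere datastore path ("[datastore] folder/file.vmdk")
    and report whether the FCD sits in the default "fcd" directory or
    in some other folder (a VM folder or a user-defined location)."""
    # "[sharedVmfs-0] fcd/xyz.vmdk" -> ("[sharedVmfs-0", "] ", "fcd/xyz.vmdk")
    datastore, _, rel_path = ds_path.partition("] ")
    datastore = datastore.lstrip("[")
    top_dir = rel_path.split("/", 1)[0]
    kind = "fcd-directory" if top_dir.lower() == "fcd" else "vm-or-user-folder"
    return datastore, kind

print(classify_fcd_path("[sharedVmfs-0] fcd/1b325557eb7f4a0991fb47ab1f32f74d.vmdk"))
# ('sharedVmfs-0', 'fcd-directory')
print(classify_fcd_path("[vsanDatastore] 2661c85f-a78f-8ac2-15c5-02003c40bc9f/2volVM_1.vmdk"))
# ('vsanDatastore', 'vm-or-user-folder')
```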

2. Guidelines for CNS Relocate Volume

  • For Kubernetes clusters deployed in a single zone (or fault domain), the CNS solution provisions volumes on a shared datastore (compatible with the storage policy, if applicable) that is accessible to all the nodes of the Kubernetes cluster. However, the zone may contain non-shared datastores as well. While relocating a CNS volume, the user must make sure that the volume remains accessible to the Kubernetes nodes.
For example, moving a volume from shared storage to non-shared storage makes the volume inaccessible to the Kubernetes cluster and can affect the applications deployed on it.
  • For Kubernetes clusters deployed across multiple zones or fault domains, the CNS solution provisions a volume on a zonal datastore (accessible within a single zone) or a cross-zonal datastore (accessible to multiple zones), based on the storage policy. While relocating a CNS volume, the user must make sure that the volume remains accessible to the Kubernetes nodes.
For example, if a volume is placed on zonal-datastore-1, which is accessible to all the hosts in zone-1, then the volume should be relocated only to another zonal datastore (say, zonal-datastore-2) that the same hosts can access.
  • The same guideline applies to cross-zonal datastores: the user should make sure the target cross-zonal datastore has the same host accessibility as the source cross-zonal datastore. Otherwise, the Kubernetes app may not come up because the volume cannot be attached to it.
For example, moving a CNS volume from a cross-zonal datastore to a zonal datastore is problematic because the accessibility of the volume is reduced to a single zone.
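The accessibility rule in the guidelines above reduces to a set-containment check. The sketch below is hypothetical helper logic (the host and datastore names are made up), assuming you have already determined which ESXi hosts run the Kubernetes node VMs and which hosts can reach each candidate target datastore:

```python
def relocation_preserves_access(node_hosts, target_datastore_hosts):
    """A relocation target is safe only if every ESXi host that runs a
    Kubernetes node VM can also access the target datastore."""
    return set(node_hosts) <= set(target_datastore_hosts)

# Hypothetical zone-1 node VMs running on hosts esx-1 and esx-2.
node_hosts = {"esx-1", "esx-2"}

# zonal-datastore-2 is mounted on the same zone-1 hosts: safe target.
print(relocation_preserves_access(node_hosts, {"esx-1", "esx-2"}))  # True

# A datastore visible only to esx-3 in another zone: volume would
# become inaccessible to the cluster.
print(relocation_preserves_access(node_hosts, {"esx-3"}))  # False
```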

3. CNS relocation of attached volume acquires VM lock

If a volume is attached to a VM, relocating that volume acquires a lock on the VM. If the volume is attached to a Kubernetes node VM (for example, a TKG node VM or an OpenShift node VM) and the user starts relocating the volume, vSphere does not allow any other control operations on the VM for the duration of the relocation, such as attaching or detaching other volumes, migrating the VM to a different datastore, or changing the VM's configuration.

For example, say two stateful pods are running on a Kubernetes node VM (VM-1), each with one volume attached, so two volumes are attached to VM-1. If volume-1 is being relocated and, at the same time, stateful pod-2 crashes and is rescheduled by the Kubernetes scheduler to a different node VM, volume-2 cannot be detached from VM-1 until the volume-1 relocate operation completes. This is because the relocation of volume-1 holds a lock on VM-1 for the entire duration of the operation. After volume-1 is relocated successfully, the VM lock is released and the pending operation goes through. In summary, volume relocation can affect other applications in the Kubernetes layer in some cases.
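The serialization described above can be modeled with an ordinary mutex. The Python sketch below is purely illustrative (vSphere's actual locking is internal to vCenter/ESXi); it shows why the detach of volume-2 cannot proceed until the relocation of volume-1 releases the VM lock:

```python
import threading
import time

def run_scenario():
    """Model the per-VM lock: volume-1 relocation holds the lock for its
    whole duration, so a concurrent detach of volume-2 must wait."""
    vm_lock = threading.Lock()      # stands in for the lock vSphere takes on VM-1
    events = []

    def relocate(volume):
        with vm_lock:               # held for the entire relocation
            events.append(f"{volume} relocate start")
            time.sleep(0.2)         # the long-running relocation itself
            events.append(f"{volume} relocate done")

    def detach(volume):
        with vm_lock:               # blocks until the relocation releases the lock
            events.append(f"{volume} detached")

    t1 = threading.Thread(target=relocate, args=("volume-1",))
    t1.start()
    time.sleep(0.05)                # let the relocation grab the lock first
    t2 = threading.Thread(target=detach, args=("volume-2",))
    t2.start()
    t1.join()
    t2.join()
    return events

print(run_scenario())
# ['volume-1 relocate start', 'volume-1 relocate done', 'volume-2 detached']
```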

4. CNS does not support PMEM datastores

The current CNS solution does not support PMEM storage. Attempting to relocate a CNS volume to a PMEM datastore fails with the error: "A specified parameter was not correct: A PMem datastore cannot be specified explicitly."

5. Cannot migrate container volume from the vSphere UI due to insufficient space

The Migrate volume dialog in the vSphere Client considers the current size of the volume and the free space available on the target datastores. If a datastore's free space is less than the volume size, that datastore cannot be selected for migration.

vSphere preserves the storage policy as it migrates the volume, but the file size of the volume might change depending on the destination datastore type and its supported features. As a result, the vSphere UI might report that a datastore has insufficient space for the migration even though enough space is actually available.
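As an illustration of the dialog's filter (the sizes and datastore names below are hypothetical), the selection logic amounts to comparing the volume's current file size against each datastore's free space, even though the file might shrink or grow on the target:

```python
def selectable_targets(volume_size_gb, datastores):
    """Mimic the Migrate volume dialog's filter: a target datastore is
    selectable only if its free space is at least the volume's current
    file size (the size on the source, not the eventual target size)."""
    return sorted(name for name, free_gb in datastores.items()
                  if free_gb >= volume_size_gb)

# Hypothetical candidates: name -> free space in GB.
candidates = {"vsanDatastore": 50, "nfs-1": 8, "vmfs-2": 10}

# A 10 GB volume: nfs-1 (8 GB free) is filtered out, even if the volume
# would actually fit there after migration.
print(selectable_targets(10, candidates))  # ['vmfs-2', 'vsanDatastore']
```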

6. Known issues with CNS Relocate Volume

  1. Unexpected change of VMDK name and file size after volume relocation on vSAN and vSAN Direct datastores (313292)
  2. Effect of changing volume accessibility by volume relocation (313312)
  3. Performance issue observed during CNS volume relocation (313310)
  4. Inability to relocate attached CNS volumes in vSphere with Tanzu (313308)
  5. CNS volume relocation from a vSAN datastore to vVol could fail (313309)