Error: "unable to format and mount device []" when attempting to create and use a new persistent volume on a TKG cluster

Article ID: 390590

Products

VMware Cloud Director

Issue/Introduction

  • Attempting to create and use a new persistent volume on a Tanzu Kubernetes Grid (TKG) cluster deployed by Cloud Director Container Service Extension (CSE).
  • The persistent volume fails to mount in the pod, and the CSI node plugin logs on the nodes where the pod is scheduled contain errors of the form (see the log-collection example after this list):

    <time_stamp> 1 node.go:520] Encountered error while processing file [/dev/sdb]: [exit status 1]
    <time_stamp> 1 node.go:521] Please check if the `disk.enableUUID` parameter is set to 1 for the VM in VC config.
    <time_stamp> 1 node.go:542] Obtained matching disk []
    <time_stamp> 1 node.go:189] Mounting device [] to folder [/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/########/globalmount] of type [ext4] with flags [[rw]]
    <time_stamp> level=info msg="attempting to mount disk" fsType=ext4 options="[rw defaults]" source= target=/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/########/globalmount
    <time_stamp> level=info msg="mount command" args="-t ext4 -o rw,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/########/globalmount" cmd=mount
    <time_stamp> level=error msg="mount Failed" args="-t ext4 -o rw,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/########/globalmount" cmd=mount error="exit status 1" output="mount: /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/########/globalmount: can't find in /etc/fstab.\n"
    <time_stamp> level=info msg="checking if disk is formatted using lsblk" args="[-n -o FSTYPE ]" disk=
    <time_stamp> level=error msg="failed to determine if disk is formatted" disk= error="exit status 32"
    <time_stamp> 1 driver.go:190] GRPC error: function [/csi.v1.Node/NodeStageVolume] req [&csi.NodeStageVolumeRequest{VolumeId:"pvc-########-####-####-####-############", PublishContext:map[string]string{"diskID":"pvc-########-####-####-####-############", "diskUUID":"########-####-####-####-############", "filesystem":"ext4", "vmID":"<node_vm_name>"}, StagingTargetPath:"/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/########/globalmount", VolumeCapability:(*csi.VolumeCapability)(0x##########), Secrets:map[string]string(nil), VolumeContext:map[string]string{"busSubType":"VirtualSCSI", "busType":"SCSI", "diskID":"urn:vcloud:disk:########-####-####-####-############", "filesystem":"ext4", "storage.kubernetes.io/csiProvisionerIdentity":"############-####-named-disk.csi.cloud-director.vmware.com", "storageProfile":"<storage_policy_name>"}, XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]: [rpc error: code = Internal desc = unable to format and mount device [] at path [/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/########/globalmount] with fs [ext4] and flags [[rw]]: [exit status 32]]

  • The Named Disk is created in Cloud Director and attached to the Node VMs.
  • The Node VMs that constitute the TKG cluster do not have the disk.enableUUID parameter set to TRUE in the vSphere UI under Edit Settings > Advanced Parameters.
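
The node plugin logs can be collected with kubectl. A minimal sketch, assuming the CSI driver's node component runs as a DaemonSet named csi-vcd-nodeplugin in the kube-system namespace (the names in your deployment may differ):

    # List the CSI node plugin pods and the nodes they run on
    kubectl get pods -n kube-system -o wide | grep csi-vcd-nodeplugin

    # View the logs of the plugin pod running on the node hosting the affected pod
    kubectl logs -n kube-system <csi_nodeplugin_pod_name> --all-containers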

Environment

  • VMware Cloud Director 10.x
  • VMware Cloud Director Container Service Extension 4.x

Cause

When Container Service Extension deploys the Control Plane and Node VMs that constitute the TKG cluster, it sets the disk.enableUUID parameter to TRUE on these VMs at the Cloud Director level, and subsequently at the vSphere level.
With this parameter enabled, vSphere exposes a consistent UUID for each virtual disk to the guest operating system, which the CSI node plugin uses to match the attached Named Disk to a block device such as /dev/sdb. If the parameter is removed or missing, the lookup returns an empty device name ([]) and volume operations fail with the errors shown above.
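
Whether the parameter is present can be checked from the vSphere side. A minimal sketch using govc against vCenter, assuming GOVC_URL and credentials are already configured and <node_vm_name> is a placeholder for the affected Node VM:

    # Dump the VM's extra configuration and check for disk.enableUUID
    govc vm.info -e <node_vm_name> | grep -i disk.enableUUID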

Resolution

To resolve the issue, set the disk.enableUUID parameter to TRUE in the Advanced Parameter settings of each affected Node VM. The VM must typically be powered off to edit this setting, and the parameter takes effect at the next power-on.

For details on adding Advanced Parameters, see the vSphere documentation topic Configure Virtual Machine Advanced File Parameters.
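
If setting the parameter through the vSphere UI is impractical, it can also be applied with govc; a sketch assuming the same environment as above, with <node_vm_name> as a placeholder. Power the VM off before making the change and back on afterwards, since the parameter is read at power-on:

    # Set disk.enableUUID in the VM's extra configuration
    govc vm.change -vm <node_vm_name> -e disk.enableUUID=TRUE

    # Confirm the setting is present
    govc vm.info -e <node_vm_name> | grep -i disk.enableUUID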