In certain vSphere Kubernetes Service (VKS) environments, guest cluster pods may fail when attempting to mount volumes. This typically manifests as repeated FailedMount warnings in pod events, with errors such as:
MountVolume.SetUp failed for volume "pvc-xxxx" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
MountVolume.SetUp failed for volume "pvc-xxxx" : rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL
Additionally, CSI pod logs may display NFS mount failures with messages like:
level=error msg="mount Failed" args="-t nfs4 -o hard,sec=sys,vers=4,minorversion=1 #####:/vsanfs/##### /var/lib/kubelet/pods/#####/volumes/kubernetes.io~csi/pvc-#####/mount" cmd=mount error="exit status 32" output="mount.nfs4: Connection refused\n"
This issue prevents pods from consuming volumes and can disrupt application availability.
VMware vSphere Kubernetes Service
The problem occurs when the Pod CIDR range of the guest cluster overlaps or conflicts with the vSAN File Services VM network CIDR.
Because the vSAN File Service VM addresses fall inside the Pod CIDR, kube-proxy or antrea-agent treats them as in-cluster pod addresses and routes the NFS traffic through the cluster overlay instead of out the node uplink. The traffic never leaves the cluster, so worker nodes cannot reach the vSAN File Service VMs, which leads to repeated mount failures and pod startup issues.
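The overlap itself can be confirmed by comparing the two ranges directly. A minimal sketch using Python's ipaddress module; the CIDR values below are placeholders, substitute the guest cluster's actual Pod CIDR and the vSAN File Services VM network:

```python
import ipaddress

def cidrs_overlap(pod_cidr: str, fs_cidr: str) -> bool:
    """Return True if the two networks share any addresses."""
    return ipaddress.ip_network(pod_cidr).overlaps(ipaddress.ip_network(fs_cidr))

# Placeholder example: a broad Pod CIDR swallows a File Services
# network carved from the same private range.
print(cidrs_overlap("192.168.0.0/16", "192.168.10.0/24"))  # True  -> conflict
print(cidrs_overlap("10.244.0.0/16", "192.168.10.0/24"))   # False -> safe
```

If the function returns True for your environment, the cluster must be re-provisioned with a non-overlapping Pod CIDR as described below.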
To resolve the issue, provision a new guest cluster with a Pod CIDR range that:
1. Provides sufficient Pod CIDRs for all nodes.
2. Does not overlap with the vSAN File Services VM network (x.x.x/24).
Once the new cluster is provisioned, redeploy the application.
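For guest clusters defined through the TanzuKubernetesCluster API, the Pod CIDR is set at provisioning time under the network settings. The sketch below is illustrative only: the cluster name, namespace, and CIDR values are placeholders, and the field layout should be verified against the API version in use in your environment:

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: guest-cluster-01        # hypothetical cluster name
  namespace: my-namespace       # hypothetical vSphere Namespace
spec:
  settings:
    network:
      pods:
        cidrBlocks:
          - 10.244.0.0/16       # example Pod CIDR chosen to avoid the
                                # vSAN File Services VM network
      services:
        cidrBlocks:
          - 10.96.0.0/12        # example Services CIDR
  # topology, distribution, and storage settings omitted
```

The key point is that the pods.cidrBlocks range must not overlap the vSAN File Services VM network identified in the Cause section.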
After re‑provisioning with a non‑conflicting Pod CIDR, worker nodes can properly route traffic to vSAN File Service IPs, allowing successful NFS mounts and restoring pod functionality.