Using VSAN File System with Container Service Extension in Cloud Director to create RWX Volumes
This article describes the steps to configure Persistent Volumes that can be used for Read-Write-Many (RWX) storage in Kubernetes clusters managed by Container Service Extension in Cloud Director.
The configuration requires the K8s cluster to have network access to vCenter and the VSAN FS services, and the tenant users need vCenter credentials for the Persistent Volume configuration. Therefore, this architecture is not recommended in shared vSphere environments, but rather for dedicated vSphere clusters per tenant. Figure 1 demonstrates how multiple users from a customer organization with multiple Kubernetes Clusters access VSAN FS.
Figure 1 RWX with VSAN FS for dedicated VSAN Instance
Requirements
RWX Volumes on VSAN FS require the following minimum releases of vSphere and VCD. Please follow the product interoperability matrix recommendations for all other infrastructure components.
• vSphere 7+ with VSAN File Services enabled.
• VCD 10.4+ with Container Service Extension 4.0+
Provider Setup steps
Configure an NFS share on VSAN FS that will be used by the K8s cluster.
Create a vCenter user for the vSphere Container Storage Plug-in.
Tenant steps
1) Deploy a K8s cluster in Cloud Director (using the Container Service Extension UI)
2) Install vSphere Container Storage Plug-in
Deploy the vSphere Container Storage Plug-in into the cluster, following the steps described in
https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/3.0/vmware-vsphere-csp-getting-started/GUID-6DBD2645-FFCF-4076-80BE-AD44D7141521.html
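As part of the plug-in installation, the CSI driver reads its vCenter connection details from a csi-vsphere.conf file stored as a Kubernetes secret. A minimal sketch for a vSAN FS setup follows; all values are placeholders for your environment, and the user is the one created in the provider setup step above:

```ini
[Global]
# placeholder: a unique ID for this Kubernetes cluster
cluster-id = "my-k8s-cluster"

# placeholder vCenter FQDN or IP
[VirtualCenter "vcenter.example.com"]
# the user created for the vSphere Container Storage Plug-in
user = "csi-user@vsphere.local"
password = "REPLACE_ME"
port = "443"
# placeholder datacenter name
datacenters = "Datacenter-1"
```

The file is then stored as a secret for the driver, e.g. kubectl create secret generic vsphere-config-secret --from-file=csi-vsphere.conf --namespace=vmware-system-csi, as described in the linked documentation.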
Create a StorageClass for the vSAN file service. Create sc.yaml with the following content:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan-file-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vSAN Default Storage Policy"
  csi.storage.k8s.io/fstype: nfs4
Run kubectl apply -f sc.yaml
Create a PersistentVolumeClaim that requests the ReadWriteMany access mode. Create pvc.yaml with the following content:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-vanilla-file-pvc
spec:
  storageClassName: vsan-file-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Apply it and verify that the claim is bound and a volume was provisioned:
kubectl apply -f pvc.yaml
kubectl describe pvc example-vanilla-file-pvc
kubectl get pv
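Before applying the manifests, their key fields can be sanity-checked programmatically. The sketch below is illustrative only (not part of the vSphere CSI tooling); the dicts mirror sc.yaml and pvc.yaml above:

```python
# Illustrative pre-flight check for the manifests above (not official tooling).
# The dicts mirror sc.yaml and pvc.yaml from this article.

sc = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "vsan-file-sc"},
    "provisioner": "csi.vsphere.vmware.com",
    "parameters": {
        "storagepolicyname": "vSAN Default Storage Policy",
        "csi.storage.k8s.io/fstype": "nfs4",
    },
}

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "example-vanilla-file-pvc"},
    "spec": {
        "storageClassName": "vsan-file-sc",
        "accessModes": ["ReadWriteMany"],
        "resources": {"requests": {"storage": "1Gi"}},
    },
}

def check_rwx_pair(sc, pvc):
    """Return a list of problems that would prevent RWX provisioning."""
    problems = []
    if sc["provisioner"] != "csi.vsphere.vmware.com":
        problems.append("StorageClass must use the vSphere CSI provisioner")
    if sc["parameters"].get("csi.storage.k8s.io/fstype") != "nfs4":
        problems.append("RWX volumes on vSAN FS require fstype nfs4")
    if pvc["spec"].get("storageClassName") != sc["metadata"]["name"]:
        problems.append("PVC does not reference the StorageClass")
    if "ReadWriteMany" not in pvc["spec"].get("accessModes", []):
        problems.append("PVC must request the ReadWriteMany access mode")
    return problems

print(check_rwx_pair(sc, pvc))  # an empty list means the pair is consistent
```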
‘Untaint’ the Control Plane Nodes
If the cluster has no dedicated worker nodes, remove the control-plane taint so that the test pods can be scheduled (replace the node name with your control plane node):
kubectl taint nodes jarvis-01-control-plane-node-pool-bnf9p node-role.kubernetes.io/control-plane=:NoSchedule-
Validate the Read-Write-Many Volume
To validate the functionality, create two pods that mount the same volume.
Create pod1.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: block-pod-a
spec:
  containers:
  - name: block-pod-a
    image: "k8s.gcr.io/busybox"
    volumeMounts:
    - name: block-vol
      mountPath: "/mnt/volume1"
    command: [ "sleep", "1000000" ]
  volumes:
  - name: block-vol
    persistentVolumeClaim:
      claimName: example-vanilla-file-pvc
Apply it and verify the pod is running:
kubectl apply -f pod1.yaml
kubectl describe pod block-pod-a
Create the second Pod. Create pod2.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: block-pod-b
spec:
  containers:
  - name: block-pod-b
    image: "k8s.gcr.io/busybox"
    volumeMounts:
    - name: block-vol
      mountPath: "/mnt/volume1"
    command: [ "sleep", "1000000" ]
  volumes:
  - name: block-vol
    persistentVolumeClaim:
      claimName: example-vanilla-file-pvc
Apply it and verify the pod is running:
kubectl apply -f pod2.yaml
kubectl describe pod block-pod-b
Create a file on the volume in Pod A (the file name is an example):
kubectl exec -it block-pod-a -- /bin/sh
echo "hello from pod A" > /mnt/volume1/test.txt
exit
Access the file from Pod B and validate that it exists:
kubectl exec -it block-pod-b -- /bin/sh
cat /mnt/volume1/test.txt
exit
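Conceptually, the two pods behave like two independent processes sharing a single filesystem path. The following local sketch (illustrative only; a temporary directory stands in for the NFS-backed mount /mnt/volume1) mirrors the test above:

```python
import os
import tempfile

# Local illustration of ReadWriteMany semantics (not actual pods): the two
# "pods" below are just functions sharing one directory, the way block-pod-a
# and block-pod-b share the NFS-backed mount /mnt/volume1.
shared_volume = tempfile.mkdtemp()  # stands in for /mnt/volume1

def pod_a_write(volume):
    # equivalent of: echo "hello from pod A" > /mnt/volume1/test.txt
    with open(os.path.join(volume, "test.txt"), "w") as f:
        f.write("hello from pod A")

def pod_b_read(volume):
    # equivalent of: cat /mnt/volume1/test.txt in the second pod
    with open(os.path.join(volume, "test.txt")) as f:
        return f.read()

pod_a_write(shared_volume)
print(pod_b_read(shared_volume))  # prints: hello from pod A
```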