After upgrading Ubuntu-based classy clusters to TKGm 2.5.1 (Ubuntu 22.04), the NFS client, which used to be installed by default, is no longer available. Pods that mount NFS volumes fail, and the error is visible in the node events:
kubectl describe node $NODE
#> MountVolume.SetUp failed for volume "######" : mount failed: exit status 32 Mounting command: mount Mounting arguments: -t nfs #####:/#####/ /var/lib/kubelet/pods/####/volumes/kubernetes.io~nfs/#### Output: mount: /var/lib/kubelet/pods/####/volumes/kubernetes.io~nfs/####: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
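To confirm that the NFS mount helper is indeed missing, you can check directly on the affected worker node (a quick check; the capv user is the usual TKGm node user and the node IP is a placeholder):
ssh capv@${WORKER_NODE_IPADDRESS}
dpkg -l nfs-common   # reports "no packages found matching nfs-common" when the client is missing
ls /sbin/mount.nfs   # "No such file or directory" confirms the missing mount helper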
The NFS client packages were removed in 2.5.1 because NFS relies on rpcbind (bound to port 111 by default), which was disabled to comply with the Ubuntu CIS Benchmark C-2.3.6.
TKGm does not support external NFS, and so far there have been no reports of its usage; NFS is only supported at the datastore level via CSI.
Reinstalling the packages by default would regress the security hardening and is out of scope.
There are two alternatives to work around the hardening change:
1) Bring Your Own Image: change the variables passed (adding these packages) and export a new template that can be used. This is the more complex procedure and is not covered in this article.
2) Create a custom ClusterClass that installs the package before the node joins the cluster (preKubeadmCommands). The procedure is below.
Prerequisites
TKGm management cluster is created (tested with 2.5)
ytt installed
kubectl installed and set to management cluster context
Tanzu CLI
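A quick sanity check before starting (the context name in the comment is only an example):
tanzu version
ytt version
kubectl config current-context   # should point to the management cluster, e.g. tkg-mgmt-admin@tkg-mgmt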
This process involves creating a custom ClusterClass that allows deploying worker nodes in a workload cluster with the NFS common utilities installed. Creating custom ClusterClasses is roughly documented here.
cp ~/.config/tanzu/tkg/clusterclassconfigs/tkg-vsphere-default-v1.2.0.yaml .
mkdir overlays
cd overlays
After creating the two files below, return to the top folder: cd ..
Create a file nfscommon.yaml (it adds an nfsCommon variable to the ClusterClass and a patch that appends the install commands to the workers' preKubeadmCommands):
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind":"ClusterClass"})
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
  name: tkg-vsphere-default-v1.2.0-extended
spec:
  #@overlay/match missing_ok=True
  variables:
  #@overlay/append
  - name: nfsCommon
    required: false
    schema:
      openAPIV3Schema:
        type: boolean
        default: false
  #@overlay/match expects=1
  patches:
  #@overlay/append
  - name: nfs
    enabledIf: '{{ .nfsCommon }}'
    definitions:
    - selector:
        apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
        kind: KubeadmConfigTemplate
        matchResources:
          machineDeploymentClass:
            names:
            - tkg-worker
      jsonPatches:
      - op: add
        path: /spec/template/spec/preKubeadmCommands/-
        value: |
          sudo add-apt-repository -s https://mirrors.bloomu.edu/ubuntu/ jammy main -y && \
          sudo apt update -y && \
          sudo apt-get install -y libnfsidmap1=1:2.6.1-1ubuntu1 --allow-downgrades --allow-change-held-packages && \
          sudo apt-get install -y nfs-common --allow-change-held-packages
Create a file filter.yaml (it removes everything except the ClusterClass object from the default manifest):
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.not_op(overlay.subset({"kind": "ClusterClass"})),expects="0+"
---
#@overlay/remove
ytt -f tkg-vsphere-default-v1.2.0.yaml -f overlays/filter.yaml > default_cc.yaml
ytt -f default_cc.yaml -f overlays/ > custom_cc.yaml
kubectl apply -f custom_cc.yaml
Verify that both ClusterClasses are now present in the management cluster with kubectl get clusterclass:
NAME                                  AGE
tkg-vsphere-default-v1.2.0            21h
tkg-vsphere-default-v1.2.0-extended   20h
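Optionally, compare the two generated manifests to confirm that the overlay only added the nfsCommon variable and the nfs patch:
diff default_cc.yaml custom_cc.yaml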
To create a new cluster with the custom class, layer the cluster_overlay.yaml shown below over the generated cluster manifest with ytt (a sketch of the full flow follows the overlay):
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind":"Cluster"})
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
spec:
  topology:
    class: tkg-vsphere-default-v1.2.0-extended
    variables:
    - name: nfsCommon
      value: true
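A sketch of the flow for a new cluster, assuming a cluster configuration file named workload-1.yaml (the file and cluster names are placeholders):
tanzu cluster create --file workload-1.yaml --dry-run > default_cluster.yaml
ytt -f default_cluster.yaml -f cluster_overlay.yaml > custom_cluster.yaml
tanzu cluster create -f custom_cluster.yaml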
For existing clusters the procedure is similar: two fields have to be updated in the Cluster object's spec, the topology class (pointing it to the extended ClusterClass) and the nfsCommon variable under topology.variables:
spec:
  ...
  topology:
    class: tkg-vsphere-default-v1.2.0-extended
    controlPlane:
      metadata:
        annotations:
          run.tanzu.vmware.com/resolve-os-image: image-type=ova,os-name=ubuntu
      replicas: 1
    variables:
    - name: nfsCommon
      value: true
    - name: cni
      value: antrea
    - name: controlPlaneCertificateRotation
    ...
The update triggers an immediate rollout of the worker nodes; the workers are recreated with nfs-common installed.
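A sketch of applying and monitoring the change, assuming a cluster named workload-1 in the default namespace (both are placeholders):
kubectl edit cluster workload-1 -n default    # update topology.class and add the nfsCommon variable
kubectl get machinedeployments -n default     # the worker MachineDeployment begins rolling out
kubectl get machines -n default -w            # watch old worker Machines being replaced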
Troubleshooting: if the worker nodes are recreated but the NFS client is still not installed, verify on the node whether the package was installed successfully, as in the example check below.
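An example check on a recreated worker (the capv user and node IP are placeholders; the cloud-init output log normally contains the output of the preKubeadmCommands):
ssh capv@${WORKER_NODE_IPADDRESS}
dpkg -l nfs-common                               # the package should be listed with status "ii"
which mount.nfs                                  # should print /sbin/mount.nfs
sudo grep -i nfs /var/log/cloud-init-output.log  # shows the apt output from the preKubeadmCommands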
An external git page was used as a guide for this KB.
In an emergency, you can manually install the NFS packages on the worker node as a temporary fix (the change is lost when the node is recreated):
ssh capv@${WORKER_NODE_IPADDRESS}
sudo add-apt-repository -s https://mirrors.bloomu.edu/ubuntu/ jammy main -y
sudo apt update -y
sudo apt-get install -y libnfsidmap1=1:2.6.1-1ubuntu1 --allow-downgrades --allow-change-held-packages
sudo apt-get install -y nfs-common --allow-change-held-packages