This article provides a daemonset that can be applied to Guest Clusters to update the vmware-system-user password expiry, allowing SSH sessions to Guest Cluster nodes when required.
Symptoms:
VMware vSphere 7.0 with Tanzu
To change the vmware-system-user password expiry on existing clusters, create the following daemonset manifest:
# cat <<EOF>> pass_expiry.yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cluster-admin
spec:
  selector:
    matchLabels:
      tkgs: cluster-admin
  template:
    metadata:
      labels:
        tkgs: cluster-admin
    spec:
      volumes:
      - name: hostfs
        hostPath:
          path: /
      initContainers:
      - name: init
        image: ubuntu:23.04
        command:
        - /bin/sh
        - -xc
        - |
          chroot /host chage -l vmware-system-user \
          && chroot /host chage -m 0 -M -1 vmware-system-user \
          && echo expiry updated \
          && chroot /host chage -l vmware-system-user \
          && echo done
        volumeMounts:
        - name: hostfs
          mountPath: /host
      containers:
      - name: sleep
        image: localhost:5000/vmware.io/pause:3.6
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoExecute
        key: node.alpha.kubernetes.io/notReady
        operator: Exists
      - effect: NoExecute
        key: node.alpha.kubernetes.io/unreachable
        operator: Exists
      - effect: NoSchedule
        key: kubeadmNode
        operator: Equal
        value: master
EOF
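For reference, the init container chroots into the node's root filesystem (mounted at /host) and adjusts the password ageing of the vmware-system-user account with chage. Run directly on a node, the equivalent commands would look like the sketch below (running them by hand is optional and assumes root access on the node):

chage -l vmware-system-user             # list the current password ageing/expiry settings
chage -m 0 -M -1 vmware-system-user     # minimum age 0 days, maximum age -1 = password never expires
chage -l vmware-system-user             # confirm the updated settings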
Log in to the Guest Cluster and apply the daemonset:
kubectl vsphere login --insecure-skip-tls-verify --server <SUPERVISOR_VIP> --tanzu-kubernetes-cluster-namespace <GUEST_CLUSTER_NAMESPACE> --tanzu-kubernetes-cluster-name <GUEST_CLUSTER_NAME>
kubectl apply -f pass_expiry.yaml
Verify that the daemonset was created and that its pods are running on every node:
kubectl get ds cluster-admin -n <namespace>
kubectl get pods -n <namespace> | grep cluster-admin
kubectl describe pod <cluster-admin-pod> -n <namespace>
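Optionally, confirm from the init container's log that the expiry was updated; the pod name placeholder below is the same one used above:

kubectl rollout status ds/cluster-admin -n <namespace>
kubectl logs <cluster-admin-pod> -c init -n <namespace>

The log should show "expiry updated", the new chage -l output, and "done".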
On NSX Application Platform (NAPP) clusters, where kubectl is invoked through the napp-k alias, edit pass_expiry.yaml if required and apply it the same way:
vi pass_expiry.yaml
napp-k apply -f pass_expiry.yaml
napp-k get ds cluster-admin
napp-k get pods | grep cluster-admin
napp-k describe pod <cluster-admin-pod>
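Assuming napp-k simply wraps kubectl for the NAPP cluster, the init container log can be checked in the same way:

napp-k logs <cluster-admin-pod> -c init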
The toleration below allows the daemonset pods to be scheduled on the control plane nodes, so the password for the vmware-system-user account is also set to never expire on the control plane (master) nodes:
- effect: "NoSchedule"
  key: "node-role.kubernetes.io/control-plane"
  operator: "Exists"
The control plane node is tainted with the node-role.kubernetes.io/control-plane key, rather than being specifically tainted as a master node. All other tolerations are left unchanged.
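To check which taint key is actually present on a given cluster's control plane node, the taints can be listed directly (the node name is a placeholder):

kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
kubectl describe node <node-name> | grep Taints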
If the daemonset pods fail to start, check the pod and daemonset events. An image pull failure appears as:
Back-off pulling image "localhost:5000/vmware/pause:3.6"
On clusters that enforce the restricted Pod Security Standard on the target namespace, the daemonset controller cannot create the pods at all and reports events similar to:
Events:
  Type     Reason        Age  From                  Message
  ----     ------        ---- ----                  -------
  Warning  FailedCreate  49m  daemonset-controller  Error creating: pods "cluster-admin-bdrkf" is forbidden: violates PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "init", "sleep" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "init", "sleep" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "hostfs" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "init", "sleep" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "init", "sleep" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
In that situation, apply the Pod Security setting below to the 'default' namespace and re-apply the daemonset to fix the issue:
kubectl label --overwrite ns default pod-security.kubernetes.io/enforce=privileged
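To confirm the label and re-apply the daemonset:

kubectl get ns default --show-labels
kubectl apply -f pass_expiry.yaml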