$ kubectl get clusterrole system:public-info-viewer -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2023-09-13T16:19:37Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:public-info-viewer
  resourceVersion: "87"
  uid: ########-####-####-####-#########
rules:
- nonResourceURLs:
  - /healthz
  - /livez
  - /readyz
  - /version
  - /version/
  verbs:
  - get
The Kubernetes documentation on auto-reconciliation (https://kubernetes.io/docs/reference/access-authn-authz/rbac/#auto-reconciliation) describes the following approach. Read the warning and caution in the documentation carefully before proceeding with the change:

"At each start-up, the API server updates default cluster roles with any missing permissions, and updates default cluster role bindings with any missing subjects. This allows the cluster to repair accidental modifications, and helps to keep roles and role bindings up-to-date as permissions and subjects change in new Kubernetes releases."

"To opt out of this reconciliation, set the rbac.authorization.kubernetes.io/autoupdate annotation on a default cluster role or rolebinding to false. Be aware that missing default permissions and subjects can result in non-functional clusters."
In testing, the applied change survived both a cluster upgrade and a recreation of the control plane nodes: the modified role was preserved and was not auto-updated.
Edit the cluster role, change the rbac.authorization.kubernetes.io/autoupdate annotation from "true" to "false", and remove /version and /version/ from the rules:
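If you prefer a non-interactive change over kubectl edit, the same modification can be sketched with kubectl annotate and a JSON patch. This assumes the nonResourceURLs rule is the first (and only) entry under rules, i.e. /rules/0, as in the output above; verify the index against your cluster's object before applying.

```shell
# Opt the role out of API-server auto-reconciliation
kubectl annotate clusterrole system:public-info-viewer \
  rbac.authorization.kubernetes.io/autoupdate=false --overwrite

# Replace the nonResourceURLs list, dropping /version and /version/
kubectl patch clusterrole system:public-info-viewer --type=json \
  -p='[{"op":"replace","path":"/rules/0/nonResourceURLs","value":["/healthz","/livez","/readyz"]}]'
```

Setting the annotation first ensures the role is already opted out the next time the API server restarts and runs reconciliation.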
kubectl get clusterrole system:public-info-viewer -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "false"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:public-info-viewer
rules:
- nonResourceURLs:
  - /healthz
  - /livez
  - /readyz
  verbs:
  - get
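One way to verify the change is with kubectl auth can-i and impersonation of an unauthenticated caller. The expected answers below assume no other role binding in the cluster grants /version; the exact output depends on your cluster's bindings.

```shell
# /version should no longer be readable by unauthenticated callers
kubectl auth can-i get /version \
  --as=system:anonymous --as-group=system:unauthenticated
# expected to report "no" if no other binding grants /version

# Health endpoints should remain readable
kubectl auth can-i get /healthz \
  --as=system:anonymous --as-group=system:unauthenticated
# expected to report "yes"
```

Impersonation (--as/--as-group) requires that your own user is allowed to impersonate; cluster-admin typically is.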