When a workload cluster is attached to Tanzu Mission Control Self-Managed (TMC-SM), a rolling update of all control plane and worker nodes may be triggered automatically.
The rolling update is triggered when the Local Image Registry Configuration or Proxy Configuration settings differ between the existing cluster configuration and the TMC-SM configuration.
This behavior occurs only for ClusterClass-based clusters. It does not occur for Plan-based clusters.
Before attaching the workload cluster, verify that the "Local Image Registry" and "Proxy Configuration" settings in TMC-SM match those of the existing workload cluster.
If the configurations differ, a rolling update will be triggered to apply those settings.
To avoid triggering a rolling update, refrain from configuring the "Local Image Registry" or "Proxy Configuration" settings via TMC-SM.
# How to check existing cluster settings
kubectl config use-context <Management Cluster Context>
kubectl get cluster -A
CLUSTER=<Target Workload Cluster name>
NAMESPACE=<Target Workload Cluster namespace>
# Check "Local Image Registry Configuration"
kubectl get cluster ${CLUSTER} -n ${NAMESPACE} -o json | jq '.spec.topology.variables[] | select(.name=="imageRepository")'
# Check "Proxy Configuration"
kubectl get cluster ${CLUSTER} -n ${NAMESPACE} -o json | jq '.spec.topology.variables[] | select(.name=="proxy")'
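If you want to try the jq filters without access to a management cluster, the sketch below runs them against a small sample of the Cluster object's JSON. The variable names `imageRepository` and `proxy` come from the commands above; the field values in the sample payload are illustrative assumptions, not the real schema of your cluster:

```shell
# Sketch: apply the same jq filters to a sample cluster JSON.
# Real input comes from `kubectl get cluster ${CLUSTER} -n ${NAMESPACE} -o json`;
# the values below are made up for illustration only.
cat <<'EOF' > /tmp/cluster-sample.json
{
  "spec": {
    "topology": {
      "variables": [
        {"name": "imageRepository", "value": {"host": "registry.example.com"}},
        {"name": "proxy", "value": {"httpProxy": "http://proxy.example.com:3128"}}
      ]
    }
  }
}
EOF

# Same filters as in the commands above, applied to the sample file:
jq '.spec.topology.variables[] | select(.name=="imageRepository")' /tmp/cluster-sample.json
jq '.spec.topology.variables[] | select(.name=="proxy")' /tmp/cluster-sample.json
```

If either filter prints nothing for your real cluster, that variable is not set in the cluster's topology, and configuring it via TMC-SM would count as a difference that triggers a rolling update.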