Workload Cluster nodes are recreated after attaching the cluster in TMC-SM

Article ID: 415656


Products

VMware Tanzu Mission Control

Issue/Introduction

When a workload cluster is attached to Tanzu Mission Control Self-Managed (TMC-SM), a rolling update is automatically triggered for all control plane and worker nodes.

TMC-SM Official Document - Attaching an existing cluster

Environment

  • Tanzu Kubernetes Grid
  • Tanzu Mission Control Self-Managed

Cause

A rolling update is triggered when Local Image Registry Configuration or Proxy Configuration settings differ between the existing cluster configuration and the TMC-SM configuration.
This behavior occurs only for ClusterClass-based clusters. It does not occur for Plan-based clusters.
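
For ClusterClass-based clusters, these settings are carried as variables under spec.topology.variables in the Cluster object, and Cluster API's topology controller reconciles any change to those variables by rolling out new machines. The excerpt below is an illustrative sketch only; all values are placeholders, and the exact value schemas can vary by TKG version.

spec:
  topology:
    variables:
    - name: imageRepository
      value:
        host: registry.example.com                 # placeholder registry hostname
    - name: proxy
      value:
        httpProxy: http://proxy.example.com:3128   # placeholder
        httpsProxy: http://proxy.example.com:3128  # placeholder
        noProxy:
        - .svc.cluster.local                       # placeholder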

Resolution

Before attaching the workload cluster, verify that the "Local Image Registry" and "Proxy Configuration" settings in TMC-SM match those of the existing workload cluster.
If the configurations differ, a rolling update will be triggered to apply those settings.
To avoid triggering a rolling update, do not configure the "Local Image Registry" or "Proxy Configuration" settings via TMC-SM.

# How to check the existing cluster settings
kubectl config use-context <Management Cluster Context>

# List all workload clusters and note the target cluster's name and namespace
kubectl get cluster -A
CLUSTER=<Target Workload Cluster name>
NAMESPACE=<Target Workload Cluster namespace>

# Check "Local Image Registry Configuration"
kubectl get cluster ${CLUSTER} -n ${NAMESPACE} -o json | jq '.spec.topology.variables[] | select(.name=="imageRepository")'

# Check "Proxy Configuration"
kubectl get cluster ${CLUSTER} -n ${NAMESPACE} -o json | jq '.spec.topology.variables[] | select(.name=="proxy")'
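
For reference, the proxy check returns output similar to the following when the variable is set. No output means the variable is not configured on the cluster, so configuring it through TMC-SM during attach would introduce a difference and trigger the rolling update. The values below are placeholders, and the value schema may vary by TKG version.

# Example output for the "Proxy Configuration" check (illustrative only)
{
  "name": "proxy",
  "value": {
    "httpProxy": "http://proxy.example.com:3128",
    "httpsProxy": "http://proxy.example.com:3128",
    "noProxy": [
      ".svc.cluster.local"
    ]
  }
}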