When a cluster is deployed or attached with a NoProxy configuration provided in the TMC UI, the sync-agent pod fails to start and logs errors similar to the following:
I0401 17:10:49.146983 1 request.go:601] Waited for 1.043432954s due to client-side throttling, not priority and fairness, request: GET:https://100.10.0.0:443/apis/crd.antrea.io/v1alpha2?timeout=32s
{"component":"resource-limit-map","level":"info","msg":"starting new resource-limit-map","time":"2025-04-01T17:10:49Z"}
{"level":"info","msg":"using explicit proxy for TMC communication","sub-component":"tmc-context","time":"2025-04-01T17:10:49Z"}
{"error":"could not connect to TMC endpoint: context deadline exceeded","level":"error","msg":"unable to start resourceLimitMap","time":"2025-04-01T17:11:30Z"}
You may also notice that, when attempting to view the installed Tanzu packages for the affected cluster, the following message appears in the TMC UI:
Tanzu standard repository packages are currently unavailable for the selected cluster. The repository may not be syncing properly. You can check the status of the repository here
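You can also check the repository reconciliation status from the command line. The repository name and namespace shown below are illustrative and may differ in your environment:

kubectl get packagerepositories -A
kubectl describe packagerepository tanzu-standard -n tanzu-package-repo-global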
This issue affects TMC Self-Managed versions older than 1.4.1.
This issue is caused by a known limitation in TMC Self-Managed, where TMC extensions do not honour the NoProxy list if it is provided during cluster configuration.
The issue is resolved in TMC Self-Managed 1.4.1 and later.
As a workaround, you can follow the steps below:
On the affected cluster, find the "tmc-proxy-secret" secret:
kubectl get secret -n vmware-system-tmc | grep tmc-proxy-secret
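Optionally, before removing the secret, you can dump its data keys to confirm which proxy values it carries (the values are base64-encoded):

kubectl get secret -n vmware-system-tmc tmc-proxy-secret -o jsonpath='{.data}'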
Take a backup of the secret, and then delete it:
kubectl get secret -n vmware-system-tmc tmc-proxy-secret -o yaml > tmc-proxy-secret-backup.yaml
kubectl delete secret -n vmware-system-tmc tmc-proxy-secret
Once the secret is deleted, the TMC pods will begin to deploy, and the cluster will successfully initialize with TMC.
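You can watch the pods in the TMC system namespace to confirm that they reach the Running state:

kubectl get pods -n vmware-system-tmc -w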
After upgrading to TMC Self-Managed 1.4.1 or later, you can reapply the original secret:
kubectl apply -f tmc-proxy-secret-backup.yaml
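You can then confirm that the secret has been restored:

kubectl get secret -n vmware-system-tmc tmc-proxy-secret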