Issue attaching GKE Private cluster with TMC

Article ID: 327451



This article helps avoid an issue when attaching a GKE private cluster to Tanzu Mission Control (TMC).

The cluster shows a healthy status in TMC and all agents report UP, but nothing appears under the Node, Namespace, and Workload tabs.



The APIService on the cluster fails its discovery check because it cannot reach the backend service cluster-auth-pinniped-api. The Pinniped pods expose port 8443, but on a private GKE cluster the default firewall rules allow the API server to reach only ports 10250 and 443 on the worker nodes.
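To confirm this is the failure mode, you can inspect the APIService status on the cluster. This is a diagnostic sketch; it assumes kubectl is already pointed at the affected cluster:

```shell
# List APIServices and filter for the Pinniped entry. On an affected cluster,
# the AVAILABLE column for this entry shows False (FailedDiscoveryCheck),
# with vmware-system-tmc/cluster-auth-pinniped-api in the SERVICE column.
kubectl get apiservice | grep pinniped
```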


You need to create a firewall rule in GCP that allows the control plane (master) to reach port 8443 on the worker nodes.

gcloud compute firewall-rules create pinniped-apiservice-rule \
  --source-ranges $MASTER_IPV4_CIDR \
  --target-tags $WORKER_NODES_TAG \
  --allow TCP:8443 --network $NETWORK

Here $MASTER_IPV4_CIDR is the control plane's IPv4 CIDR block (the value passed as --master-ipv4-cidr when the private cluster was created), $WORKER_NODES_TAG is the network tag applied to the worker nodes, and $NETWORK is the cluster's VPC network.

Alternatively, you can edit the existing firewall rule named gke-<cluster-name>-<uid>-master to allow TCP port 8443 alongside the existing ports 10250 and 443.
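Assuming the rule keeps its default name, editing it might look like the following sketch; <cluster-name> and <uid> are placeholders for your cluster's actual rule name:

```shell
# Inspect the current rule first to see its existing allowed ports.
gcloud compute firewall-rules describe gke-<cluster-name>-<uid>-master

# Re-set the allowed ports. Note that --allow replaces the rule's entire
# allowed list, so the existing 10250 and 443 must be restated along with 8443.
gcloud compute firewall-rules update gke-<cluster-name>-<uid>-master \
  --allow tcp:10250,tcp:443,tcp:8443
```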

Additional Information:
If the private cluster has "Public endpoint access enabled", you only need to disable "authorized networks" to attach the cluster without failure.
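If you manage the cluster with gcloud, disabling authorized networks might look like this sketch; <cluster-name> and <zone> are placeholders for your cluster's name and location:

```shell
# Disable the master authorized networks restriction so TMC can reach
# the cluster's public endpoint.
gcloud container clusters update <cluster-name> \
  --zone <zone> \
  --no-enable-master-authorized-networks
```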

Additional Information

Until the firewall rule is in place, the APIService below fails with a "FailedDiscoveryCheck" error:

vmware-system-tmc/cluster-auth-pinniped-api   False (FailedDiscoveryCheck)