Cannot login to vSphere with Tanzu TKC guest cluster after renewing vCenter machine certificates with error "the server has asked for the client to provide credentials"

Article ID: 370252

Products

  • vSphere with Tanzu
  • VMware Tanzu Kubernetes Grid Service (TKGs)

Issue/Introduction

After renewing vCenter machine certificates, connections to a Tanzu Kubernetes guest cluster fail.

The following error is returned on the jumpbox when attempting to access the guest cluster:

<user>@tanzu-virtual-machine:/tmp$ kubectl get nodes
error: You must be logged in to the server (the server has asked for the client to provide credentials)

Running the same command with increased verbosity (-v 10) shows the underlying 401 Unauthorized response:

<user>@tanzu-virtual-machine:/tmp$ kubectl get nodes -v 10

{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
E0613 09:19:04.332631 3236672 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
I0613 09:19:04.332698 3236672 cached_discovery.go:120] skipped caching discovery info due to the server has asked for the client to provide credentials
I0613 09:19:04.332877 3236672 helpers.go:246] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "the server has asked for the client to provide credentials",
  "reason": "Unauthorized",
  "details": {
    "causes": [
      {
        "reason": "UnexpectedServerResponse",
        "message": "unknown"
      }
    ]
  },
  "code": 401
}]
error: You must be logged in to the server (the server has asked for the client to provide credentials)
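
To confirm the failure originates from the authentication service inside the guest cluster rather than from the jumpbox, the guest-cluster-auth-svc pod logs can be inspected. A minimal sketch, assuming SSH access to a guest cluster control plane node (the pod name suffix is generated, so list the pods first):

    # On a guest cluster control plane node
    export KUBECONFIG=/etc/kubernetes/admin.conf

    # List the authentication pods to obtain the exact pod name
    kubectl get pods -n vmware-system-auth

    # Inspect recent log output from one of the pods (pod name is illustrative)
    kubectl logs -n vmware-system-auth guest-cluster-auth-svc-xxxx --tail=50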

Environment

  • VMware vSphere with Tanzu 7.x
  • VMware vSphere with Tanzu 8.x

Cause

After renewing vCenter machine certificates, the guest-cluster-auth pods retain the old certificate thumbprint until they are restarted.
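
The mismatch can be seen by printing the thumbprint vCenter currently presents; authentication pods started before the renewal still trust the previous value. A minimal sketch, assuming vcenter.example.com is your vCenter FQDN (substitute your own):

    # Print the SHA-1 thumbprint of the certificate vCenter serves on port 443
    echo | openssl s_client -connect vcenter.example.com:443 2>/dev/null \
      | openssl x509 -noout -fingerprint -sha1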

Resolution

This will be fixed in future releases of the guest cluster.

To work around this, restart the Guest Cluster authentication pods in the Guest Cluster (a consolidated sketch of these steps follows the list below):

  1. Connect to a control plane node of the failing guest cluster via SSH and export the admin kubeconfig by running this command: export KUBECONFIG=/etc/kubernetes/admin.conf

  2. Run this command to list the auth pods: kubectl get pods -A | grep -w cluster-auth
     
  3. Delete the pod with this command:

    kubectl delete pod -n vmware-system-auth guest-cluster-auth-svc-xxxx

  4. Wait a few moments for the pod to be recreated. It will have a new age and new suffix. 

    kubectl get pods -A | grep -w cluster-auth

  5. On the jumpbox, delete the old kubeconfig file and recreate it using kubectl vsphere login.
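
For reference, the steps above can be condensed into a short shell sketch. This is illustrative only, assuming SSH access to a guest cluster control plane node; replace the placeholder pod name and the kubectl vsphere login parameters (Supervisor address, username, cluster name, namespace) with your own values:

    # --- On a guest cluster control plane node ---
    export KUBECONFIG=/etc/kubernetes/admin.conf

    # List the authentication pods, then delete each one; they are recreated automatically
    kubectl get pods -A | grep -w cluster-auth
    kubectl delete pod -n vmware-system-auth guest-cluster-auth-svc-xxxx   # repeat per pod

    # Verify the replacement pods are Running (new age, new suffix)
    kubectl get pods -A | grep -w cluster-auth

    # --- On the jumpbox ---
    rm ~/.kube/config   # assumes the default kubeconfig location
    kubectl vsphere login --server=<supervisor-address> \
      --vsphere-username <user>@<domain> \
      --tanzu-kubernetes-cluster-name <cluster-name> \
      --tanzu-kubernetes-cluster-namespace <namespace>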